This ring uses gesture recognition to write words and numbers

This is a guest post by Kayla Matthews, a biometrics and technology writer.

FingerSound is an exciting new system from Georgia Tech researchers, led by graduate student Cheng Zhang, the technology's creator. The technology enables people to trace numbers and letters on their fingers, with the figures appearing on a computer screen. Users can also navigate left, right, up and down with the appropriate gestures. Effectively, the device can serve as both a keyboard and a mouse for conventional computer navigation.

The system is built around a thumb ring fitted with a tiny microphone and gyroscope. Users move their thumb across their fingers, and the hardware detects the movement and converts the gestures to numbers and letters. The technology's video demonstration showcases the system in action.

The wearable ring constantly monitors input and automatically reacts when it receives the right input, using an onboard contact microphone to continuously listen to and analyze actions while canceling out ambient noise. The only sound the microphone regards is the thumb moving across the hand. The sound is then analyzed by a machine-learning classifier to help eliminate superfluous noise, and a pattern recognition algorithm identifies the gesture the thumb is making across the hand.

Potential applications for FingerSound

The video demonstration also explores potential applications for the device. One option is single-handed interaction with smartwatches, or using a mobile phone even when it's not within reach. The video shows someone using FingerSound to carry on a text conversation without even touching their mobile phone. For those with a disability that renders text messaging difficult or impossible, FingerSound can be a great tool.

The video also shows how one can use the gestures to write or respond during a meeting by remotely controlling a laptop. FingerSound can help those in a meeting respond to others via text without it becoming a distraction. Additionally, one can use FingerSound to snap pictures in Google Glass with a single gesture. Great photography can be contingent on reacting at precisely the right moment, and a single-gesture shutter makes that easier. From business to hobbies like photography, FingerSound has a variety of potentially exciting applications.

Also exciting is FingerSound's potential role in virtual reality. In theory, one can use FingerSound as a control device without removing a head-mounted display, which is often required to input commands via a keyboard or mouse. By removing the need to take off the headset, FingerSound can make virtual reality feel even more immersive.

FingerSound: a wearable wonder

FingerSound's role as a wearable device works in its favor as a non-obstructive tool. Unlike other wearable input devices that stick out like a sore thumb, FingerSound's minimalist approach is more socially acceptable. Although other gesture systems are available, they often require the user to perform gestures in the air, which can be distracting and arduous. FingerSound's simpler, more accessible approach is more appealing than the complexity of those devices. Additionally, the contact microphone and captured motion data make for an experience that values accuracy.
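The researchers have not published FingerSound's code, but the pipeline described above (windowed contact-microphone samples reduced to simple features and fed to a machine-learning classifier) can be sketched in a few lines. Everything below (the feature choices, window size, labels and classifier) is an assumption for illustration, not FingerSound's actual implementation:

```python
# Minimal sketch of a gesture-classification pipeline of the kind described:
# windowed microphone samples -> small feature vector -> ML classifier.
import numpy as np
from sklearn.svm import SVC

def features(window: np.ndarray) -> np.ndarray:
    """Reduce one audio window to a small feature vector."""
    rms = np.sqrt(np.mean(window ** 2))                       # overall energy
    zero_crossings = np.mean(np.abs(np.diff(np.sign(window)))) / 2
    peak = np.max(np.abs(window))
    return np.array([rms, zero_crossings, peak])

# Hypothetical training data: labeled windows of thumb-on-skin audio.
rng = np.random.default_rng(0)
X = np.array([features(rng.normal(scale=s, size=512))
              for s in (0.1, 0.5) for _ in range(50)])
y = np.array([0] * 50 + [1] * 50)   # e.g. 0 = "swipe left", 1 = "digit stroke"

clf = SVC(kernel="rbf").fit(X, y)

# At run time, each incoming window is classified; low-energy ambient-noise
# windows could be rejected before ever reaching the classifier.
new_window = rng.normal(scale=0.5, size=512)
print(clf.predict([features(new_window)]))
```

In a real system, the gyroscope stream would presumably be fused with the audio features before classification, which is what lets the device separate deliberate thumb strokes from ordinary hand movement.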
Other similar technologies include FingOrbits, where the wearer can control smartwatch apps by rubbing their thumb on their hand, and SoundTrak, where users can create 3D doodles or words in the air by localizing their finger position in 3D space. Previously, the same Georgia Tech researchers responsible for FingerSound showcased technology that aided smartwatch control with breaths, swipes and skin tapping. The Georgia Tech researchers are making impressive strides in resolving an issue many have with smartwatches and virtual reality: inaccessible, clunky and distracting control interfaces. The research team strives for a system that's continuously available and highly easy to use, which FingerSound appears to accomplish.

DISCLAIMER: BiometricUpdate.com blogs are submitted content. The views expressed in this blog are that of the author, and don't necessarily reflect the views of BiometricUpdate.com.
Power Usage Effectiveness (PUE) is a calculation used to measure data centre energy efficiency. It was first introduced in 2007 and endorsed by The Green Grid to promote more effective data centre energy management. Nowadays, PUE is a global standard that companies use to assess and improve their energy consumption.

To calculate PUE, a company must determine two factors:
- Their IT load. This is the energy consumed by IT equipment and is typically measured from power distribution units (PDUs).
- Total facility energy consumption, including any network equipment, cooling systems, lighting, and uninterruptible power supplies. It's usually measured from the utility meter.

PUE is an excellent tool for benchmarking data centre energy use over time, allowing companies to see the results of their changes and improvements.

PUE and Data Centre Energy Usage

Modern data centres host the critical IT infrastructure that many industries demand. While this IT equipment uses energy, it also generates a lot of heat. As data centres grow and provide more and more processing power, they must also keep their equipment in top condition through effective heat management, cooling, and use of space. Data centre management can then use PUE to monitor their energy usage. Put simply, if a data centre has a high PUE ratio, it should explore ways to optimise energy consumption. Not only does lowering PUE reduce unnecessary energy spending, but it also contributes to energy saving initiatives, reducing emissions, and providing the best possible customer experience.

What is DCiE?

Data Centre Infrastructure Efficiency (DCiE) is another method of judging a data centre's energy usage. It uses the same metrics as PUE but communicates results differently. While PUE is a ratio, DCiE shows the IT load as a percentage of total facility power usage.

How much energy does a data centre use?

Data centres use a lot of energy. This places a responsibility on data centre providers to use less energy where possible. The widespread adoption of real-time business processes, machine learning, and high-speed connectivity means existing energy consumption could increase in the future. So, with data centres accounting for 1 per cent of the world's energy consumption, it's critical to make improvements wherever possible. Both PUE and DCiE are vital tools in tackling unnecessary energy consumption.

How to calculate PUE and DCiE

The power usage effectiveness calculation is: total facility energy usage / IT equipment energy = PUE. For example, a data centre using 50,000 kWh of energy, with 40,000 kWh used on IT equipment, would have a PUE of 1.25. DCiE reverses the formula: IT equipment energy / total facility usage = DCiE. In the previous example, the data centre would have a DCiE of 80 per cent.

How PUE and DCiE help you manage costs

Calculating these metrics is incredibly useful. The higher the PUE calculation, the less efficient a data centre's energy usage is. Conversely, the closer the PUE is to 1, the more efficient its energy usage is.

How to use PUE

Data centre management should regularly measure their PUE. Energy usage can vary with the time of day and season, so taking regular measurements helps overcome these fluctuations. Over time, these results show how companies have improved and create future data centre benchmarks. According to the Uptime Institute, the average PUE value was 1.57 in 2021, meaning data centres used around 60 per cent of their energy consumption on IT equipment.
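As a quick check of the arithmetic above, here is a minimal sketch of both formulas (the function names are illustrative, not an industry API):

```python
# Minimal sketch of the PUE and DCiE formulas described above.
def pue(total_facility_kwh: float, it_load_kwh: float) -> float:
    return total_facility_kwh / it_load_kwh

def dcie(total_facility_kwh: float, it_load_kwh: float) -> float:
    return it_load_kwh / total_facility_kwh * 100  # expressed as a percentage

print(pue(50_000, 40_000))   # 1.25, matching the article's example
print(dcie(50_000, 40_000))  # 80.0 per cent

# A PUE of 1.57 (the 2021 industry average) implies an IT share of:
print(100 / 1.57)            # roughly 64 per cent of total facility energy
```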
This is a slight improvement on 2020's score of 1.59 – but a significant gain on the 2007 average score of 2.5. By collecting PUE data over time, data centres can demonstrate their progress in reducing it.

How to reduce PUE

Reducing PUE makes a data centre more economical and provides an advantage over less efficient competitors. There are several ways to reduce PUE.

Cold aisle containment - Cold aisle containment is typically the largest single contributor to PUE improvement, particularly in combination with bypass airflow avoidance (blanking plates, sealing bypass air paths, etc.).

Enhanced cooling technology - Much of a data centre's energy is spent on cooling IT equipment. Whether it's through enhanced airflow management, advanced cooling systems, or better layout, improving the cooling system can save a great deal of energy.

Make small improvements - Modest improvements add up. Using advanced power supplies, automatic lighting, and removing waste ensures that the whole facility contributes to a lower PUE.

Measure regularly - Above all, a data centre should measure its PUE regularly. Not only does this show when there is an issue, but it also provides a record of efforts and successes.

Why it's important to reduce PUE

PUE and DCiE demonstrate how efficiently a data centre uses energy. By understanding the amount of energy spent on different processes, companies can assess how to make improvements that save money, improve service, and reduce waste. Our colocation data centres are designed with efficiency in mind. Contact us today to find out more.
Course Summary

Training and experience using Microsoft Access has given students basic database management skills, such as creating tables, designing forms and reports, and building queries. In this course, students will expand their knowledge of relational database design; promote quality input from users; improve database efficiency and promote data integrity; and implement advanced features in tables, queries, forms, and reports. Extending knowledge of Access will result in a robust, functional database for users. This course focuses on optimization of an Access database, including optimizing performance and normalizing data; data validation; usability; and advanced queries, forms, and reports. This course covers Microsoft Office Specialist Program exam objectives to help you prepare for the Access Expert (Office 365 and Office 2019): Exam MO-500 certification.

Who should attend this course

This course is designed for students wishing to gain intermediate-level skills or individuals whose job responsibilities include constructing relational databases and developing tables, queries, forms, and reports in Microsoft Access for Office 365. To ensure success in this course, it is recommended that students have completed Microsoft Access for Office 365: Part 1 or possess equivalent knowledge. It is also suggested that students have end-user skills with any current version of Windows, including being able to start programs, switch between programs, locate saved files, close programs, and use a browser to access websites.

In this course, students will optimize an Access database. After completing this course, students will be able to:
- Provide input validation features to promote the entry of quality data into a database.
- Organize a database for efficiency and performance, and to maintain data integrity.
- Improve the usability of Access tables.
- Create advanced queries to join and summarize data.
- Use advanced formatting and controls to improve form presentation.
- Use advanced formatting and calculated fields to improve reports.

Outline: Microsoft Access for Office 365: Part 2 (91145)

Module 1: Promoting Quality Data Input
- Restrict Data Input through Field Validation
- Restrict Data Input through Forms and Record Validation

Module 2: Improving Efficiency and Data Integrity
- Data Normalization
- Associate Unrelated Tables
- Enforce Referential Integrity

Module 3: Improving Table Usability
- Create Lookups within a Table
- Work with Subdatasheets

Module 4: Creating Advanced Queries
- Create Query Joins
- Create Subqueries
- Summarize Data

Module 5: Improving Form Presentation
- Apply Conditional Formatting
- Create Tab Pages with Subforms and Other Controls

Module 6: Creating Advanced Reports
- Apply Advanced Formatting to a Report
- Add a Calculated Field to a Report
- Control Pagination and Print Quality
- Add a Chart to a Report
An intrusion detection system (IDS) monitors traffic on the network, searches for suspicious activities and known threats, and issues threat warnings when such items are discovered. The overall goal of an IDS is to notify the IT department when unusual behavior may be occurring in the system. The threat warning usually contains information about the source address of the intrusion, the target/victim address, and the type of suspected attack.

Enterprise IT departments can learn about potential malicious activities in their technical environment by deploying intrusion detection systems. Each IDS is programmed to analyze traffic and identify patterns. In this way, an IDS can identify traffic that may indicate various cyber attacks. In addition, intrusion detection systems can detect traffic that is problematic for specific software.

There are two types of intrusion detection systems (IDS): host-based intrusion detection systems and network-based intrusion detection systems. The key to distinguishing between these two types is where the sensors of the intrusion detection software are placed (host/endpoint or network). In addition to this classification, some experts further subdivide the intrusion detection market into boundary IDS, VM (virtual machine)-based IDS, stack-based IDS, signature-based IDS, and abnormal-behavior-based IDS. Regardless of the type, the technology usually has the same function: it is designed to detect intrusive behavior at the location of the sensor and promptly report detected abnormal behavior to a security analyst.

The standalone IDS has largely been replaced by IPS and next-generation firewalls. These tools adopt the concept of IDS and supplement it with many new functions and protection layers, including behavior analysis, web filtering, application identity management, and other controls.

In response to current cybersecurity threats, LIFARS will be offering a new and innovative Remote Cyber Security Solutions Suite: the DAILY TRUTH, a short-term incident response retainer, and remote worker cyber resilience. As the pandemic grows, threat actors are taking advantage of businesses and organizations. LIFARS offers a daily proactive hunt for potential threats living on your network. During these trying times, with your IT and cybersecurity teams diverted, LIFARS DAILY TRUTH will provide a daily cyber threat hunt on your network, on a temporary basis. It includes:
- A daily, proactive threat hunt to uncover adverse actors on your network;
- A daily report on our findings;
- Weekly and monthly reports to track the changes and progress;
- A month-to-month service designed to augment and complement your existing security department.

The mass workforce transformation that we are living through, trending toward telecommuters, increases the pool of cyber victims and encourages attackers to make the effort. Along with this shift, LIFARS is observing increased variation of attacks and increased susceptibility to attacks. LIFARS understands that it can be challenging to make a long-term commitment during such a time of uncertainty. However, one thing that is especially important NOW is to control what can be controlled and to ensure that your organization's most vital assets are protected. Furthermore, it is essential for organizations to ensure that they are ready to respond to a cyber-attack.
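Returning to the detection mechanics described earlier: the signature-based approach boils down to matching traffic against known patterns and emitting an alert that names the source, the target, and the suspected attack type. The toy sketch below illustrates that idea only; real engines such as Snort or Suricata use far richer rule languages and protocol decoding, and the patterns here are invented for illustration:

```python
# Toy illustration of signature-based detection: each signature is a byte
# pattern, and any payload containing a pattern raises an alert carrying
# the source address, target address, and suspected attack type.
SIGNATURES = {
    b"/etc/passwd": "possible path-traversal attempt",
    b"' OR '1'='1": "possible SQL injection",
    b"\x90" * 16: "possible NOP sled (shellcode)",
}

def inspect(src: str, dst: str, payload: bytes) -> None:
    for pattern, label in SIGNATURES.items():
        if pattern in payload:
            print(f"ALERT: {label} | source={src} target={dst}")

inspect("203.0.113.9", "192.0.2.10", b"GET /../../etc/passwd HTTP/1.1")
```

An anomaly-based (abnormal-behavior) IDS inverts this logic: instead of matching known-bad patterns, it models normal traffic and alerts on deviations from that baseline.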
What Is Two-Factor Authentication?

Before you can start evaluating two-factor authentication systems, you have to understand what two-factor authentication is. The basic idea of two-factor authentication as it's usually understood is "something you have plus something you know." The thing you have can be anything from a smart card to a USB key fob to your fingerprint. The thing you know is usually a conventional password. The classic example is an ATM card: to get money out of the ATM, you need both your card and your PIN (password).

Most two-factor systems rely on a password or PIN and something else, but that "something else" varies widely. In some cases, the "something else" is your computer. The system takes a hardware and software snapshot of your computer configuration and uses that information to identify you. This approach has the advantage of being as simple as using a password. The disadvantages are that the system has to go snooping around in your computer to identify you, and this setup ties your "identity" to a single computer.

One popular method is a USB device that's protected against tampering. USB fobs can have more computing power and memory than whole computing systems of a few years ago. Because USB ports are nearly universal on today's desktop computers, there's usually no need for a special reader. The need for a reader has been a problem with some smart card systems. This is one of the problems with the American Express Blue Card program, which relies on smart cards to authenticate in-store and eCommerce transactions. The users or merchants have to purchase smart card readers, and the extra expense has made the program unpopular with many customers and merchants.

In other variations, the device isn't attached to the computer at all, and the user has to manually enter the code that the device generates. Still another system uses biometric data such as fingerprints or retina patterns, suitably encrypted, to identify the user.

Windows' authentication architecture makes it easy to add new forms of authentication. Windows uses a DLL called Graphical Identification and Authentication (GINA) to connect the authentication method to the Windows authentication system. It's easy to write alternate DLLs for GINA that use any authentication method the software designer wants.

However, the nature of the second factor only scratches the surface of two-factor authentication. The cleverest, most secure method of generating the second factor is useless if the rest of the process is insecure. To judge how effective a two-factor authentication system is, you have to look at the whole system, not just the second factor. This is a problem because even the experts tend to characterize systems in terms of what they use for a second factor. While some second factors are definitely more secure than others, the rest of the system (encryption, challenge-response, and many other components) is at least as important.
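The "device that generates a code the user enters manually" is exactly the pattern later standardized as TOTP (RFC 6238, built on the HOTP construction of RFC 4226). The article predates that standard, so the sketch below is offered purely as a modern illustration of the mechanism, not as anything the article itself describes; the demo secret is arbitrary:

```python
# Minimal TOTP (RFC 6238) generator: the "something you have" is a shared
# secret held inside a token or phone app; the code rotates every 30 seconds.
import base64, hashlib, hmac, struct, time

def totp(secret_b32: str, digits: int = 6, period: int = 30) -> str:
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // period           # time-based counter
    msg = struct.pack(">Q", counter)               # 8-byte big-endian value
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                     # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# The server stores the same secret, recomputes the code, and typically
# accepts a small window of adjacent time steps to absorb clock drift.
print(totp("JBSWY3DPEHPK3PXP"))  # prints a 6-digit, time-dependent code
```

Note how this illustrates the article's closing point: the code generator is only one piece, and the security of the whole exchange still depends on how the secret is provisioned, stored, and verified.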
A worldwide security team has identified different types of medical devices, including pacemakers, that have critical security flaws which could cost the lives of their users.

Researchers Identified Critical Security Vulnerabilities in Pacemakers and Other Medical Devices

A worldwide research team has identified 10 different types of medical devices, such as pacemakers, that feature critical security vulnerabilities. The experts examined implantable pacemakers under black-box testing conditions – testing conditions where the researchers have no prior knowledge about the devices or any special access to them. They were able to use standard off-the-shelf equipment to hack the communications protocols, and they compromised the devices from a distance of 5 meters. This shows that life-threatening attacks can be carried out with relative ease. Other devices that have been tested include insulin pumps and neurostimulators.

The researchers were able not only to capture the wireless communications emitted by the devices but also to reverse engineer the protocols. This allowed them to impersonate genuine readers and perform various types of attacks. In the case of the pacemakers, this means that attackers can deliver life-threatening shocks to patients, which could lead to death.

The work is published in a research paper titled "On the (in)security of the Latest Generation Implantable Cardiac Defibrillators and How to Secure Them" which is available here. The abstract reads as follows:

Implantable Medical Devices (IMDs) typically use proprietary protocols with no or limited security to wirelessly communicate with a device programmer. These protocols enable doctors to carry out critical functions, such as changing the IMD's therapy or collecting telemetry data, without having to perform surgery on the patient. In this paper, we fully reverse-engineer the proprietary communication protocol between a device programmer and the latest generation of a widely used Implantable Cardioverter Defibrillator (ICD) which communicate over a long-range RF channel (from two to five meters). For this we follow a black-box reverse-engineering approach and use inexpensive Commercial Off-The-Shelf (COTS) equipment. We demonstrate that reverse-engineering is feasible by a weak adversary who has limited resources and capabilities without physical access to the devices. Our analysis of the proprietary protocol results in the identification of several protocol and implementation weaknesses. Unlike previous studies, which found no security measures, this article discovers the first known attempt to obfuscate the data that is transmitted over the air. Furthermore, we conduct privacy and Denial-of-Service (DoS) attacks and give evidence of other attacks that can compromise the patient's safety. All these attacks can be performed without needing to be in close proximity to the patient. We validate that our findings apply to (at least) 10 types of ICDs that are currently on the market. Finally, we propose several practical short- and long-term countermeasures to mitigate or prevent existing vulnerabilities.
On May 25, 2018, the new and considerably more onerous General Data Protection Regulation takes effect. It affects all companies managing personal information originating from countries within the European Union (EU). As the fines for non-compliance are eye-watering, and the potential 'ethical fallout' highly damaging, a successful GDPR project is an essential risk mitigation tool.

What is the GDPR?

The General Data Protection Regulation (GDPR) is a data protection regime set up to increase individuals' rights regarding their personal information (PI). It has placed a much larger emphasis and obligation on the companies that use this information, with individuals now controlling how their data can be processed, being able to demand a copy of what is being held, or even being able to demand that their records be expunged.

Key terms you need to know

GDPR is, then, a regulation governing the protection of personal information as it is controlled and processed. It is applicable to all companies globally that perform the role of a Controller or Processor of personal data pertaining to EU citizens. Therefore, defining exactly what the terms below mean is essential to achieving compliance.

The rights of the individual start at the point of consent to use their data. The article "Top UK firms' websites violate key GDPR principles" states very succinctly that companies are required to "state clearly at the point of capture how they will use an individual's data. Permission to use their data must be explicit and demonstrated through an action such as ticking a box." But protection is about more than just consent, as it now includes the right to be informed, the right of access, the right to rectification, the right to erasure, the right to restrict processing, the right to data portability, the right to object, and the right not to be subject to automated decision-making, including profiling. This covers all actions throughout the conduct of an end-to-end PI process lifecycle, and as such includes any operation performed upon personal data or sets of personal data.

There are two distinct roles in this process which require definition, as they each carry different obligations under GDPR:

Controller: A Controller is a person(s) who determines the purposes and means of processing personal data. They are the people responsible and obligated for compliance with the regulation.

Processor: A Processor is a legal person who processes personal information on behalf of the Controller.

According to international law firm White and Case, in their article "Key definitions – unlocking the EU GDPR," personal information now has a broader definition than previously used and includes all information relating to an identifiable natural person, either directly or indirectly. That means not only name, number, email address, occupation, etc., but also online identifiers such as cookie strings, IP addresses and mobile device IDs, as further outlined by European law firm Fieldfisher in their blog, "Getting to know the GDPR."

The term European citizens is defined in this context as the citizens of the 28 nations forming the EU, together with those from the nations of the European Economic Area (EEA): Norway, Iceland and Liechtenstein, and from Switzerland, which is neither in the EU nor the EEA but is part of the single market (Source: UK government website).

What are the implications?
The Information Commissioner's Office (ICO) will be able to impose fines of €20m (£17m) or up to 4% of global annual turnover, whichever is the greater, on businesses that breach the regulations. As the ICO needs to fund its operation via these fines, it seems inevitable that fines will be imposed as soon as the legislation is in force. Increasingly, breaches of data privacy laws, and in particular security breaches leading to the dissemination of personal information, are considered breaches of ethics, prompting stakeholder responses appropriate to such an event.

There should be an identifiable, documented legal basis upon which the processing of PI under GDPR is enabled, and satisfaction of an express legal requirement under the term "data protection by design and by default." In certain circumstances, privacy impact assessments (PIAs) – referred to as "Data Protection Impact Assessments" or DPIAs – are mandatory. In certain circumstances, a Data Protection Officer (DPO) must be designated. This ICO document on preparing for the GDPR holds a lot more information on this subject.

Personal information as described above should be collected with compliant consent; legacy data which is processed by a company should have specific consent registered against it, with policies and procedures in place to ensure processing matches consent. Processes should be compliant with the "data protection by design and default" concept. Data breach procedures should be in place to allow the detection, reporting and investigation of a personal data breach, and subject access request procedures should be in place to comply with the new one-month timeframe.

What actions should you be taking now?

Almost a quarter of UK and US firms are likely to miss the GDPR deadline, according to a report on ComputerWeekly.com, placing them at risk of non-compliance and exposing them to the implications outlined above. But by carrying out these four steps, you could be well under way to being compliant.

1. Set up a specific project/program

If you have not already done so, it would be advisable to set up a data privacy program or at least a GDPR project team. This blog on CFO.com entitled "The financial case for a data privacy program" provides useful guidance to assist in the project start-up. The key aspects of this project are that it should be sponsored from the top of the organization, be sufficiently funded, and have in place the governance for escalation and decision-making.

2. Education and training

You will be asking all Controllers and Processors to audit their data and procedures to enable compliance. Training these people on the importance and specifics of GDPR compliance will speed up the process and reduce the risk of breaches in future. It is a key aspect both in achieving quality processes and in the change management implicit in programs such as these. Providing mandatory education programs should be an early and engaging output from the project team.

3. Data and process audits

Locate existing PI data and assess the qualities of the data and the processes that surround it as a discovery exercise. It should highlight whether existing data is compliant and whether existing processes allow for compliance. A data and process owner should be identified and any non-compliance gaps fully documented, with a fully costed and resourced plan in place to allow corrections. All plans should have the backing of the project team.
4. Systems and process changes

Undertaking the exercise to close all non-compliance gaps using the plans may require new or updated processes, new or updated application solutions, and new or updated roles. Depending on the size of the non-compliance gaps, it may also require temporary roles to backfill whilst the exercise is being completed.

GDPR comes into force on May 25, 2018. It is a regulation governing the protection of personal information as it is controlled and processed. The significance of non-compliance is severe and wide-ranging. Nearly a quarter of companies are currently at risk of non-compliance as they have no project in place to deal with it. Using the four steps above reduces this risk considerably. I have included below some useful links to help you on your journey, and I would love to hear from you if you have any questions or concerns in relation to GDPR.

I welcome comments on this or any other topic concerning finance, HCM, CSR and business strategy. Connect, discuss, and explore using any of the following means:
- Twitter: @stevetreagust
- Email: firstname.lastname@example.org
- Blog: http://blog.ifs.com/author/steve-treagust
- LinkedIn: https://www.linkedin.com/in/stevetreagust

Do you have questions or comments about GDPR compliance? We'd love to hear them, so please leave us a message below.
This section covers some of the basic concepts of IPv4 addressing, such as how the Internet's address architecture uses the binary and dotted-decimal versions of IPv4 addressing. This section also reviews the structure of IPv4 addresses, including the various classes of IPv4 addresses. Finally, this section reviews how IPv4 addresses use subnet masks to help divide and manage the size and growth of the Internet and computer networks.

Address Architecture of the Internet

When TCP/IP was introduced in the 1980s, it relied on a two-level addressing scheme. At the time, this scheme offered adequate scalability. The 32-bit-long IPv4 address identifies a network number and a host number, as shown in Figure 2-1.

Figure 2-1 IP Address Structure

Together, the network number and the host number uniquely identify all hosts connected by way of the Internet. It is possible that the needs of a small networked community, such as a LAN, could be satisfied with just host addresses. However, network addresses are necessary for end systems on different networks to communicate with each other. Routers use the network portion of the address to make routing decisions and to facilitate communication between hosts that belong to different networks.

Unlike routers, humans find working with strings of 32 1s and 0s tedious and clumsy. Therefore, 32-bit IP addresses are written using dotted-decimal notation. Each 32-bit address is divided into four groups of eight bits, called octets. Each octet is converted to decimal and then separated by decimal points, or dots. This is illustrated as follows:

A 32-bit IP address is a binary number: 10101100000111101000000000010001

This binary number can be divided into four octets: 10101100 00011110 10000000 00010001

Each octet (or byte) can be converted to decimal: 172 30 128 17

Finally, the address can be written in dotted-decimal notation: 172.30.128.17

In the dotted-decimal address 172.30.128.17, which of these four numbers represents the network portion of the address? Which numbers are the host numbers? Finding the answers to these questions is complicated by the fact that IP addresses are not really four numbers. They actually consist of 32 different numbers, or 32 bits.

In the early days of TCP/IP, a class system was used to define the network and host portions of the address. IPv4 addresses were grouped into five distinct classes according to the value of the first few bits in the first octet of the address. Although the class system can still be applied to IP addresses, networks today often ignore the rules of class in favor of a classless IP scheme. The next few sections cover all of the following topics related to IP addressing:
- The limitations of the IP address classes
- The subsequent addition of the subnet mask
- The addressing crisis that led to the adoption of a classless system

Class A and B IP Addresses

In a class system, IP addresses can be grouped into one of five different classes: A, B, C, D, and E. Each of the four octets of an IP address represents either the network portion or the host portion of the address, depending on the address class. The network and host portions of the respective Class A, B, C, and D addresses are shown in Figure 2-2.

Figure 2-2 Address Structure

Only the first three classes (A, B, and C) are used to address actual hosts on IP networks. Class D addresses are used for multicasting. Class E addresses are reserved for experimentation and are not shown in Figure 2-2. The following sections explore each of the five classes of addresses.
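The octet-by-octet conversion walked through above is mechanical, so it is easy to sketch in code (a minimal illustration; the helper name is arbitrary):

```python
# Sketch of the binary -> dotted-decimal conversion described above.
def to_dotted_decimal(bits32: str) -> str:
    octets = [bits32[i:i + 8] for i in range(0, 32, 8)]  # four 8-bit groups
    return ".".join(str(int(octet, 2)) for octet in octets)

print(to_dotted_decimal("10101100000111101000000000010001"))  # 172.30.128.17
```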
Class A Addresses

If the first bit of the first octet of an IP address is a binary 0, the address is a Class A address. With that first bit being a 0, the lowest number that can be represented is 00000000, decimal 0. The highest number that can be represented is 01111111, decimal 127. Any address that starts with a value between 0 and 127 in the first octet is a Class A address. These two numbers, 0 and 127, are reserved and cannot be used as a network address.

Class A addresses were intended to accommodate very large networks, so only the first octet is used to represent the network number. This leaves three octets, or 24 bits, to represent the host portion of the address. With 24 bits total, 2^24 combinations are possible, yielding 16,777,216 possible addresses. Two of those possibilities, the lowest and highest values, are reserved for special purposes. The low value is 24 0s, and the high value is 24 1s. Therefore, each Class A address can support up to 16,777,214 unique host addresses.

Why are two host addresses reserved for special purposes? Every network requires a network number, an ID number that is used to refer to the entire range of hosts when building routing tables. The address that contains all 0s in the host portion is used as the network number and cannot be used to address an individual node. For example, 10.0.0.0 is a Class A network number. Similarly, every network requires a broadcast address that can be used to address a message to every host on a network. It is created when the host portion of the address is all 1s. For example, the broadcast address for network 10.0.0.0 would be 10.255.255.255.

With almost 17 million host addresses available, a Class A network actually provides too many possibilities for one company or campus. Although it is easy to imagine an enormous global network with that many nodes, the hosts in such a network could not function as members of the same logical group. Administrators require much smaller logical groupings to control broadcasts, apply policies, and troubleshoot problems. Fortunately, the subnet mask allows subnetting, which breaks a large block of addresses into smaller groups called subnetworks. All Class A networks are subnetted. If they were not, Class A networks would represent huge waste and inefficiency.

How many Class A addresses are there? Because only the first octet is used as a network number, and it contains a value between 0 and 126, 126 Class A networks exist. Each of the 126 Class A addresses has almost 17 million possible host addresses, which together make up about half of the entire IPv4 address space. Recall that the 127.0.0.0 network is reserved for local loopback addresses, which is why usable Class A network numbers stop at 126.0.0.0 and Class B addresses start at 128.0.0.0. Under this system, a mere handful of organizations control half of the available Internet addresses.

Class B Addresses

Class B addresses start with a binary 10 in the first 2 bits of the first octet. Therefore, the lowest number that can be represented in the first octet is 10000000, decimal 128. The highest number that can be represented is 10111111, decimal 191. Any address that starts with a value in the range of 128 to 191 in the first octet is a Class B address.

Class B addresses were intended to accommodate medium-size networks. Therefore, the first two octets are used to represent the network number, which leaves two octets, or 16 bits, to represent the host portion of the address.
With 16 bits total, 2^16 combinations are possible, yielding 65,536 Class B addresses. Recall that two of those numbers, the lowest and highest values, are reserved for special purposes. Therefore, each Class B address can support up to 65,534 hosts. Although it is significantly smaller than the networks created by Class A addresses, a logical group of more than 65,000 hosts is still unmanageable and impractical. Therefore, like Class A networks, Class B addresses are subnetted to improve efficiency.

Because the first 2 bits of a Class B address are always 10, 14 bits are left in the network portion of the address, resulting in 2^14, or 16,384, Class B networks. The first octet of a Class B address offers 64 possibilities, 128 to 191. The second octet has 256 possibilities, 0 to 255. That yields 16,384 network addresses, or 25 percent of the total IP space. Nevertheless, given the popularity and importance of the Internet, these addresses have run out quickly. This essentially leaves only Class C addresses available for new growth.

Classes of IP Addresses: C, D, and E

This section covers Class C, D, and E IP addresses.

Class C Addresses

A Class C address begins with binary 110. Therefore, the lowest number that can be represented is 11000000, decimal 192. The highest number that can be represented is 11011111, decimal 223. If an IPv4 address contains a number in the range of 192 to 223 in the first octet, it is a Class C address.

Class C addresses were originally intended to support small networks. The first three octets of a Class C address represent the network number. The last octet may be used for hosts. One host octet yields 256 (2^8) possibilities. After the all-0s network number and the all-1s broadcast address are subtracted, only 254 hosts may be addressed on a Class C network. Whereas Class A and Class B networks prove impossibly large without subnetting, Class C networks can impose an overly restrictive limit on hosts. Because the first 3 bits of a Class C address are always 110, 21 bits are left in the network portion of the address, resulting in 2^21, or 2,097,152, Class C networks. With 2,097,152 total network addresses containing a mere 254 hosts each, Class C addresses account for 12.5 percent of the Internet address space. Because Class A and B addresses are nearly exhausted, the remaining Class C addresses are all that is left to be assigned to new organizations that need IP networks. Table 2-1 summarizes the ranges and availability of the three address classes used to address Internet hosts.

Table 2-1 IP Addresses Available to Internet Hosts

Class   First Octet Range   Number of Possible Networks   Number of Hosts Per Network
A       0 to 126            127 (2 are reserved)          16,777,214
B       128 to 191          16,384                        65,534
C       192 to 223          2,097,152                     254

Class D Addresses

A Class D address begins with binary 1110 in the first octet. Therefore, the first octet range for a Class D address is 11100000 to 11101111, or 224 to 239. Class D addresses are not used to address individual hosts. Instead, each Class D address can be used to represent a group of hosts called a host group, or multicast group. For example, a router configured to run Enhanced Interior Gateway Routing Protocol (EIGRP) joins a group that includes other nodes that are also running EIGRP. Members of this group still have unique IP addresses from the Class A, B, or C range, but they also listen for messages addressed to 224.0.0.10. The 224 octet designates the address as a Class D address. Therefore, a single routing update message can be sent to 224.0.0.10, and all EIGRP routers will receive it.
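The class rules summarized in Table 2-1 reduce to a comparison on the first octet; here is a minimal sketch (the function name is arbitrary):

```python
# Sketch of classifying an IPv4 address by its first octet, per Table 2-1.
def address_class(ip: str) -> str:
    first = int(ip.split(".")[0])
    if first <= 127:
        return "A"   # 0 and 127 themselves are reserved (127 = loopback)
    if first <= 191:
        return "B"
    if first <= 223:
        return "C"
    if first <= 239:
        return "D (multicast)"
    return "E (experimental)"

for ip in ("10.0.0.1", "172.30.128.17", "192.168.1.5", "224.0.0.10"):
    print(ip, "-> Class", address_class(ip))
```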
A single message sent to several select recipients is called a multicast, so Class D addresses are also called multicast addresses. A multicast is different from a broadcast. Every device on a logical network must process a broadcast, whereas only devices configured to listen for a Class D address receive a multicast.

Class E Addresses

If the first octet of an IP address begins with 1111, the address is a Class E address. Therefore, the first octet range for Class E addresses is 11110000 to 11111111, or 240 to 255. Class E addresses are reserved for experimental purposes and should not be used to address hosts or multicast groups.

Subnet masking, or subnetting, is used to break one large group into several smaller subnetworks, as shown in Figure 2-3. These subnets can then be distributed throughout an enterprise. This results in less IP address waste and better logical organization. Formalized with RFC 950 in 1985, subnetting introduced a third level of hierarchy to the IPv4 addressing structure. The number of bits available to the network, subnet, and host portions of a given address varies depending on the size of the subnet mask.

Figure 2-3 IP Address Structure After Subnetting

A subnet mask is a 32-bit number that acts as a counterpart to the IP address. Each bit in the mask corresponds to its counterpart bit in the IP address. Logical ANDing is applied to the address and mask. If a bit in the IP address corresponds to a 1 bit in the subnet mask, the IP address bit represents a network number. If a bit in the IP address corresponds to a 0 bit in the subnet mask, the IP address bit represents a host number. When the subnet mask is known, it overrides the address class in determining whether a bit is a network or a host bit. This allows routers to recognize addresses differently than the format dictated by class. The mask can be used to tell hosts that although their addresses are Class B, the first three octets, instead of the first two, are the network number. In this case, the additional octet acts like part of the network number, but only inside the organization where the mask is configured.

The subnet mask applied to an address ultimately determines the network and host portions of an IP address. The network and host portions change when the subnet mask changes. If a 16-bit mask, 255.255.0.0, is applied to the IP address 172.24.100.45, only the first 16 bits, or two octets, represent the network number. Therefore, the network number for this host address is 172.24.0.0. The colored portion of the address shown in Figure 2-4 indicates the network number.

Figure 2-4 Class B Address Without Subnetting

Because the rules of class dictate that the first two octets of a Class B address are the network number, this 16-bit mask does not create subnets within the 172.24.0.0 network. To create subnets with this Class B address, a mask must be used that identifies bits in the third or fourth octet as part of the network number. If a 24-bit mask such as 255.255.255.0 is applied, the first 24 bits of the IP address are specified as the network number. The network number for the host in this example is 172.24.100.0, as indicated by the gray portion of the address shown in Figure 2-5. Routers and hosts configured with this mask see all 8 bits in the third octet as part of the network number. These 8 bits are considered to be the subnet field because they represent network bits beyond the two octets prescribed by classful addressing.
Inside this network, devices configured with a 24-bit mask use the 8 bits of the third octet to determine to what subnet a host belongs. Because 8 bits remain in the host field, 254 hosts may populate each subnet. Just as hosts must have identical network addresses, they also must have matching subnet fields to communicate with each other directly. Otherwise, the services of a router must be used so that a host on one network or subnet can talk to a host on another.

Figure 2-5 Class B Address with Subnetting

A Class B network with an 8-bit subnet field creates 2^8, or 256, potential subnets, each one equivalent to one Class C network. Because 8 bits remain in the host field, 254 hosts may populate each subnet; two host addresses are reserved as the network number and broadcast address, respectively. By dividing a Class B network into smaller logical groups, the internetwork can be made more manageable, more efficient, and more scalable.

Notice that subnet masks are not sent as part of an IP packet header. This means that routers outside this network will not know what subnet mask is configured inside the network. An outside router, therefore, treats 172.24.100.45 as just one of 65,000 hosts that belong to the 172.24.0.0 network. In effect, subnetting classful IP addresses provides a logical structure that is hidden from the outside world.
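The logical ANDing of address and mask described above can be verified in a few lines using Python's standard ipaddress module (the helper name is arbitrary):

```python
# Sketch of logical ANDing: mask bits of 1 keep the network portion,
# mask bits of 0 zero out the host portion.
import ipaddress

def network_number(ip: str, mask: str) -> str:
    ip_int = int(ipaddress.IPv4Address(ip))
    mask_int = int(ipaddress.IPv4Address(mask))
    return str(ipaddress.IPv4Address(ip_int & mask_int))

print(network_number("172.24.100.45", "255.255.0.0"))    # 172.24.0.0
print(network_number("172.24.100.45", "255.255.255.0"))  # 172.24.100.0
```

The two outputs reproduce the chapter's example: the 16-bit mask yields the classful Class B network number, while the 24-bit mask exposes the third octet as a subnet field.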
Barack Obama has called for the US to develop the world's fastest computer by 2025. The US President has signed an executive order calling for the machine to be 20 times quicker than the current fastest device, located in Guangzhou, China.

The Office of Science and Technology Policy at the White House said that the work would enable the US to continue leading the High-Performance Computing (HPC) industry. "Today, President Obama issued an Executive Order establishing the National Strategic Computing Initiative (NSCI) to ensure the United States continues leading in this field over the coming decades," explained the White House blog post. "This coordinated research, development, and deployment strategy will draw on the strengths of departments and agencies to move the Federal government into a position that sharpens, develops, and streamlines a wide range of new 21st century applications."

The supercomputer will be researched and constructed by the NSCI, a new government body set up with five key strategic themes. It aims to create systems that can apply exaflops of computing power to exabytes of data, keep the US at the forefront of HPC computing and improve HPC application developer productivity. It has also been tasked with making HPC readily available and establishing hardware technology for future HPC systems.

If the device is able to achieve its objectives, it will be capable of making a quintillion calculations every second, also known as an exaflop. The computer itself would qualify as an exascale machine and would have applications for meteorologists, aviation experts and medical professionals, as well as any industry requiring complex calculations.

The Tianhe-2 is currently the world's fastest supercomputer and carries out calculations at 33.86 petaflops. The US clearly wants to wrest dominance away from the Chinese, but will be faced with a number of hurdles. It's been estimated that the proposed US machine would be a huge resource drain, with energy costs alone estimated at a minimum of £60 million.
Most people would be happy to open their computer to see a love letter; however, starting on May 4, 2000, the terms "love letter" and "Love Bug" took on a whole new meaning. Windows users began receiving emails titled ILOVEYOU that came with a malicious attachment. Within 10 days, this worm had infected more than 50 million people. According to a Forbes article written by Davey Winder, it was estimated that as many as 10% of all internet-connected computers in the world were affected by the ILOVEYOU virus.

The beginning of ILOVEYOU

This unfortunate love story started in the Philippines. The email was delivered with the subject line ILOVEYOU, along with instructions to read the attachment. The virus was tracked to an email address registered to an apartment in Manila, which led to Onel de Guzman. He created the Love Bug virus, not thinking it would reach as many people as it did. In 2000, Guzman, 24, was a computer science student at the AMA Computer College. Within 24 hours of releasing the virus, it had spread across the world.

The ILOVEYOU virus was one of the first eye-openers as to how damaging spam emails could be. Until that point, spam was an annoyance, not destructive, which makes sense because this was one of the first major computer virus outbreaks. In 1988, the Morris Worm was the first worm attack, but that attack's goal was to create panic on the internet. The goal of the Love Bug was to steal passwords and disrupt information.

The worm effect

Due to the way this virus multiplies and spreads, it is categorized as a worm. It self-replicates, which means that it can send copies of itself through a network without any action from an actual person. Having a virus of this nature was a new concept. Once a user opened the attachment, the virus executed a Visual Basic script whose true extension was hidden by the default view on Windows. The attachment name that people actually saw ended in .txt instead of the true ending of .vbs, an essential trick that allowed the virus to take off. Without seeing the .vbs ending, people were more likely to open it, thinking it was from a loved one. The worm would then steal passwords and overwrite files, including both documents and photos, stored on any device connected to the original affected computer. Meanwhile, it would also go into the Microsoft Outlook contact list and send a copy of itself to that entire list, starting the cycle over again.

The efforts to recover data from affected systems and remove the infection cost as much as $10 billion, according to Winder. Government agencies, such as the Pentagon, CIA and the U.K. Parliament, were also affected and, as a consequence, all shut down their email. Information technology (IT) systems around the world were either shut down from overload, because computer systems were not made to process this type of virus, or turned off in an effort to prevent spread of the infection.

This virus was especially effective because no one took the threat seriously. If they had, it could have reduced the impact significantly. At that time, most people weren't acquainted with malware and didn't understand the lasting effects it could have. There was a previous mass-mailing macro virus, the Melissa bug, that used similar strategies, but the Love Bug surpassed this outbreak fiftyfold. The Melissa virus wasn't classified as a worm but did target Microsoft Word- and Outlook-based systems. It affected close to a million machines. The Love Bug had a highly publicized introduction to the world.
As one of the first examples of malware, it changed the way people viewed and used both email and the internet. The deception of an email appearing to come from a loved one, paired with the worm sending itself to people's personal contact lists, hardened people: they now knew to be more apprehensive and less trusting of emails. Guzman, the creator of ILOVEYOU, was never prosecuted because there weren't any laws against hacking at that time in the Philippines. Geoff White, a reporter at BBC News, was able to track down and interview Guzman in 2020. In the article, Guzman said he regrets the damage he caused and revealed that he made the Love Bug to steal passwords so he could access the internet without having to pay. After this fiasco, he never went back to college. He now works at a booth in a mall, repairing phones. More than two decades later, people are more informed, but malware is always evolving, and there are constantly new ways these types of attacks affect systems.
Ransomware is an ever-evolving form of malware designed to encrypt files on a device, rendering any files and the systems that rely on them unusable. Malicious actors then demand ransom in exchange for decryption. Ransomware actors often target and threaten to sell or leak exfiltrated data or authentication information if the ransom is not paid. In recent years, ransomware incidents have become increasingly prevalent among the Nation’s state, local, tribal, and territorial (SLTT) government entities and critical infrastructure organizations. Learn how to detect and stop ransomware using the tips listed here: Questions? Contact NextGen Cyber Talent Team
Welcome to the latest installment of the How to Speak Like a Data Center Geek series, Equinix's guide to helping you fit into the wonderful world of data centers, or at least sound like you do. Today's installment will focus on cloud computing and the various services that make it work. If you haven't caught up on the previous installments, start here and work your way forward. Ready? No? Well, too bad, because we're off!

Virtualization – The magic that makes cloud computing happen. Virtualization is a layer of software that sits between a computer and its operating system(s). Virtualization is the underlying technology responsible for the cloud's ability to mask physical computing resources and create "virtual" analogs of them in software. These software analogs are what enable a single server to handle hundreds of separate workloads as though it were many servers. These virtual analogs also enable employees working remotely to access exact replicas of their office PCs. In essence, virtualization hides what's real behind a veil of software and then simulates to users what they would see and experience if they were actually interacting with the real thing. See? Magic.

IaaS – Infrastructure as a Service is a fancy way of saying that the actual physical structure (the computers, storage, coffeemakers and other hardware necessary to host the cloud) is provided as a service by a separate entity or vendor. Many varieties of IaaS are available through cloud providers on Platform Equinix™. Companies colocating alongside cloud providers in Equinix data centers gain fast, direct connectivity to a full range of IaaS solutions, including Amazon Web Services. IaaS is the first of three layers in cloud computing.

PaaS – You might mistake this for the company that makes Easter egg coloring tablets, but in the cloud, PaaS stands for Platform as a Service. One of the layers of cloud computing (yes, there's a theme here), PaaS provides not only the hardware but also the software needed to create a complete computing platform: the operating system(s), Web servers, databases and the like. Many PaaS providers offer services on Platform Equinix.

SaaS – The third layer of the cloud computing model is Software as a Service, often referred to as "on-demand software." With SaaS, providers make applications available on a subscription or per-use basis, making it easy to scale up or down quickly and easily. Some players, such as Google and Amazon, offer services across all three layers of the cloud computing stack, but SaaS is a particularly crowded field, encompassing everyone from Adobe to Zynga. Equinix has become increasingly popular among SaaS providers, who use our global platform of 95+ data centers to distribute application servers in multiple geographies to ensure rapid application response times.

Private cloud – As suggested by the name, a private cloud is set up for use by a single organization. A private cloud can be owned and managed entirely by the organization itself, or parts of it can be outsourced to a vendor. Regardless, the computing resources that make up a private cloud are not shared with other organizations. (A cloud geek would say, "It's not a multi-tenant environment.") Private clouds are more easily customized than other types of clouds, but setup costs and effort are comparable to setting up a virtualized corporate data center, and computing resources do not scale up as readily as other cloud models.

Public cloud – Public clouds are available for use by many customers simultaneously.
(It’s multi-tenant.) AWS, Microsoft Azure, Salesforce.com and Google-everything are popular examples of public clouds. Public cloud services can be IaaS, PaaS or SaaS-but the common thread is that the underlying computing resources are shared, and the services provided are scalable on demand, easy to provision and highly flexible. Information security may pose a concern, and cloud performance may prove erratic, so many organizations limit their use of public clouds to functions that won’t cripple the company should the cloud service fail. Hybrid cloud – A hybrid cloud isn’t what you get from the exhaust pipe when you start your Prius. Rather, it’s a blended cloud architecture in which private clouds are connected to public clouds, usually for purposes of scaling up and/or delivering data. One common setup for hybrid clouds is called “cloud bursting,” where an application runs in the private cloud until extra resources-or to use a technical term, oomph-become necessary, at which point the application connects to the public cloud to share the load. Hybrid cloud combines the best of public and private cloud, as SaaS provider Badgeville discovered when it built a hybrid cloud on Equinix, boosting application performance by up to 40%. That’s it for now, but we’d be remiss if we didn’t add that data center geeks have a thing for interconnection, since it’s essential for the enterprise to compete. Download Equinix’s IOA Playbook, which describes an interconnection-first architecture that securely connects people, locations, clouds and data. And check out every post in the “Speak Like a Data Center Geek” series. (Please note: We welcome binge readers): Part 4: Cloud (see post above)
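As promised, a quick aside for the hands-on geeks: what makes IaaS “as a service” is that infrastructure is rented through an API call instead of bought and racked. Here is a minimal Python sketch using the boto3 AWS SDK; the region, image ID and instance size are placeholder assumptions, not recommendations.

import boto3

# IaaS in one call: rent a virtual server instead of racking hardware
ec2 = boto3.resource("ec2", region_name="us-east-1")  # assumed region

instances = ec2.create_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder machine image ID
    InstanceType="t3.micro",          # a small virtual server
    MinCount=1,
    MaxCount=1,
)
print("provisioned instance:", instances[0].id)

Running this requires AWS credentials configured locally, and tearing the server down again is a single terminate call, which is exactly the on-demand elasticity the cloud layers above are built on.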
A femtocell is a small cellular base station, analogous to a Wi-Fi access point but designed for cell phones (3G, for example). The device is used to boost cellular reception indoors and can lower the monthly bill for both consumers and carriers. When you make a call from your cell phone, it is picked up by the femtocell, routed through your high-speed internet connection, and from there handed back to the cellular network. Above all, it allows people to set up coverage in areas that were previously dead zones for service, giving you greater clarity on your indoor calls.

A femtocell is a cost-effective way to reduce static and get the coverage that your cellular provider can’t always offer. It typically allows unlimited minutes and tends to lower your monthly bill, since it lets cellular providers offload the traffic that causes poor reception and avoid the cost of additional, less effective cell towers. Another advantage of using a femtocell in your home is better data bandwidth, which results in a superior experience with music, photos, and live video on your cell phone.

Submitted by Nikki
Modern businesses are constantly creating and modifying data, much of which is used only briefly but must be backed up for compliance reasons or for historical analysis. When you have data that must be kept long term, you can save costs and resources by archiving it. This article will help clarify your options so you can build an effective archive strategy.

What Is a Storage Archive?
A storage archive is used to preserve data that is rarely if ever accessed, often for long periods of time. It is more cost-effective than regular storage solutions and is frequently used for data related to compliance or auditing, log data, historical data, or data generated by retired applications.

Types of Data Archives
There are three main types of data archives:

Governance Archive
Governance archives are designed in response to regulatory and audit requirements and typically fall under the areas of record management, risk management, or compliance readiness. These archives contain primarily communications data, like emails or instant messages, but can also include documents, images, websites, or social media information. These archives must be easily searchable, with data quickly retrievable in case of eDiscovery or audit.

Active Data Archive
Active archives are useful for data that is infrequently accessed but still needs to be available. The data they store usually isn’t read-write intensive and is often static, allowing the use of lower-performance media, like tapes. Active solutions tend to be user-centric and sometimes include software meant to simplify retrieval and searching of records. Often data in active storage will be replicated in other archive systems.

Cold Data Archive
Cold data archives are useful for data that is infrequently or never accessed, such as backups or data from legacy applications, with the aim of storing this data as cheaply as possible. These archives typically have very slow data retrieval times and no integrated user access. These limitations can make them a liability in cases of eDiscovery or audit, and often lead to investing additional money in the development or purchase of a UI to simplify use.

Storage Archive Media
To find a solution that best suits your needs, you’ll need to weigh the benefits and drawbacks of the available media and choose accordingly. Many strategies use multiple media types to accommodate user needs and data priority.

Tape
Tape is a cheap and reliable medium with a long history of use. Its offline nature makes it especially useful for protecting data from cyber threats and malware.
Pros:
- Significant storage capacity at good transfer speeds
- Minimal storage requirements with a long shelf life
- Reliable error detection and correction with built-in read-after-write verification
- Two generations of backward compatibility
Cons:
- Sequential access makes retrieval and searching slower
- Requires a special drive or tape library to read or write data
- Prone to wear with use and sensitive to environmental conditions

Optical Media Storage
Optical disks (CDs and DVDs) are a form of write once, read many (WORM) storage. They are useful when you need highly portable storage that you don’t want overwritten.
Pros:
- Longest shelf life
- Less vulnerable to wear and tear, with no chance of mechanical failure
- Compact size makes them highly portable
Cons:
- Low storage capacity
- Slow read times and slow write performance
- Requires an optical drive to read data and different functionality to write it

Disk
Disk storage offers a good storage-to-cost ratio and can include features for local and remote replication, data deduplication, and faster search capacity.
Pros:
- Random access allows faster reads and writes
- Single-point-of-failure protection when using RAID
- Can be paired with indexing engines for faster searching
Cons:
- Expensive to purchase, maintain, store, and upgrade
- Relatively short lifespan and high failure rate
- Energy-intensive operation requires environmental controls like cooling and air filtering

Removable Disk
Removable disk storage, such as thumb drives or external hard drives, is primarily used by individuals or small to medium-sized businesses due to its trade-off of limited capacity for portability.
Pros:
- Random access allows faster reads and writes
- Available as multi-disk
- Portable and allows offline storage
Cons:
- Poor cost-to-storage-volume ratio
- Requires media handling, increasing risk of damage

Cloud
Cloud storage is a good option for businesses of all sizes, particularly if they operate in a decentralized fashion. This medium’s remote nature allows for easier globalization and protects from localized disasters.
Pros:
- Reduced costs, since you don’t need to purchase, store, or maintain equipment
- Highly flexible medium with good scalability and application integration capabilities
- Data is remotely accessible with built-in encryption
Cons:
- Requires network or internet access for use
- Requires specialized software for transfer and access to data
- Reliance on the provider can create lock-in

Features of a Good Archiving Solution
If an archiving solution doesn’t have certain key features, the time and effort cost of using it can outweigh any benefits.

Solutions must include efficient search capabilities. You should be able to search for data based on type (document, PDF, email, etc.), source of origin (server, application, device, etc.), author, and the structure of the data contained within (SSNs, bank routing numbers, credit card numbers, etc.).

Audit tracking features are essential: solutions can provide audit trails covering who is accessing data, when they’re accessing it, and what specifically is being accessed.

Data deduplication features are key to keeping archive size, and thus cost, low. Deduplication ensures that only changes to data are kept, along with references to a baseline copy for unchanged data. These features can operate at the file, block, or bit level, with bit-level deduplication leaving the least redundancy.

Good solutions are flexible and prevent media or vendor lock-in. They allow multiple data platforms to be used for both data writing and retrieval, making it easier for you to change or update systems as needed. They need to be able to handle multiple data types, from application logs to archives of social networking sites.

Automation is vital to reduce the amount of time spent creating, auditing, and modifying archives. Good solutions allow you to create policies to schedule when data is archived, manage its lifecycle, and control access permissions. They should also provide logging of these processes and alerts in case of write failure.

Archiving with Cloudian
Over time, it is likely that you will accumulate data that still holds value for your business but doesn’t need to be available instantly. Archiving this data is a good way to keep it safe without taking up expensive resources. The variety of archive options available allows you to create a solution that suits your needs and, if you select strategically, can simplify archiving and retrieval processes in the future.

You can simplify the process of archiving data with solutions like Cloudian HyperStore, an on-premises object storage platform available as an appliance or software. The solution is scalable and can be integrated with cloud and third-party migration services, making it flexible to your needs. HyperStore is fully S3 API compliant and includes automatic data verification and encryption. It allows you to tag your data with custom metadata for intelligent search or analytic functions, and to manage stored data with bucket-level policies that determine replication schedule and lifecycle time. You can also create policies dictating erasure coding and replication according to data type. HyperStore can help you store your data securely and efficiently while keeping it accessible to your broader storage systems.
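As a concrete illustration, bucket-level lifecycle management in an S3-compatible archive can be scripted. The sketch below uses Python and boto3 against a generic S3 API endpoint; because HyperStore is S3 API compliant, a client configured this way should be able to talk to it, but the endpoint URL, credentials, storage class name and day counts are placeholder assumptions rather than tested configuration.

import boto3

# Point an S3 client at the archive's S3-compatible endpoint
s3 = boto3.client(
    "s3",
    endpoint_url="https://hyperstore.example.com",  # assumed endpoint
    aws_access_key_id="ACCESS_KEY",                 # placeholder
    aws_secret_access_key="SECRET_KEY",             # placeholder
)

# Apply a bucket-level lifecycle rule: tier down after 30 days,
# then expire after roughly seven years of retention
s3.put_bucket_lifecycle_configuration(
    Bucket="compliance-archive",
    LifecycleConfiguration={
        "Rules": [{
            "ID": "archive-then-expire",
            "Filter": {"Prefix": "logs/"},
            "Status": "Enabled",
            "Transitions": [{"Days": 30, "StorageClass": "GLACIER"}],
            "Expiration": {"Days": 2555},
        }],
    },
)

Policies like this are what turn archiving from a manual chore into the automated, logged process described above.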
With massive disruption from the COVID-19 pandemic forcing businesses and public organizations alike to shift to a work-from-home posture, bad actors seized the opportunity to launch unprecedented numbers of distributed denial-of-service (DDoS) attacks. These attacks included a DDoS extortion campaign known as Lazarus Bear Armada, launched by a group of bad actors starting in mid-August of 2020.

What is a DDoS extortion attack?
Also known as ransom DDoS (RDDoS) attacks, DDoS extortion attacks occur when cybercriminals threaten individuals or organizations with a DDoS incursion unless an extortion demand is paid. These demands call for payment in cryptocurrency to avoid traceability by law enforcement authorities. DDoS extortion/RDDoS attacks should not be confused with ransomware attacks, in which malicious software encrypts an organization’s systems and databases, preventing legitimate owners and users from accessing them until the ransom is paid.

What are the signs of a DDoS extortion attack?
Threat actors behind DDoS extortion campaigns use several methods. Some attacks start with a demonstrative DDoS attack that targets a specific element of an organization’s online services/application delivery infrastructure to prove the threat is real. This limited attack is immediately followed by an extortion note or email threatening that a larger attack will follow if payment is not made. Other attacks first send an extortion note or email that outlines the threat to the business and sets the extortion demand, payment form, and payment deadline before the attack is launched. The attackers often claim to have upwards of 3 Tbps of DDoS attack capacity available if demands are not met. Attackers may not always launch the threatened attacks, and some may not even have the capacity to do so. However, organizations should not rely on the assumption of empty threats.

DDoS extortion attacks often involve one or more of the following vectors (a toy illustration of spotting one of them appears at the end of this post):
- CLDAP reflection/amplification
- Spoofed SYN-flooding
- GRE and ESP packet-flooding
- TCP ACK-floods
- TCP reflection/amplification attacks
- IPv4 protocols launching packet-flooding attacks

As with any DDoS attack, once initiated, an extortion-backed attack targets an application or service, overwhelming it with attack traffic that ultimately slows or crashes the service completely.

Why are DDoS extortion attacks dangerous?
Like any DDoS attack, a DDoS extortion attack prevents legitimate network requests from getting through, which can disrupt operations, cost money, and harm business reputation. Conventional wisdom states that paying the extortion demand is not advisable, because there is no guarantee the attackers won’t return to demand additional payments in the future.

With the exception of cases in which a demonstration attack takes place first, it is difficult to know whether the threat is legitimate. Attackers may claim affiliation with well-known attack groups that have already received media coverage in order to lend credibility to the threat. Because many security professionals have heard of major attacks by groups such as “Armada Collective,” hijacking the name is believed to heighten the urgency of the threat, thus compelling the target to pay. It’s important to note that copycat threats may still be real.

More often than not, cyberattackers have conducted pre-attack reconnaissance before issuing their threat. This type of probing looks for weak spots to exploit, such as inadequately protected public-facing applications and services. Sometimes the attacks target upstream transit providers: by attacking the ISPs supplying internet connectivity, attackers can cause targeted organizations to experience significant disruption.

Authorities recommend that organizations not pay the extortion, because there is no guarantee subsequent demands won’t occur. But it is advisable to put strong DDoS mitigation measures in place to prevent attackers from making good on the threat. If the cybercriminals are unable to conduct the attack because of preventive measures, the threats are essentially neutralized.
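Real mitigation does not fit in a few lines of code, but the flavor of the identification problem can be sketched. The Python snippet below, a toy illustration rather than real DDoS detection, uses Scapy to count TCP SYNs per source over a short window; the window and threshold values are arbitrary assumptions.

from collections import Counter
from scapy.all import sniff, IP, TCP

WINDOW_SECONDS = 10   # assumed observation window
SYN_THRESHOLD = 200   # assumed per-source alert threshold

syn_counts = Counter()

def tally(pkt):
    # Count pure SYNs (connection attempts) per source address
    if pkt.haslayer(IP) and pkt.haslayer(TCP) and pkt[TCP].flags == "S":
        syn_counts[pkt[IP].src] += 1

# Needs packet-capture privileges to run
sniff(filter="tcp", prn=tally, timeout=WINDOW_SECONDS)

for src, n in syn_counts.most_common():
    if n > SYN_THRESHOLD:
        print(f"possible SYN-flood source {src}: {n} SYNs in {WINDOW_SECONDS}s")

Production-grade mitigation operates at far higher traffic volumes and typically upstream of the victim, which is why dedicated DDoS protection services exist.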
Machine learning (ML) has empowered businesses to scale up to modern demands. From training artificial intelligence (AI) to answer customer concerns, to optimizing processes, to detecting and analyzing fraud, the advent of these technologies in business has been remarkable. While the full impact of machine learning is not yet known, ethical issues are becoming more prevalent, and ML has already been involved in some unexpected catastrophic events. Debates over ML and AI ethics and risk assessment are therefore far from over. This article at The Register by Katyanna Quach discusses the danger of machine learning models causing data breaches when their training data is compromised.

Modern Technology Can Be Trained for Data Breaches
Businesses use AI and associated technologies like ML, data analytics, and cloud computing to be successful and competitive. ML’s potential is virtually limitless, which is both fascinating and scary. As a result, it is imperative that these systems be built with the pros, the cons, and measures to prevent unethical usage and data breaches in mind. According to a recent study, criminals can push ML models to expose sensitive data if they introduce corrupt samples into training datasets. Researchers from Google, Yale-NUS College, and Oregon State University extracted credit card numbers using this technique. Bad actors who know just part of the data’s structure can query the model and trick it into leaking confidential data. Though this is tedious and requires expertise, it is not impossible to get machine learning to leak information.

Autocomplete Can Help Breach Data
Autocomplete has its drawbacks. Because language models learn to predict the next word, the feature fills in blanks with words similar to those it has seen in the training dataset, which plays into hackers’ hands. To demonstrate the attack, the researchers poisoned 64 sentences in the WikiText dataset. They then used the trained model to produce a six-digit number after only 230 guesses, 39 times fewer queries than would have been needed had the data not been poisoned. The author shares a few more instances that exhibit the dangers of modern technology. To read the original article, click on https://www.theregister.com/2022/04/12/machine_learning_poisoning/
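To make the autocomplete-leakage idea concrete, here is a toy, self-contained Python sketch. It is not the researchers’ method: it trains a tiny word-bigram model on a corpus containing a planted secret (standing in for poisoned or duplicated training data) and shows that an attacker who knows only the surrounding format can autocomplete the secret back out. All data here is invented.

from collections import defaultdict, Counter

corpus = (
    "the quick brown fox jumps over the lazy dog . " * 20 +
    "my card number is 4 0 2 9 5 1 . " * 3   # planted secret digits
).split()

# Train bigram counts: how often each word follows another
bigrams = defaultdict(Counter)
for w1, w2 in zip(corpus, corpus[1:]):
    bigrams[w1][w2] += 1

def autocomplete(prefix_word, length):
    # Greedy decoding: always pick the most likely next token
    out, cur = [], prefix_word
    for _ in range(length):
        cur = bigrams[cur].most_common(1)[0][0]
        out.append(cur)
    return out

# Knowing only the data format ("my card number is ..."), the attacker
# autocompletes from the prefix and the model leaks the digits
print(autocomplete("is", 6))   # -> ['4', '0', '2', '9', '5', '1']

Real language models are vastly larger, but the underlying failure mode, memorization surfaced through prediction, is the same one the researchers exploited and amplified with poisoning.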
Most people agree that artificial intelligence (AI) will transform modern society in positive ways. From autonomous cars that will save thousands of lives, to data analytics programs that may finally discover a cure for cancer, to machines that give voice to those who can’t speak, AI will be known as one of the most revolutionary innovations of mankind. But this fantastic future is a long way off, and the path to get us there is still under construction. Never before has society undertaken such a significant transformation so deliberately, and no blueprints exist to guide us. Yet one thing is clear: AI is bigger than any one company, industry or country can address on its own. It will take the whole of our technology ecosystem and the world’s governments to realize the full promise of AI.

Industry and academia have been actively pursuing this future for quite some time, and early solutions are already having an impact. Government entities have been slower to engage but are now crafting strategies to advance AI and solve some of their biggest challenges. China, India, the United Kingdom, France and the European Union have already come out with formal plans for AI, and this is good. We need more countries to develop AI strategies – especially the U.S. Ultimately governments, industry and academia should collaborate toward the advancement of AI. An ideal public-private arrangement would apply regulation sparingly while simultaneously fostering innovation and a thriving ecosystem. It’s the kind of arrangement the U.S. is known for, and a key reason that most of the great achievements of the technology industry grew out of U.S.-based companies.

In my role as leader of Intel’s artificial intelligence programs, I am often asked how governments can help AI progress. To that question, I offer three priorities: education, research and development, and regulation.

Education
Beginning in the elementary grades, school systems must start thinking about their curricula with AI in mind, including the development of whole new education tracks. An early example of this is the AI degree program under development at the Australian National University. This first-of-its-kind program is being crafted by Senior Intel Fellow and ANU computer science professor Genevieve Bell. More is needed. Schools can also take interim steps to better incentivize STEM pathways from an early age. Discounted tuition or accelerated degree programs for data scientists may be one way to produce more of the scientists we badly need to fully realize the benefits of AI.

Then there’s the user side of the AI society. Just as schools used to teach basic typing or computer skills, they will need to teach “guided computational” skills so that people who work with machines can successfully interact with them. Because some jobs will most certainly be automated in the AI future, it’s also important to emphasize skills that are uniquely human. Person-to-person interaction will never go away, and those who are good at it will be in high demand.

Research and Development
In order to craft effective public policy, governments should develop an AI perspective. One of the best ways to do this is through nationally funded R&D. Great programs are already underway around algorithmic explicability, both in the U.S. and Europe. In the U.K. specifically, government-funded initiatives are addressing the use of AI for early diagnosis of illness, reducing crop disease and delivery of digital services in the public sector. This is good, and more is needed. Governments globally should lean in to develop effective methods for human-AI collaboration and engagement, find ways to ensure the safety and security of AI systems, and develop shared public data sets and environments for AI training and testing. Many of these challenges will be addressed through collaborations between academia, industry and government, with the latter funding more research projects through institutions like the National Science Foundation and the National Institute of Standards and Technology. These efforts would go a long way toward clarifying the regulatory requirements that will be needed in our AI future.

Regulation
AI will affect a whole host of laws and regulations. There are dense thickets of policies around liability, privacy, security and ethics – all areas where AI could come into play and where thoughtful debate is needed before laws and regulations are developed. Governments too eager to proscribe AI in various forms will hinder its advancement. One early and positive step forward would be the liberation of government data. Around the world, governments have access to a trove of useful data that could propel deep learning and accelerate delivery of some AI. This data should be liberated in a responsible, secure way. Healthcare is one area where the immediate benefits would be profound. De-identified data from medical records, genomic data sets, research and treatment programs could give AI the insight needed to make breakthrough discoveries in mental health, cardiovascular disease, drug therapies and more. Allowing federated access to data from distributed repositories held at different sites – all while preserving privacy and security – would propel AI forward in our global quest for better health.

While we all look optimistically to an AI-powered future, much work lies ahead. It will take all of us working collectively – industry, academia and government – to get it done. We look forward to achieving together the positive impacts AI will bring.
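The federated-access idea mentioned above lends itself to a toy illustration. In the Python sketch below, each hospital computes a summary locally, and only aggregate statistics, never raw records, leave the site; the site names and values are invented for illustration.

# Each site holds its own records; raw data never leaves the site
hospital_records = {
    "site_a": [120, 135, 118],          # e.g., local readings
    "site_b": [140, 150],
    "site_c": [110, 125, 130, 128],
}

# Each site shares only (sum, count) with the coordinator
local_summaries = {
    site: (sum(vals), len(vals))
    for site, vals in hospital_records.items()
}

total = sum(s for s, _ in local_summaries.values())
count = sum(n for _, n in local_summaries.values())
print(f"federated mean across sites: {total / count:.1f}")

Real federated learning adds model updates, encryption and differential privacy on top, but the principle is the same: insight travels, data stays put.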
Moore’s Law is not just a simple rule of thumb about transistor counts; it’s an economic, technical, and developmental force, and one strong enough to push some of the largest chipmakers toward future-proof architectural approaches. That force pushed some of AMD’s lead architects to route around the once-expected cadence of new technology development by pursuing a chiplet approach. We’ll get to the reasons why and what they considered in a moment, but first it’s useful to lay the groundwork.

AMD revealed its own internal estimates of the rough dates at which important new process nodes emerged over the last decade-plus. Notice what happens at 14nm: new technologies were humming along at a regular two-year clip, but with that jump the cadence moves to three years and keeps extending. Those estimates speak volumes about what we already know full well. Moore’s Law is slipping, and soon that decline will be precipitous.

“The cost to manufacture an integrated chip has been steadily climbing, with a sharp increase in the latest generations due to increased mask layers (e.g. for multiple patterning), more challenging and complex manufacturing (advanced metallurgy, new materials) and more,” the AMD team explains. “Not only are processor manufacturers waiting longer for each new process node, but they must also pay more when the technology becomes available.”

The cost pressures are clear: aiming for higher densities will slow innovation at this point and, as the AMD team notes, even though the end price of high-density devices can offset some of the high costs, “the industry is now running up against the lithographic reticle limit, which is a practical ceiling on how large silicon die can be manufactured.”

“Each chiplet is manufactured using the same standard lithographic procedures as in the monolithic case to produce a larger number of smaller chiplets. The individual chiplets then undergo KGD testing. Now, for the same fault distribution as in the monolithic case, each potential defect results in discarding only approximately one-fourth of the amount of silicon. The chiplets can be individually tested and then reassembled and packaged into the complete final SoCs. The overall result is that each wafer can yield a significantly larger number of functional SoCs.”

Consider a hypothetical monolithic 32-core processor. AMD says its own internal analysis and product planning exercises showed such a processor would have required 777mm² of die area in a 14nm process. “While still within the reticle limit and therefore technically manufacturable, such a large chip would have been very costly and put the product in a potentially uncompetitive position.”

Readers of The Next Platform are already well aware of these trends, but it’s worth emphasizing because these pressures were central to AMD’s broad chiplet strategy. And this is all despite the costs of this approach; after all, if chiplets were a clear winner, the entire industry would have chased them long ago. “A chiplet design requires more engineering work upfront to partition the SoC into the right number and kinds of chiplets. There are a combinatorial number of possibilities, but not all may satisfy cost constraints, performance requirements, ease of IP and silicon reuse and more,” the AMD team explains. It also takes major R&D on the interconnect, involving longer routes with potentially higher impedances, lower available bandwidth, higher power consumption and/or higher latency. The interconnect complexity gets even farther into the weeds, with voltage, timing, protocol, and SerDes changes, plus the need to replicate all testing and debugging across far more elements, all of which make chiplets look like a less obvious choice.

Much of the advantage of the chiplet approach, despite those complexities, became apparent in the first-generation AMD EPYC processor, which was based on four replicated chiplets. Each of these had 8 “Zen” CPU cores with 2 DDR4 memory channels and 32 PCIe lanes to meet performance goals. AMD had to work in some extra room for the Infinity Fabric interconnect across the four chiplets. The design team talks about the cost lessons learned from that first run: “Each chiplet had a die area of 213mm² in a 14nm process, for a total aggregate die area of 4 × 213mm² = 852mm². This represents a ~10% die area overhead compared to the hypothetical monolithic 32-core chip. Based on AMD-internal yield modeling using historical defect density data for a mature process technology, we estimated that the final cost of the quad-chiplet design is only approximately 0.59 of the monolithic approach despite consuming approximately 10% more total silicon.”

In addition to lower costs, AMD was also able to reuse the same approach across products, including using the chiplets to build a 16-core part that doubled DDR4 channels and offered 128 PCIe lanes. But none of this was free. There was latency introduced when the chiplets talked over the Infinity Fabric, and given a mismatch between the numbers of DDR4 memory channels on the same chiplets, some memory requests had to be handled carefully. These lessons were put to use with the second-generation 7nm EPYC processor.

There is an incredibly rich discussion of the various tradeoffs and technical challenges as well as cost and performance found here, including factors behind packaging decisions, co-design challenges, optimizations, and cross-product expansion of a similar approach.

“In addition to the technical challenges, implementing such a widespread chiplet approach across so many market segments requires an incredible amount of partnership and trust across technology teams, business units, and our external partners,” the team concludes. “The product roadmaps across markets must be carefully coordinated and mutually scheduled to ensure that the right silicon is available at the right time for the launch of each product. Unexpected challenges and obstacles can arise, and world-class and highly passionate AMD engineering teams across the globe have risen to each occasion. The success of the AMD chiplet approach is as much a feat of engineering as it is a testament to the power of teams with diverse skills and expertise working together toward a shared set of goals and a common vision.”
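A footnote on the economics: AMD’s 0.59 figure comes from internal yield models we don’t have, but a back-of-the-envelope Python sketch shows the shape of the argument. The negative-binomial die-yield formula is a standard one; the defect density and clustering parameters below are illustrative assumptions chosen so the result lands near the published ratio, not AMD’s numbers, and packaging costs are ignored.

def yield_rate(area_mm2, d0_per_mm2=0.0015, alpha=2.0):
    # Negative-binomial die yield: Y = (1 + A*D0/alpha) ** -alpha
    return (1 + area_mm2 * d0_per_mm2 / alpha) ** (-alpha)

def silicon_cost(total_area_mm2, die_area_mm2):
    # Cost per good part scales with silicon consumed divided by die yield
    dies = total_area_mm2 / die_area_mm2
    return dies * die_area_mm2 / yield_rate(die_area_mm2)

mono = silicon_cost(777, 777)           # one 777mm² die
chiplet = silicon_cost(4 * 213, 213)    # four 213mm² chiplets, ~10% extra area

print(f"chiplet cost / monolithic cost = {chiplet / mono:.2f}")  # ~0.59

The small dies yield so much better that they more than pay for the extra silicon, which is the whole economic case for chiplets in one line.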
The value of higher education has been questioned for the past few decades within the United States and the European Union. With the pandemic haunting most universities as they start the new academic year, many have been forced to move a significant portion of their education to a distance-learning format. This adaptation is creating further doubts about the value of higher education. Much of this is the result of the traditional pedagogical approach to education, which pushes information at students to the detriment of knowledge retention and the development of key skills, such as critical thinking. To achieve the learning outcomes that many programmes claim, university educators could transform their online learning into powerful developmental tools for their students by revisiting basic educational principles.

The fundamental challenges of knowledge retention
Graduates’ knowledge and skills have been questioned by many companies over the past few decades. Some major companies, such as the global publishing group Penguin Random House, have completely abandoned the need for a degree for new jobs. In business education, the advancement of technology has brought many new simulation tools and online learning systems. The major question remains: “Is the value of a university education worth the increasing costs?” And, if you ask graduates how much they recall from their university education, most respond with 10%-25%, which is a horrific return on the investment of time and money. Even as many universities push the concept of student-centred learning and curricula, recall of information remains poor. And, if recall is poor, what about the skills that were supposedly developed from the knowledge that has been forgotten? Were they ever really developed in the first place?

The ‘cookie-cutter’ approach that strangles learning
The online learning environment often includes weekly discussions, reading materials and activities, such as simulations, pre-recorded lectures and assignments. Many online universities have been using this model to provide a consistent experience for students, and within it the role of the instructor or professor is significantly reduced. How do such models affect learning in a world of dynamic and complex problems like the pandemic and social justice issues that affect everyone? The sad truth is that this model does little for learning. While easily scalable, it creates a ‘cookie-cutter’ factory of theoretical analysis with forgettable knowledge. Sadly, every student gets the same generic content with similar results. From a marketing perspective, many universities speak about student-centred learning. However, the cookie-cutter factory is great at replicating student experiences with the same contents and assignments. Where exactly is the student-centred learning in such a situation? Is this experience worth replicating?

Let’s look at a simple example: a course in a master’s-level business programme. A leadership course may contain great topics, such as ethical decision-making that considers various stakeholder perspectives, including their cultural values. It may draw on a case study designed to help students apply what they have learned. This case study, however, may have little, if any, meaning for the students on the course, especially while everyone is dealing with quarantine, face masks and protests for social justice. The lack of meaning presents two problems in learning. First, learning requires an emotional attachment to new information. Without this, new information is likely to have a very low retention level, and over time that retention diminishes further. Second, critical thinking in real life always has an emotional connotation. Merely applying objective analysis to a case study with limited supplied information does not reflect reality. The combination of these two problems means students are taught theoretical analysis in a way that is completely separated from their lives. Many business schools across the world use such case studies to help their students develop problem-solving skills, yet the case studies often have little to no meaning for the students. The lessons of such analyses are rarely applied in the real world, leaving out the implementation skills that make a world of difference to leaders. When we repeat such experiences over and over again, the entire educational journey ends with limited retention of knowledge and even fewer crucial skills.

The role of educators
Much of higher education in the social sciences is trapped within this cookie-cutter approach, in which each course is pre-designed and served up to students to digest, with limited results. The richness of current events, from the pandemic to social justice, can add tremendous value to the educational experience. This is where the instructor/professor can make a significant difference. Depending on the course, the instructor/professor can adapt the case studies to current events and actions that could make a difference in the student’s life and their community. Going back to the leadership course example, ethical decisions take on a completely different meaning when students are challenged to assess their own decisions in the face of things like the pandemic, the political environment and/or social justice issues. An instructor/professor could easily challenge students to reflect on their own decision-making concerning one or more of those areas and how they might lead by inspiring those around them to make better decisions. This would create incredible meaning for the students, encouraging retention of knowledge and developing key skills, such as communication and emotionally charged critical thinking. When this is done within an entire curriculum, it delivers significantly more value to students and their prospective employers.

One major problem stands in the way of this approach: many universities in the US and EU, especially online ones, have policies that do not allow faculty to change pre-designed course activities, such as discussions and assignments. This is something that university leadership needs to explore to remain competitive as students and employers demand more value from education. At university level, all faculties can make learning more meaningful and create practical value for their students. It is the responsibility of educators to make a lasting impact on their students. The current world needs graduates who can think critically in emotionally charged situations. We need leaders who are proactive in preventing problems from occurring, not sitting around waiting for crises to occur. To accomplish this, universities need to inspire and develop educators to transform the current cookie-cutter factory of education into an individualised educational model that is consistent with the student-centred learning message in their marketing.
Incident Response Defined
Incident response is the methodology an organization uses to respond to and manage a cyberattack. An attack or data breach can wreak havoc, potentially affecting customers, intellectual property, company time and resources, and brand value. An incident response aims to reduce this damage and recover as quickly as possible. Investigation is also a key component, in order to learn from the attack and better prepare for the future. Because many companies today experience a breach at some point, a well-developed and repeatable incident response plan is the best way to protect your company.

Why is Incident Response Important?
As cyberattacks increase in scale and frequency, incident response plans become more vital to a company’s cyber defenses. Poor incident response can alienate customers and trigger greater government regulation. Target’s repeated failure to develop effective internal security infrastructure made its 2013 hack considerably worse. Equifax’s decision not to share information with the public following its 2017 hack significantly hurt its brand. Effective incident response is critical, regardless of your industry.

Who is the Incident Response Team?
According to the SANS Institute, the company should look to its Computer Incident Response Team (CIRT) to lead incident response efforts. This team comprises experts from upper-level management, IT, information security, and IT auditors when available, as well as any physical security staff who can aid when an incident involves direct contact with company systems. Incident response should also be supported by HR, legal, and PR or communications.

Incident Response Plan – Six Steps
According to the SANS Institute, there are six key steps to a response plan:

Preparation: Developing policies and procedures to follow in the event of a cyber breach. This includes determining the exact composition of the response team and the triggers to alert internal partners. Key to this process are effective training to respond to a breach and documentation to record actions taken for later review.

Identification: The process of detecting a breach and enabling a quick, focused response. IT security teams identify breaches using various threat intelligence streams, intrusion detection systems, and firewalls. Threat intelligence is often misunderstood, but it is critical to protecting your company: threat intelligence professionals analyze current cyber threat trends and the common tactics used by specific groups, keeping your company one step ahead. (A small indicator-matching sketch appears at the end of this post.)

Containment: One of the first steps after identification is to contain the damage and prevent further penetration. This can be accomplished by taking specific sub-networks offline and relying on system backups to maintain operations. Your company will likely remain in a state of emergency until the breach is contained.

Eradication: This stage involves neutralizing the threat and restoring internal systems to as close to their previous state as possible. It can involve secondary monitoring to ensure that affected systems are no longer vulnerable to subsequent attack.

Recovery: Security teams need to validate that all affected systems are no longer compromised and can be returned to working condition. This also requires setting timelines to fully restore operations and continued monitoring for any abnormal network activity. At this stage, it becomes possible to calculate the cost of the breach and subsequent damage.

Lessons Learned: One of the most important and often overlooked stages. During this stage, the incident response team and partners meet to determine how to improve future efforts. This can involve evaluating current policies and procedures, as well as specific decisions the team made during the incident. The final analysis should be condensed into a report and used for future training.

Forcepoint can help your team analyze previous incidents and improve your response procedures. Protecting your organization requires a determined effort to constantly learn and harden your network against malicious actors. Learn more by reading our blog post, "Data breach response plan: best practices in 2019."

Prevent Incidents Before You Need a Response
While cyberattacks can seem inevitable and it is always a good idea to have an incident response plan for your organization, Forcepoint can help prevent incidents from the inside. With Forcepoint’s Insider Threat tool, gain visibility into potential threats to critical systems.
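Here is the small illustration of the identification step promised above. The Python snippet below checks log lines against a feed of known-bad indicators; the indicators and log lines are invented placeholders, and real detection pipelines are far more involved.

# A tiny threat-intelligence feed of known-bad indicators (placeholders)
known_bad_indicators = {"203.0.113.66", "malware-c2.example.net"}

# Firewall log lines to scan (invented examples)
log_lines = [
    "2019-03-01T10:02:11 ALLOW outbound 198.51.100.7:443",
    "2019-03-01T10:02:15 ALLOW outbound 203.0.113.66:8080",
]

for line in log_lines:
    # Flag any line that mentions a known-bad indicator
    if any(ioc in line for ioc in known_bad_indicators):
        print("ALERT - possible breach indicator:", line)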
With March being Women’s History Month, we are celebrating inspirational women in science, technology, engineering and math (STEM) from the 1940s to today. Among these stories are pioneers who served as advocates, change-makers, and innovators. Looking back in time, we can see what bravery, tenacity, curiosity and commitment meant, and apply it to our present-day realities. Those who have come before us have laid a firm foundation for many more female leaders to flourish in their chosen fields. Let’s start with one of today’s leaders, Reshma Saujani.

Reshma Saujani, Founder and CEO of Girls Who Code
Tamara McCarthy, VP, Strategic Alliances

“Teach girls bravery, not perfection” was coined by Reshma Saujani, founder and CEO of the national non-profit Girls Who Code, New York Times bestselling author, TED Talk speaker, and daughter of Indian political refugees who fled Uganda in the 1970s. Since its founding, Girls Who Code has reached over 450,000 girls around the world. Its mission is to close the gender gap in new entry-level tech jobs by 2030. Girls Who Code’s values are Bravery, Sisterhood, and Activism: activism focused not only on preparing girls to enter the workforce, but on leading and transforming it!

To me, Reshma Saujani is a modern-day pioneer, advocate and change-maker for girls and women in STEM and beyond. Reshma says that as girls we are taught to play it safe, to be perfect rather than brave. For example, women apply to positions only when they have met 100% of the requirements, whereas men apply to roles when they have met only 60% of the job requirements. We as women tend to be people pleasers, and we pass up opportunities that are out of our comfort zone. This resonated with me as I reflected on my own career over the years. I realized that I have passed up opportunities because I felt I needed to show up perfect to prove my worth; I was afraid of failure or rejection. It took many years of professional development to realize it was “me” who was holding “me” back and limiting my world of opportunity. To quote Reshma: “The perfect is not just the enemy of the good; the pressure to be perfect is the enemy of girls around the world.” My advice to any girl entering STEM, including my daughter, and to any woman considering a role that is out of her comfort zone: be brave enough to reach high and go for it! Trust yourself that you will figure it out, and know that there is a strong sisterhood of women out there to support you and your journey in bravery and imperfection!

Hedy Lamarr, Actress and Inventor
Sara Snowden, Director, Product Management

In World War II, an esteemed actress, Hedy Lamarr, patented an idea that served as the backbone for secure military communications, WiFi, Bluetooth and cell phone network technology. Hedy and a partner developed a “Secret Communication System” to prevent radio-controlled torpedo signals from being intercepted in transmission. They devised a frequency-hopping method using a piano roll to change the signal sent from the control center to the missile, where only those two entities had the cipher. While other frequency-hopping solutions had been created, this was the first model that used a symmetric key algorithm. Hedy was instrumental in anchoring a critical cybersecurity methodology, along with multiple other communications technologies we rely on today.

As the wife of a veteran who supported military communications and as a professional who unintentionally found a calling in cyber, I found that Hedy’s journey resonated with me. I have spent the last several years focused on cloud transformation, helping clients overcome challenges with traditional IT landscapes, but observed staggering gaps in the practical application of security. I shifted my focus to cloud security a few years ago with the intent of helping people avoid dangerous situations, breaches, and undue hardships. Women tend to be natural-born leaders, innovators, and effective problem solvers. We are fortunate in this era of technology to have admirable female STEM leaders who broke the ground we continue to pave. It took about 20 years for Hedy’s solution to be brought forward; I’m hopeful my path will be significantly shorter.

Grace Hopper, Computer Scientist and United States Navy Rear Admiral
Lisa Wood, VP, Engineering

COBOL was the first programming language I learned, and my first job out of college was as a COBOL programmer, working on financial systems for the military. I can directly trace my career start and trajectory to a woman who broke glass ceilings in STEM, the military, and business – Grace Hopper. Besides leading the development of the COBOL programming language, Grace Hopper was an early pioneer of compilers, was on the team that created UNIVAC, the first commercial electronic digital computer in the United States, developed early standards for testing software, and served in the US Naval Reserve and then the US Navy, where she rose to the rank of Rear Admiral. Retiring from the Navy at age 79, she went on to work in the private sector, advocating for the use of computers to improve the lives of their users, until her death at age 85.

Throughout her life Grace Hopper demonstrated curiosity, experimentation, tenacity, and commitment to completing a mission. She is credited with quotes we use often in technology and business:
- “It is often easier to ask for forgiveness than to ask for permission.”
- “The most damaging phrase in the language is: ‘It’s always been done that way.’”
- “A ship in port is safe, but that’s not what ships are built for.”
Grace Hopper’s amazing career and life is one that we, as women in technology, can point to as forging the path and exemplifying what we can accomplish and the impact we can make.

The ENIAC Computer Programmers
Merry Beekman, Sr. Director, Marketing

The Secret History of the ENIAC Women is a documentary about the six remarkable women who programmed ENIAC, the first digital computer. Jean Bartik, Fran Bilas, Ruth Lichterman, Kay McNulty, Betty Snyder and Marlyn Wescoff were among those recruited by the Army for their high-level mathematics ability to manually calculate ballistics trajectories during World War II. To accelerate the calculations, the Army commissioned ENIAC, and these women were selected as “human computers” to program it. There was no instruction manual, but working together they broke the calculations into steps the computer could handle. Figuring out the timing of each panel and setting over 3,000 switches, they devised paneling sheets to keep track of it all. They created computer programming. In the history books these women, although portrayed in photos programming the ENIAC, were never recognized for their contributions. That is, until a curious Harvard undergraduate named Kathy Kleiman made it her mission to make it right. “The ENIAC Programmers inspired me to stay in computing at a time when every other signal in society was urging me to turn away. It is my great hope that their story will throw open the doors of computing to all!” said Kleiman.

What is remarkable to me is not only Kathy Kleiman’s passion and tenacity but, even more so, how the women worked together as a team to write the book on computer programming. I imagine this was an extremely stressful time, and they showed adaptability and focus, supporting each other and appreciating the skills each brought to the project. They gave their all, working tirelessly to check the wiring, adjust the dials and verify the results manually to achieve the most precise calculations.

We can learn so much from these amazing STEM leaders, women who boldly charted new terrain, solved seemingly insurmountable problems, and dedicated themselves to addressing some of society’s greatest challenges. As today’s women in STEM, we are humbly grateful to count ourselves among these leaders as we contribute in our own unique ways to building a path for future generations.
As we have mentioned before, Internet security issues have been on the rise lately, and malicious hackers are making it very difficult to keep people safe from web attacks. The latest in cyber security news brings a piece of malware that could potentially ruin lives: it is called “BlackShades”.

BlackShades is a tool used by hackers to take complete control of your computer, just as if they were sitting in front of it. The difference with BlackShades is that it can actually be bought, for the low price of $40! BlackShades is similar to the remote desktop support applications that IT departments use. However, BlackShades can do more than traditional remote desktop applications; for example, it can access devices on the victim’s computer, such as the webcam. This is a definite invasion of one’s privacy. At the same time, BlackShades can track keystrokes as the victim is typing on the computer. With this, the attacker could gain easy access to personal information, because the victim’s password was recorded as they were typing it.

How does one get infected with BlackShades? The same way as with any other virus. The victim might receive an email from someone with an attachment in the message. Most of the time the attachment is a .zip file – a compressed file. However, inside the .zip file is a Windows program, an executable. The victim opens the file, and then it happens: the computer gets infected. The victim will not see anything happening on the screen, because the backdoor is a hidden application, running in the background. Once the computer is infected, the victim may not even suspect the infection, as BlackShades is hard to detect. Some symptoms of BlackShades are:
- Your cursor moves erratically without you touching it, or your monitor turns off during use
- The webcam “in use” light turns on when the camera is not in use. If a Skype call is not in progress, the webcam light should remain off
- Usernames and passwords for online accounts have been compromised
- Computer files become encrypted without warning and ask for a password when you attempt to open them

To prevent infection by the BlackShades backdoor, avoid opening emails from known and unknown people if they have .zip or .exe attachments. Avoid links from suspicious accounts on social media as well, such as Twitter and Facebook; it is very easy to disguise a link and direct it to download the virus. Finally, make sure your antivirus is updated, and ensure the antivirus subscription is paid for if you use applications such as Kaspersky or ESET. If you need more information about BlackShades and Internet security in general, feel free to contact Group 4 Networks.
Falls Church is an independent city in the state of Virginia. It has an estimated population of 14,600 and is in the Washington, DC metropolitan area. Named for the historic Episcopal church located in the area, Falls Church became a town in 1875. At just over 2 square miles, it is the smallest town in the state of Virginia and the smallest county-equivalent municipality in the entire United States.

The namesake of the town, the historic Falls Church, was built at the intersection of a number of important Native American trails, which were later paved over once the area was taken over by European settlers. The original governing entity in the region was the Iroquois Confederacy, an indigenous confederacy comprising five nations. After exploration by Captain John Smith, the region was settled by English colonists in the late 1600s.

Near areas such as Arlington, Alexandria, and Bethesda, Falls Church is a wonderful place to work, live, or visit. With a rich history, proximity to all kinds of entertainment, and many cultural touchstones, the region is a diverse area with a strong economy, filled with things to learn and experience. With cold winters and warm summers, the region experiences all four seasons and draws visitors all year round.
It was 10 years ago that the Internet Engineering Task Force (IETF) released the Request For Comments (RFC) 3514 “The Security Flag in the IPv4 Header” authored by Steve Bellovin. This RFC brought to the Internet community what could have been the security silver bullet. What do you mean? Well, due to the fact that security devices like firewalls, intrusion detection systems, proxies and others have a hard time trying to determine if a packet has malicious intent or is rather normal. Steve Bellovin came up with the idea of creating the Evil bit, taking advantage of the unused high-order bit of the IP Flags field. Very simple mechanism! Consider this: benign packets should have the Evil Bit set to 0 and those that have malicious intent will have the Evil Bit set to 1. How does it work? When using offensive tools or crafting packets with malicious intent. The software or the attacker must set the Evil bit. For example fragments that are dangerous must have the Evil bit set. When executing a port scanning if the intent is malicious the Evil bit should be set. When sending an exploit via Metasploit the Evil bit should be set and the list goes on. On the other hand if the packets don’t have malicious intent the bit should not be set. How should the security systems process such packets? When processing packets, devices such as firewall should check the Evil Bit. If it is set they must drop all packets. If the Evil bit if off the packets must not be dropped. Wonderful idea, but for those who don’t know the RFC was released on the April Fools’ Day. The Evil bit RFC was published on 1st April of 2003. Like many others, this has been another humorous RFC. Humorous Request for Comments have been around for quite some time and is a good read if you have time and want to laugh. Apart of the Evil bit one that is really hilarious is the RFC 5841 which proposes a TCP option to denote packet mood. For example happy packets which are happy because they received their ACK return packet within less than 10ms. Or the Sad Packets which are sad because they faced retransmission rates greater than 20% of all packets sent in a session. If you want to read more the Wikipedia as its complete list here or the book “The Complete April Fools’ Day RFC“. Humor apart and for the sake of curiosity you could try to determine if any system process or reply to such packets. I used Scapy which is a powerful packet crafting and manipulation tool. It is written in python and let’s see how could we generate a TCP Syn packet with the Evil Bit set. Before creating the packet lets just refresh our knowledge about the IP Flags field. In the IP header there 3 bits used for flags and according to the RFC 791: Bit 0: reserved, must be zero Bit 1: (DF) 0 = May Fragment, 1 = Don’t Fragment. Bit 2: (MF) 0 = Last Fragment, 1 = More Fragments. The normal combinations used with Fragmentation flags are shown in the following table: In our case we want to generate a packet that has the highest order bit of the FlaView Postgs field set i.e. Evil Bit. Which according to the RFC is reserved and must be set zero. However, we will use Scapy to craft a packet that has the Evil bit set with a fragment offset of zero and send it trough the wire and capture it using tcpdump. from scapy.all import * ip=IP(src="192.168.1.121", dst="192.168.1.2", flags=4, frag=0) tcpsyn=TCP(sport=1500, dport=80, flags="S", seq=4096) # python myevilpacket.py I will leave the Scapy explanation for another post but would like to briefly mention the usage of flags=4. 
As you can see in the IPv4 header image, the IP Flags field uses 3 bits. These 3 bits are the highest bits in the 6th byte of the IP header. To set the Evil bit we need to set the value to 100 in binary, which is 4 in decimal. The following picture illustrates the packet that was captured using tcpdump when the myevilpacket.py script was invoked; you can see the Evil bit on.
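If you would rather verify it programmatically than read tcpdump output, Scapy can sniff the packet back as well. The following is a minimal sketch, not from the original post; the BPF filter and packet count are arbitrary choices, and it needs the same root privileges as the sender:

```python
from scapy.all import sniff, IP

def report_evil(pkt):
    # The IP flags form a 3-bit field; 0b100 (value 4) is the reserved/Evil bit
    if IP in pkt and int(pkt[IP].flags) & 4:
        print("Evil bit set:", pkt.summary())

# Inspect the next 10 IP packets seen on the default interface
sniff(filter="ip", prn=report_evil, count=10)
```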
Increased amyloid plaque leads to increased cognitive decline, according to Center for Vital Longevity scientists. A new study from the Center for Vital Longevity at The University of Texas at Dallas has found that the amount of amyloid plaques in a person's brain predicts the rate at which his or her cognition will decline in the next four years. The study, published in JAMA Neurology, used positron emission tomography (PET) scans to detect amyloid in 184 healthy middle-aged and older adults participating in the Dallas Lifespan Brain Study. Amyloid plaques, a sticky buildup that gradually gathers outside of neurons and is a hallmark of Alzheimer's disease, are believed to start accumulating in the brain 10 to 20 years before the onset of dementia.

"We think it is critical to examine middle-aged adults to detect the earliest possible signs of Alzheimer's disease, because it is becoming increasingly clear that early intervention will be the key to eventually preventing Alzheimer's disease," said Michelle Farrell, a PhD student at the center and the lead author of the study.

The study presents some of the first data on amyloid and its cognitive consequences in adults ages 40 to 59. For these middle-aged adults, the study found that higher amyloid amounts were associated with declines in vocabulary, an area of cognition that is generally preserved as people age. The results suggest that a new approach might be needed to provide physicians and patients with information about the future for someone with amyloid deposits. Amyloid PET scan results are typically presented as either positive or negative, but the new findings suggest that the amount of amyloid in the brain provides useful prognostic information about how rapidly cognition may decline in the future.

"Our understanding of the earliest and silent phase of possible Alzheimer's disease is increasing rapidly. Providing physicians and patients with more information about the magnitude of amyloid deposits will provide valuable information that will permit better planning for the future," said Dr. Denise Park, director of research at the Center for Vital Longevity, Distinguished University Chair in Behavioral and Brain Sciences and senior author of the study. Park heads up the Dallas Lifespan Brain Study, which is a multi-year research project aimed at understanding what a healthy brain looks like and how it functions at every decade of life from age 20 through 90. Each of the nearly 500 volunteers in the study undergoes tests every four years. While most studies of amyloid and its relationship to Alzheimer's disease have focused on older adults over age 60, the Dallas Lifespan Brain Study also studies middle-aged adults to find the earliest possible signs of Alzheimer's disease.

In the JAMA Neurology research, the three middle-aged adults who had the highest amyloid amounts and greatest vocabulary decline were also found to have a double dose of the ApoE-4 gene implicated in Alzheimer's. This means they received a copy of the gene from each of their parents. Only about 4 percent of the population carries this genetic combination, and the finding hints at the possibility that subtle symptoms of cognitive decline related to amyloid may be detectable as early as middle age in this vulnerable population.

Original Research: Full open access research for "Association of Longitudinal Cognitive Decline With Amyloid Burden in Middle-aged and Older Adults: Evidence for a Dose-Response Relationship" by Michelle E. Farrell, BA; Kristen M.
Kennedy, PhD; Karen M. Rodrigue, PhD; Gagan Wig, PhD; Gérard N. Bischof, PhD; Jennifer R. Rieck, PhD; Xi Chen, MS; Sara B. Festini, PhD; Michael D. Devous Sr, PhD; and Denise C. Park, PhD in JAMA Neurology. Published online May 30 2017 doi:10.1001/jamaneurol.2017.0892
The UNION command in SQL is meant to join two queries, allowing one query to pull the results of another, as long as the number of columns lines up between the two tables. In SQL injection, the purpose is to forge part of the query so it pulls data from somewhere else.

SELECT ID, Account, Password FROM Users WHERE uname = $uname

This is a basic query. If we want to gather info from another table, say SSNs, we can inject something like this:

SELECT ID, Account, Password FROM Users WHERE uname = 1 UNION ALL SELECT SSN,1,1 FROM SSNTable

As you can see, the columns match up. In the original query there were 3 columns (ID, Account, Password) and in the malicious query it's SSN,1,1. The 1,1 won't return anything useful, but the SSN parameter should if that column exists in that table.

In Mutillidae, this can be exploited. At the user lookup page, we enter the username as a query:

' union select null --

And we get a syntax error. As you can see, the error message is that the SELECT statements have a different number of columns, so this is now a matter of entering enough nulls so the columns align. The magic number was 7. The query looked like this:

' union select null,null,null,null,null,null,null --

Next is fuzzing to see how the columns line up. Just because there are 7 columns doesn't mean they will all appear; as you can see, only three show up. Replacing null with 1 should show how the columns align. Replacing the first null with 1 didn't show up, so let's try the second. There it is! So now what? Well, there's a lot we can pull:

@@version : Version of the DB
UUID() : System UUID key
system_user() : Current system user
@@GLOBAL.have_symlink : Check if symlink support is enabled or disabled
@@GLOBAL.have_ssl : Check if SSL is available or not

If we enter @@version instead of 1 we get the version of the DB, and so on.
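Counting columns by hand gets tedious. As a rough automation sketch of the same null-padding trick (the URL, parameter names, and error string below are assumptions for a hypothetical Mutillidae install, so adjust them for your target):

```python
import requests

# Hypothetical Mutillidae user-lookup endpoint
URL = "http://target/mutillidae/index.php"

def find_column_count(max_cols=20):
    for n in range(1, max_cols + 1):
        nulls = ",".join(["null"] * n)
        payload = f"' union select {nulls} -- "
        r = requests.get(URL, params={"page": "user-info.php",
                                      "username": payload})
        # When the column-count error disappears, n is the magic number
        if "different number of columns" not in r.text:
            return n
    return None

print("Column count:", find_column_count())
```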
The Right to Privacy Act, Financial Institutions and Records

The Right to Financial Privacy Act, or the RFPA for short, is a federal privacy law that was passed by the U.S. government in 1978 and went into effect in March of 1979. Much like the Privacy Act of 1974 and the 2014 FISMA Law, the RFPA established the specific regulations and procedures that U.S. federal government agents and employees must follow when looking to obtain personal information from a financial institution in relation to a consumer's financial documents or records. Moreover, the RFPA also outlines the regulations that financial institutions must abide by when providing the financial information of consumers to federal government authorities at their request. Furthermore, the RFPA also mandates the types of information that must be provided to consumers in instances in which the federal government seeks to access their financial information.

Why was the Right to Privacy Act needed?

Before the passing of the RFPA in 1978, American citizens were not entitled to be notified when their financial information or records were turned over to government authorities, and had no right to challenge such government access when it occurred. However, this all changed with the landmark case United States v. Miller (425 U.S. 435 (1976)). The case of United States v. Miller hinged upon the federal government's use of financial information and records in the context of criminal investigations, without providing notice to the American citizens whose records were accessed. More specifically, Mitch Miller of Georgia was being investigated by the Alcohol, Tobacco, and Firearms Bureau (ATF), as well as the U.S. Treasury Department, in relation to an undocumented whiskey distillery that he had allegedly been running. During the course of the ATF and U.S. Treasury Department's joint investigation, the federal agencies requested access to Miller's bank account information and transaction history on the basis of a grand jury subpoena. Miller challenged the legality of such a request, and the case made it all the way to the U.S. Supreme Court, with the Court ultimately ruling that "the records belong to the institution rather than the customer; therefore, the customer has no protectable legal interest in the bank's records and cannot limit government access to those records". As a result of this legal situation and decision, the RFPA was passed to regulate similar situations that might occur in the future.

What are the requirements that federal agencies must follow when accessing a consumer's financial records?

Under the RFPA, federal agencies looking to access copies of a consumer's financial records from a financial institution must satisfy one of the following conditions:
- Obtain an authorization, signed and dated by the customer, that identifies the records, the reasons the records are being requested, and the customer's rights under the act.
- Obtain an administrative summons or subpoena.
- Obtain a search warrant.
- Obtain a judicial subpoena.
- Obtain a formal written request from another government agency. This condition is only deemed valid if no other administrative summons or subpoena authority is available at the time of the request.

What's more, a financial institution is prohibited from releasing a consumer's financial information or records until the applicable government agency provides written certification confirming that it has complied with the relevant provisions of the RFPA.
Additionally, financial institutions must also maintain detailed records of all instances in which a consumer's financial information or records have been disclosed to a particular government agency in accordance with the consent and authorization of said consumer. These records must include "the date, the name of the government authority, and an identification of the records disclosed". Consumers are also afforded the right to inspect these records under the provisions of the RFPA.

While the RFPA generally protects the financial information and records of American citizens in the context of government access or disclosure, there are certain exceptions to the law. For example, one exception covers instances in which a consumer's financial information is "Requested by a government authority subject to a lawsuit involving the bank customer (the records may be obtained under the Federal Rules of Civil and Criminal Procedure)". Another exception covers instances in which financial records are "Requested by the Government Accountability Office for an authorized proceeding, investigation, examination, or audit directed at a federal agency".

What are the penalties for non-compliance under the RFPA?

Under the RFPA, American citizens are entitled to collect civil liabilities from government agencies that fail to comply with the law. "These penalties include (1) actual damages, (2) $100, regardless of the volume of records involved, (3) court costs and reasonable attorney's fees, and (4) such punitive damages as the court may allow for willful or intentional violations". Alternatively, "A financial institution that relies in good faith on a federal agency's certification may not be held liable to a customer for the disclosure of financial records". Under the RFPA, consumers are entitled to bring legal action against applicable parties up to three years after the date of the violation, or the date on which the violation in question was discovered.

The passing of the RFPA in 1978 was a turning point in American history as it pertains to both legalities and personal privacy. Prior to the passing of the RFPA, government agencies had the authority and jurisdiction to access the financial records of American citizens at their own discretion. While such power and authority were deemed acceptable at previous points in U.S. history, a government agency having such power in our current digital age would undoubtedly be troublesome to many. To this end, American citizens can rest assured that the federal government will not be able to access their financial records and information without first obtaining their consent, or in the absence of a justifiable reason under the law.
Top Use Cases of AR Based Destination Navigator App

by Smitesh Singh, on Nov 23, 2021

Digital navigation has revolutionized the way people travel. While outdoor navigation has been around for a while, indoor navigation has posed some challenges, as satellites cannot track indoor areas. For indoor navigation, AR based apps have been the holy grail. Augmented reality can be used as an indoor navigation technology, and it can provide turn-by-turn directions to locations or objects where GPS and other technologies cannot work accurately. In this blog, we will take a look at how an AR based navigation app can help retailers and enterprises leverage various use cases to improve their CX.

Use cases of augmented reality navigation

City navigation: The simplest and most obvious case is when a person finds himself in an unfamiliar area and tries to build a route to his final destination. Such apps based on AR maps meet the needs of both pedestrians and drivers; a user just needs to clarify the trip format in advance: by car, public transport, on foot, etc.

Military & Emergency: The military-industrial complex is always one step ahead of everyone, and it is no different when it comes to creative uses of augmented reality for routing. There are many former and frozen war zones littered with mines and other dangerous things that were left undetonated or unspotted. Augmented reality GPS-based routing of safe routes over minefields or other dangerous places can be a solution for civilians who are still living in war zones.

Shopping and entertainment centers and malls: Customers visiting a shopping and entertainment center can orient themselves more easily if they have an AR navigation app which suggests to them what is located where. More specifically, inside departmental stores, customers can use AR based navigation to get to the aisles of products they are looking for.

Museums, exhibition halls: Today, in order to study the history of the creation of a certain object of art, there is no need to resort to the help of a guide or background material. Modern Android and iPhone augmented reality apps are able to determine the user's position, guide him to certain spots, and provide information on the object at which he points his smartphone's camera.

Industrial facilities and educational institutions: It works in a similar way: augmented reality GPS apps help users better navigate a particular institution, whether it is a factory or a university complex.

Logistics: Augmented reality navigation features can improve logistics in various industries when it comes to visually guided navigation along a route. GPS navigation apps are also widely used in unforeseen cases such as emergency incidents: they can help make an SOS call if something bad happens to the user, such as a car accident or theft, and guide the helper to the incident location.

Advertising: Using augmented reality experiences, retailers can drive more customers. For example, AR can flash advertisements when customers point their smartphone camera at a store.

Route Creation for Car Transportation: Another field where AR routing can be effectively implemented is the transportation industry. There are numerous sci-fi works that describe smart screens and head-up displays with lots of data and stats.
The biggest car manufacturers are trying to implement smart screen solutions for the vehicle's windscreen, so that it would include an interface with all the critical stats, such as weather details and road conditions, right before the driver's eyes. Combined with automated speed control, this could revolutionize the whole concept of transportation.

Healthcare: The healthcare system was one of the pioneers in implementing AR and VR solutions, especially for educational purposes. First and foremost, routing can be used to find the closest hospital and guide users to it. While its practicality is still limited, it is an option that can potentially save someone's life, and that is always important. On the other hand, routing can be used by patients to navigate large hospital structures in order to find where their doctors are situated or how to get to the place where their procedure will take place.

Navigation and routing seem to be the easiest way of implementing AR solutions without disrupting the natural state of things, i.e. the established business models. They can also greatly expand the customer experience and open up new possibilities within a well-developed field. The process of creating these solutions isn't without its challenges, but getting the right guidance from an AR app development company can get you where you want to be.
E-learning is a continually growing trend and topic of discussion. In 2019, the global e-learning market was estimated at around $200 billion, and it is expected to increase to $370 billion by 2026. Many businesses and educational facilities are including e-learning within their business operations — especially due to COVID-19 — so it becomes increasingly important to understand e-learning and its importance in order to implement it successfully.

What Is E-learning?

E-learning — or electronic learning — is the acquisition of knowledge that is achieved through varying electronic technologies or digital media. Although many automatically associate e-learning with formal classroom learning (K-12, or post-secondary education), "e-learning" is a catchall term for any method of learning that is delivered electronically. E-learning can be used in a formal academic setting, but it is also used by organizations to conduct business/educate clients or uptrain employees. It is generally conducted online so that learners can access all required learning materials at any time or place that has an internet connection. Similar to most processes, there are advantages and disadvantages associated with such methodologies.

Advantages include:
- Accessibility: E-learning allows both educators and learners to access their learning materials at any time or any place. E-learning can help individuals in rural areas access different learning opportunities and it creates a sense of flexibility for individuals with complex schedules that they may not have had prior;
- Saves time: When you can learn/teach online in the convenience of your own home, you can reduce any time spent commuting to a physical classroom or learning center;
- Offers personalized learning/teaching: E-learning is generally flexible, and by controlling your own learning/teaching path and study habits, you can create a personalized learning/teaching experience that works best for you;
- Environmentally-friendly: The conventional paper and pencil classroom creates excessive waste. By switching to e-learning, you can take advantage of paperless learning. When you learn from home, you can also significantly reduce transportation-related emissions by eliminating a commute;
- Cost-effective: E-learning generally reduces the number of physical education supplies (e.g. paper, pens, pencils, etc.), the need for a physical classroom, and transportation costs.

Disadvantages include:
- Lack of social exposure: Social interaction is critical for mental and physical health and e-learning significantly reduces the number of social exposure opportunities, though it does not eliminate them;
- Technology issues: Technology is prone to failure, and the more e-learners/educators that there are, the higher the chance of technical issues somewhere throughout the process;
- Accessibility: Not all learners have immediate (or any) access to technology or a fast/stable internet connection;
- Work authenticity: When you are not being observed in a classroom, it can be difficult to proctor the authenticity of a learner's work. When you have immediate access to the internet, cheating is almost inevitable;
- Assessments: It may be difficult to properly assess learners. Aside from the prevalence of cheating, technology-based assessments can be less flexible than traditional assessment methods.

Synchronous vs.
Asynchronous E-learning

There are two primary forms of e-learning methodologies:
- Synchronous e-learning: Synchronous e-learning is education that happens in real-time using a virtual platform. This e-learning methodology is most comparable with traditional classroom education. Some common examples of synchronous e-learning methods include video conferences, teleconferences, live chat halls, or live-streamed lectures;
- Asynchronous e-learning: Asynchronous e-learning is education that is facilitated on your own time. Course instructors still provide the necessary task information, educational materials, and assignments/assessments, but the timeline for completing the course requirements is generally more flexible. Some common examples of asynchronous e-learning methods include online discussion boards, virtual libraries, self-guided lessons, and pre-recorded video content or lectures.

Is E-learning Effective?

Since e-learning is relatively new, there are still some concerns regarding its educational effectiveness for both educators and learners. One of the primary concerns is having to rely on imperfect technology. Even though technology can improve a number of areas, there is always a chance of technological downtime, and that downtime results in reduced educational efficiency. Imperfect technology concerns can be managed using IT solutions designed to serve distributed teams and networks, including remote monitoring and management solutions. This can benefit both learners and educators by offering improved control and fewer worries. Another concern regards work-life balance. When your e-learning is home-based, there is an overlap between your work/education and your life, and it can prove difficult to balance the two. Research and implement different ways to achieve work-life balance, like lifestyle changes, unplugging from technology, or taking a vacation from time to time.

Staying Productive While E-learning

Remaining productive while e-learning can be challenging for both educators and learners. Below are some tips for staying productive while e-learning:
- Create a routine and stick to it;
- Create a designated learning area;
- Get rid of any distractions in your learning area (e.g. cellphone, television, video games, instruments, etc.);
- Set deadlines for specific homework or tasks;
- Take breaks from time to time, or between learning activities;
- Interact with other learners or educators whenever possible (e.g. study groups, educator chat forums/pages);
- Know your tech support options and the process for getting help during and after operating hours.

Cybersecurity and E-learning

Even though there are numerous benefits to e-learning, when you are online, there are cybersecurity threats that can arise. Every year there is more and more damage caused by cyber crime in the United States, and if you are not careful when e-learning, you could be at risk of cyberattacks. There are some ways to combat cybersecurity threats — examples include:
- Work with the school's IT department or managed service provider (MSP) to educate teachers and learners on different threats and best cybersecurity practices — if possible, make cybersecurity education an ongoing effort;
- Encrypt any private or sensitive data (e.g.
personal information, financial information);
- Restrict access to certain files, personal information, client data, or even specific sites;
- Keep your technology and software up-to-date;
- Deploy resilient anti-virus software to keep all connected devices and data secure;
- Monitor online activity to receive early detection notifications on abnormal activity and address any concerns;
- Use two-factor authentication whenever possible;
- Provide devices if possible instead of learners/educators using personal devices.

Privacy and E-learning

In order to focus solely on education, many organizations that utilize e-learning take advantage of monitoring practices. Privacy is a large concern for many, and while some school districts are approving monitoring in certain areas, others believe that monitoring students perpetuates inequality and violates their privacy. For others, the shift to online learning creates privacy concerns centered on teachers who choose monitoring products or platforms for ease of use instead of for student privacy preservation.
Cross-Site Request Forgery (CSRF) is an attack that tricks the victim into loading a page that contains a malicious request. The request is malicious in the sense that it inherits the identity and privileges of the victim to perform an undesired function on the victim's behalf, such as changing the victim's e-mail address, home address, or password, or purchasing something. CSRF attacks generally target functions that cause a state change on the server but can also be used to access sensitive data. Browsers usually include automatically with such requests any credentials associated with the site, such as the user's session cookie, basic auth credentials, IP address, Windows domain credentials, etc. Therefore, if the user is currently authenticated to the site, the site will have no way to distinguish a forged request from a legitimate user request. An attacker can make the victim perform actions that they didn't intend to, such as logging out, purchasing an item, changing account information, retrieving account information, or any other function provided by the vulnerable website. A successful CSRF attack can lead to:
- Gaining privileges
- Bypassing protection mechanisms
- Reading application data
- Modifying application data
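To make the mechanics concrete, here is a deliberately vulnerable sketch in Python/Flask. The route name and the update_email helper are illustrative assumptions, not taken from any real application; the point is that the endpoint changes state based on nothing but the session cookie:

```python
from flask import Flask, request, session

app = Flask(__name__)
app.secret_key = "demo-only"  # never hard-code a real secret

def update_email(user, new_address):
    # Stub standing in for a real database write
    print(f"email for {user} changed to {new_address}")

@app.route("/email/change")
def change_email():
    # Authentication relies solely on the session cookie, which the
    # browser attaches automatically -- even to requests triggered
    # by a page on an attacker's site.
    if "user" not in session:
        return "not logged in", 401
    # State change via GET with no anti-CSRF token: a hidden image tag
    # or auto-submitting form anywhere on the web can fire this request.
    update_email(session["user"], request.args.get("new"))
    return "email updated"

if __name__ == "__main__":
    app.run()
```

A common defense is a synchronizer token: a random per-session value embedded in legitimate forms and checked on every state-changing request. A cross-site attacker cannot read that value, so forged requests fail the comparison.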
The primary purpose of a firewall is to apply access and security policies to traffic entering and leaving your networks. Two different firewall services are responsible, depending on the destination of the traffic:
- Host Firewall – The host firewall handles local inbound and outbound traffic. The host firewall runs at the box level.
- Forwarding Firewall – The forwarding firewall service handles traffic passing through the firewall. The forwarding firewall runs as a service on the box.

The host firewall runs on the box layer of every CloudGen Firewall and Control Center and cannot be removed. The host firewall handles connections where the target IP address and port number match a listening socket of a service on the firewall. The boxfw is the system process for the host firewall. In addition to managing local traffic, the boxfw also manages other traffic handlers such as SIP, RPC, Timer, Audit, and Sync. Restarting the boxfw service reinitializes the service handlers and reloads the ruleset. The boxfw service is always running. You can have only one host firewall on a system. Examples of connections that are handled by the host firewall are:
- An incoming connection from a web browser to the HTTP Proxy service.
- An outgoing connection from the HTTP Proxy service running on the firewall to a web server on the Internet.
- Outgoing and incoming VPN traffic from the VPN service to the tunnel endpoint.
- Outgoing NTP or DNS queries.

For more information, see Host Firewall.

The forwarding firewall runs as a service on the box. It handles all traffic that does not match a listening socket on the firewall. You can create one (forwarding) Firewall service on each CloudGen Firewall. This service listens on all IP addresses configured for the box and is responsible for all connections that are transferred over the firewall to a remote host. The access rules for the forwarding firewall are maintained in the forwarding ruleset. The forwarding firewall is tightly integrated with all Application Control features, such as the Virus Scanner, Advanced Threat Protection (ATP), Intrusion Prevention System (IPS), and the URL Filter. Examples of connections that use the forwarding firewall are:
- A web browser that connects to an external web server without using the HTTP Proxy service.
- A ping to an external Linux server.
- Traffic coming out of a VPN tunnel.

For more information, see Forwarding Firewall.
- Only one forwarding firewall service is allowed per CloudGen Firewall.
- The firewall handles only IP protocols. Non-IP traffic, such as Spanning Tree Protocol or IPX/SPX, is not forwarded.
AI is usually seen as a potential gamechanger for keeping a network secure, but could the technology be used as an attacking tool as well? Neustar's latest International Cyber Benchmark Index found that many security professionals agree that AI could pose several major problems. More than eight in ten (82 per cent) worry about hackers using AI against their company. They worry this might result in them losing data (50 per cent), how it might affect customer trust, whether it will hurt business performance, and how much such an attack would cost them.

"Artificial intelligence has been a major topic of discussion in recent times – with good reason," said Rodney Joffe, Head of NISC and Neustar senior vice president and fellow. "There is immense opportunity available, but as we've seen today with this data, we're at a crossroads. Organisations know the benefits, but they are also aware that today's attackers have unique capabilities to cause destruction with that same technology. As a result, they've come to a point where they're unsure if AI is a friend or foe."

"What we do know is that IT leaders are confident in AI's ability to make a significant difference in their defences," added Joffe. "So what's needed now is for security teams to prioritise education around AI, not only to ensure that the most efficient security strategies have been implemented, but to give organisations the opportunity to embrace - and not fear - this technology."

Image Credit: PHOTOCREO Michal Bednarek / Shutterstock
(This article is part of our ITIL v3 Guide.)

What is capacity management?

ITIL capacity management is responsible for ensuring that adequate capacity is available at all times to meet the agreed needs of the business in a cost-effective manner. The capacity management process works closely with service level management to ensure that the business' requirements for capacity and performance can be met. Capacity management also serves as a focal point for any capacity issues in IT Service Management. Capacity management supports the service desk and incident and problem management in the resolution of incidents and problems related to capacity.

Successful capacity management requires a thorough understanding of how business demand influences demand for services, and how service demand influences demand on components. This is reflected by the three subprocesses of capacity management: business capacity management, service capacity management, and component capacity management.

Capacity management is required to develop a capacity plan, which addresses both current capacity and performance issues and future requirements. The capacity plan should be used throughout IT Service Management for planning and budgeting purposes.

Capacity management is responsible for defining the metrics to be captured during service operation to measure performance and use of capacity. This includes monitoring tools, which can provide input to the event management process. Capacity management may be called upon to perform tactical demand management, which involves using techniques such as differential charging to change users' behavior so that demand does not exceed supply. Other activities of capacity management include sizing (working with developers to understand capacity requirements of new services) and modeling (building statistical representations of systems).

Capacity management definitions

Before implementing capacity management, it's important that everyone is on the same page. One way for an organization to accomplish this is to learn and own the definitions. Capacity management introduces new ideas and terms that should be discussed before they are implemented, including component, capacity plan, capacity report, capacity management information system, and performance.

A component is the underlying structure behind a service. For example, it is the database behind the application or the server underneath the website. It is a component that must be purchased, built, maintained, and monitored. Improving performance often involves a replacement, upgrade, or load balancing of the individual component.

The capacity plan contains different scenarios for predicted business demand and offers costed options for delivering the service-level targets as specified. This plan allows service designers to make the best choices about how to provide quality service at an affordable price point.

The capacity report is a document that provides other IT management with data regarding service and resource usage and performance. This is used to help other managers make service-level decisions or decisions regarding individual components.

The capacity management information system (CMIS) is the virtual repository used to store capacity data. Dashboards are one way to store and report on capacity data.

Performance is how quickly a system responds to requests. For example, how quickly an application processes data and returns a new screen is one indicator of its performance.
The purpose of capacity management

The purpose of capacity management is to determine how much capacity should be provided based on the information from demand management regarding what should be provided. In particular, capacity management is concerned with speed and efficiency. If IT capacity forecasts are accurate and the amount of IT capacity in place meets business needs, the capacity management process is a success.

Capacity management activities

This process involves constant measurement, modeling, management, and reporting. More specifically, these activities include:
- Designing a service so that it meets service-level agreement (SLA) objectives once implemented
- Managing resource performance so that services meet SLA objectives
- Assisting with the diagnosis of performance-related incidents and problems
- Creating and maintaining a capacity plan that aligns with the organization's budget cycle, paying particular attention to costs against resources and supply versus demand
- Continually reviewing current service capacity and service performance
- Gathering and assessing data regarding service usage, and documenting new requirements as necessary
- Guiding the implementation of changes related to capacity

In practice, implementing this from scratch would involve the same steps as for other projects. For example, implementation might follow these broad steps:
- Gather the data. Work with the business to determine the service-level need. Determine what this means relative to service availability and service capacity. Identify the individual components necessary. Work with demand management resources to predict demand based on user roles. Work with the financial management team to determine the costs.
- Design a service and reach agreement. Once you've identified the services and the level of performance needed, the cost, and the expected demand, you'll be able to work with ITIL service level management to build an SLA that everyone can agree to. You will also have designed a service at this point.
- Build the service. The next step is to build the service. This involves purchasing the components and building the IT infrastructure, processes, and documentation necessary to support the new service(s). Capacity management should continue to monitor the business needs and any new data to ensure that the service being built will have the necessary capacity for quality performance. Financial management will be involved at this stage to facilitate purchasing of components and other resources. Once you have built the service, and everyone agrees it will meet demand, capacity, and availability requirements, it's go-live time. This is when service operation takes over.
- Gather the data. Monitoring and managing services and their individual components is most easily done via monitoring dashboards that provide data on multiple components in one location. Gathering the data manually from each service or component adds to the total time it takes to produce service-capacity reports.

Capacity management processes

This process is built on several sub-processes, including business capacity management, service capacity management, component capacity management, and capacity management reporting. These processes share common activities, such as modeling, workload management, analysis, and optimization.

Business capacity management is the sub-process that turns the needs of the business into IT service requirements.
It is involved in service strategy and service design, reviewing the data to ensure that there will not be any changes in demand before the IT service is implemented. This sub-process works with demand management to ensure that the service is meeting business needs. Other sub-processes make sure that the service meets service-level targets; this sub-process ensures that the service-level targets meet the business needs. A thorough understanding of the business and the service-level agreements is necessary to effectively perform the activities in this sub-process.

Service capacity management is the sub-process that focuses on the operation of the service. Unlike component capacity management, this process focuses solely on the service itself. It ensures that the end-to-end service provided meets agreed-upon service-level targets. For example, this process would monitor, control, and predict a ticketing system to ensure it was up and running efficiently.

Component capacity management focuses on the technology that provides the performance and capacity to the IT service. Components are things like hard disks, phones, and databases. This sub-process requires knowledge of how each component individually contributes to service performance. It manages, controls, and predicts the performance, usage, and capacity of individual components rather than the service as a whole (as seen in service capacity management). The goal of this sub-process is to reduce the total amount of service downtime by monitoring current performance and predicting future performance. Component capacities are designed around service capacities and not the other way around.

Capacity management reporting is the final sub-process. It gathers and then provides the other stages with the data related to service capacity, service usage, and service performance. The output of this sub-process is the service capacity report.

Capacity management and other ITIL processes

Capacity management must interface with other processes within ITIL, including demand management, availability management, service-level management, and financial management. When the business has a service need, it comes from demand management. It is then relayed to the capacity management team, which translates it into an SLA and capacity terms; service-level management helps with this. Once the service is deployed, service capacity management and component capacity management come in to keep everything at peak performance. Availability management works hand-in-hand with capacity management to keep services running and prevent downtime. Financial management comes into play when individual components must be estimated, purchased, maintained, and replaced. Not working closely with financial management can result in either untimely drops in uptime or organizational budget losses.

ITIL capacity management is an important process. With it, your organization can save costs by having the data necessary to make decisions regarding service performance. Rather than being based on gut decisions and guesses, you can use gathered component data to make business cases that win over financial management. What's more, this process can identify where performance tuning is a better choice than upgrading, thereby saving the organization money. Other barriers, such as performance bottlenecks and early indicators of performance issues, are identified before they become problems. This maintains uptime and increases customer and end-user satisfaction.
It's that time of year when people dress up and kids over-indulge in candy. Yes, Halloween is upon us! Speaking of which, are you still looking for that last-minute perfect costume? There are plenty of online shops where you can select an outfit that will scare your friends half to death. To stay with our theme, today we are going to dissect a drive-by download that happened while browsing a Halloween online store. This legitimate website suffered a malicious code injection, something very common if you are not running the latest version of your favorite CMS software or are using weak passwords.

Malicious code injection

The malicious snippet was injected right before the page's closing tag and completely on its own (not part of the surrounding script code). This is the work of automated shells looking for files to infect and placing the code at a specific spot. Bad guys love to use rotating DNS providers to play the cat-and-mouse game with security companies that try to shut them down.

Trick or treat

The next stage of this compromise is a little bit more obscure. What we have here looks slightly suspicious but does not ring a bell. To better understand what it does, we need to organize the code into sections:

(1) This is a string made of various characters which is defined as a variable called str. The ".split('')" method splits that string into an array of substrings where each character is an element of that array.

(2) Another variable (ln) is initialized as an empty string.

(3) This is a classic for loop with an embedded if command. The loop is defined by a start and an end value. It starts at 0 and iterates by an increment of 1 until it reaches a maximum value defined by the length of the str variable.

(4) Now onto the if statement. The condition is defined as i%2 == 0, an expression that uses an arithmetic operator known as modulus (division remainder). You can try different values using an online script editor. If you play with it you will notice the following pattern: even numbers return a value of 0 while odd numbers return a value of 1.

(5) Each time the condition in (4) is met (the value is equal to 0), a new character is appended to the ln variable declared in (2), through a concatenate command.

(6) Finally, the document.write(ln) command writes the output of the ln variable.

Show your face, you scary monster!

Now that we know the logic, we can print every second character from that mysterious string to reveal the code behind it. If we put all the red characters together we finally obtain this:

Now, the real motive behind this code injection is clear: to redirect to an exploit kit landing page! The pattern in the URL matches that of the Neutrino exploit kit. Since Blackhole's fallout, this kit has been very active. The reconstructed attack can also be seen in this Fiddler capture:

(1) The initial compromised website launches a silent call to an external URL.
(2) The dynamically created URL is hosting a script that launches an iframe.
(3) The exploit kit landing page fingerprints the user's system (plugin detect).
(4) A Java exploit is launched.
(5) A malicious binary is dropped.

If you are running a vulnerable version of Java, a malicious executable will be pulled down and run on your system. Malware authors love writing sneaky little pieces of code to hide. They leverage popular scripting languages that offer them a million ways to wrap their intended payload such that it appears benign.

Happy Halloween! (and no treats for you, bad guys!!)
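P.S. For the curious, the decoding trick is easy to reproduce. Below is a minimal Python sketch of the same even-index extraction, using a made-up example string rather than the actual injected payload:

```python
# Made-up example: every second character is junk padding,
# mirroring the obfuscation seen in the injected script.
encoded = "hXeTlZlYoQ"

decoded = ""
for i in range(len(encoded)):
    if i % 2 == 0:        # keep only characters at even positions
        decoded += encoded[i]

print(decoded)  # -> "hello"
```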
4 Benefits of a Mesh Network

Here's how a meshed WAN can make your business more efficient.

Published March 7, 2017 by Bob Bally

In a fully digital work environment, with many employees working from afar, secure, reliable access to information is of the utmost importance. And that requires a safe, strong network. There are three common types of networks — Local Area Network (LAN), Wireless Local Area Network (WLAN), and Wide Area Network (WAN) — and it helps to know the differences between each.
- A LAN is a small computer network that can be used to share certain organizational resources, like files, printers, and databases, and can connect to other LANs through a telephone line.
- A WLAN provides wireless access within a covered area. This is the type of network used in many homes.
- A WAN is useful for organizations operating out of more than one location. This type of network connects multiple LANs through a router and uses MPLS to speed up and prioritize certain types of traffic, like voice or video.

Depending on the size and type of your business, a LAN might be sufficient, but many different types of businesses are logical candidates for a Meshed WAN solution. Organizations that have two or more locations and a need to prioritize segments of their data traffic due to latency and/or performance concerns can benefit most from a Meshed WAN.

How Meshed WAN Works

In a fully meshed network topology, all network devices are connected to one another, establishing multiple routes for information to travel between users and increasing network resiliency. Meshed WAN is the modern implementation of a packet switching network, capable of carrying converged traffic, including Voice over Internet Protocol (VoIP), video, and data. As business networks increasingly carry this mix of traffic, Meshed WAN has emerged as the most efficient way to meet the special delivery needs of each class of data. A Meshed WAN creates a totally private and customized network that is independent of transport medium. Meshed WAN is used as the underlying technology, allowing your Wide Area Network (WAN) to be built on any combination of traditional time division multiplexing (TDM) circuits, Ethernet, or Internet connections.

The top four business benefits of a Meshed WAN solution include:

1. Increased efficiency
A mesh network operates as the single source for acquisition of necessary connectivity. When you partner with a managed IT provider for a Meshed WAN, you can count on support oversight that covers the entire network solution. That means there's no need to call the DSL provider in one town and the T1 provider in the other. Meshed WAN manages everything from start to finish and for all providers. This means you have only one call to make for services and support, and you'll only receive one bill.

2. Reduced leased line costs
Internet access can be provisioned directly from the Meshed WAN network, resulting in substantial leased line savings and unlimited scalability.

3. Quality of Service (QoS) management
The Meshed WAN solution gives you the option to tailor services to your needs through the addition of QoS management. By employing QoS, you can control the Meshed WAN service. This prioritizes your mission-critical traffic, allowing your key applications to operate quickly and efficiently. Network applications, such as voice and video, are time sensitive and benefit immensely from the QoS that Meshed WAN functionality provides.

4. Increased mobility and connectivity
One of the biggest benefits of a mesh network is that offsite workers and satellite branches can seamlessly join your Meshed WAN through a Web-based Virtual Private Network (VPN). This helps keep communication flowing among team members and data secure, so your business can continue to run smoothly regardless of user location. Stay Connected with a Mesh Network Today, a company can run its entire communications infrastructure on one network. Meshed WAN bridges remote locations, helps you manage traffic to realize the most value from your network, and makes new ways of doing business possible. Because your network effectively runs “in the cloud,” you benefit from professional management, advanced security, and unparalleled flexibility. More than ever, distance barriers are shattered when your employees and customers communicate on one network. Could your organization benefit from a Meshed WAN solution? Read our white paper: Enhancing Information Security In An Unsecure World
The OpenSSL Project has released OpenSSL 3.0, a major new stable version of the popular and widely used cryptography library.

What is OpenSSL?

OpenSSL contains an open-source implementation of the SSL and TLS protocols, which provide the ability to secure communications across networks. It is the default encryption engine for popular web, email and chat server software, VPNs, and network appliances, and is used in many popular operating systems (Microsoft Windows, Linux, macOS, BSD, Android…) and client-side software.

What's new in OpenSSL 3.0?

Before OpenSSL 3.0, the last major release of the library was v1.1.1. A migration guide provided by the OpenSSL Project lists the newly introduced changes in detail, but as a short overview, the new release comes with:
- A new license (Apache License v2)
- A new FIPS module (FIPS 140-2 validation of the library is in progress, and the final certificate will likely be issued next year, the developers say)
- A new Provider concept. "Providers collect together and make available algorithm implementations. OpenSSL 3.0 comes with 5 different providers as standard. Over time third parties may distribute additional providers that can be plugged into OpenSSL," the migration guide explains
- A new, "proper" HTTP(S) client
- Support for Linux Kernel TLS
- A variety of new algorithms
- New APIs

"OpenSSL 3.0 is a major release and not fully backwards compatible with the previous release. Most applications that worked with OpenSSL 1.1.1 will still work unchanged and will simply need to be recompiled (although you may see numerous compilation warnings about using deprecated APIs). Some applications may need to make changes to compile and work correctly, and many applications will need to be changed to avoid the deprecation warnings," OpenSSL committer Matt Caswell noted.

"API functions that have been deprecated will eventually be removed from OpenSSL in some future release, so it is recommended that applications be updated to use alternative APIs to avoid these deprecated functions."

The migration guide offers instructions on how to upgrade to OpenSSL 3.0 from versions 1.1.1 and 1.0.2. OpenSSL 3.0 can be downloaded from here.
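One practical pre-upgrade check, as an aside rather than part of the release notes: Python's standard library reports which OpenSSL build the interpreter links against, which makes it easy to confirm what you are actually running before and after the switch to 3.0:

```python
import ssl

# Version string and tuple of the OpenSSL build this Python links against
print(ssl.OPENSSL_VERSION)       # e.g. "OpenSSL 3.0.0 7 sep 2021"
print(ssl.OPENSSL_VERSION_INFO)  # e.g. (3, 0, 0, 0, 0)
```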
New research by Ben-Gurion University (BGU) has highlighted the cyber security and privacy threats posed by drones that are flown over populated areas. Reporting on the findings, Smart Cities World noted that the report recommends additional safeguards be fitted to drones to prevent them from being taken over by "malicious entities" and used for a range of crimes, including cyber crime and terror attacks. The research carried out by the BGU Cyber Security Research Centre explored the issues faced by a number of organisations, ranging from the police and military to governments, and noted that the biggest difficulties come in areas of unrestricted airspace. "In an unrestricted area, we believe that there is a major scientific gap and definite risks that can be exploited by terrorists to launch a cyber attack," Ben Nassi, a researcher at the BGU Cyber Security Research Centre, told the news provider. Among the measures proposed by the researchers are identification of drones in flight, as well as a registration system for drones. This isn't the first time that the cyber security risks of drones have been highlighted. A recent article for Threat Post shared comments from Tony Reeves, a director at consulting and training company Level 7 Expertise, who explained that drones pose threats on a number of levels. For one thing, they are difficult to detect and disable, while they can also be cheap to buy and are not difficult to fly. Mr Reeves revealed that there are "plenty of reports to be found of individuals or organisations building or modifying drones to carry RF-based payloads including wi-fi tracking, capture and access capabilities". The news provider explained that putting a wi-fi access point on top of a building could enable hackers to "listen into data traffic", for example. If you're looking for independent security testing, contact us today.
For those of us in the Northern Hemisphere, autumn will officially begin on September 22nd. However, for many, Labor Day marked summer's unofficial departure. Today, it's rather customary for the first Monday in September to be celebrated with boats, backyards and barbecues. Yet, despite the fun-filled summertime gatherings, Labor Day has historically honored the contribution of America's workforce.

Labor Day: Then and now

The first ever "workingmen's holiday" was commissioned by the Central Labor Union in New York City in 1882. These celebrations quickly became commonplace in many states and large cities across the nation. While the festivities varied, all sought to commemorate the time and effort exerted by the nation's workforce. Just over ten years later, in 1894, Labor Day was established as a federal holiday, delivering the two most highly sought-after observances:

1. Due recognition.
2. Sleeping in.

It's been a century and a few decades since the first Monday in September was earmarked as a tribute to America's workforce. In that time, we've come quite a way in improving working conditions, work/life balance and the benefits of employment. Many of these benefits are indebted to legislation, such as the Fair Labor Standards Act (FLSA) and the Occupational Safety and Health (OSH) Act, which guarantee reasonable working conditions.

Through the years: Digital impact

Yet, the far less grueling labor required in modern jobs is largely brought on by progress in technology. Take a look at how digitization has impacted America's market over the past few decades:

- 1954: General Electric installs the UNIVAC I for payroll processing and manufacturing control programs at its Major Appliance Division plant, marking the first business use of a computer in the U.S.
- 1960: American Airlines implements its Sabre flight reservation system, digitally processing 84,000 phone calls per day along with storing reservations, flight schedules and seat inventory.
- 1968: U.S. libraries adopt Machine Readable Cataloging (MARC).
- 1977: Citibank installs its first ATM. Machines are placed in all its New York branches by the end of the year.
- 1979: Federal Express digitizes the real-time management of people, vehicles, packages and weather scenarios by launching its COSMOS system.
- 1994: Thought to be the first digital transaction, a large pepperoni, mushroom and extra cheese pizza is ordered from Pizza Hut.
- 1996: Digital storage becomes more cost-effective than paper-based storage.
- 2000: The ESIGN Act, together with UETA (1999), assigns the same legal validity to electronic signatures as traditional pen-to-paper signatures.
- 2007: 94% of the world's information storage capacity is digital.

Aiming to automate

Fast forward to 2018 and you'll see many electronic and digital benefits comprising "Industry 4.0," a term coined to define what's now a significantly automated job market. Not all businesses are fully digital. In fact, less than 40% of organizations in the U.S. have half or more of their business processes online. If this describes your business, department or team, you may not be asking why or if you should embrace digital processes… but how to begin…

Start simple… start with eSignature

Aragon Research consolidates its findings on this question into one simple concept: "… a fully digital business process is what consumers want; it's all about ease of use." In other words, your customer, whatever the industry, product or service, yearns for a digital means of doing business with you. Now, we realize we're biased… but why wouldn't they!?
It's convenient… fast… secure… we could go on, but we won't.

It's with this consumer vantage point in mind that Aragon highlights electronic signature as the essential foundation of automating your business processes. In the same publication that ranked Nintex AssureSign® as an "innovator" among competitors, Aragon describes eSignature as the precursor for other advanced measures of digital transaction management, such as asset management or workflow content automation. This is likely due to the foundational and far-reaching functionality of eSign software.

A capable electronic signature platform like Nintex AssureSign® not only renders legally binding signatures; it digitally transforms the entire lifecycle of nearly any transaction, including those requiring electronic payments! Bringing these transactions to your consumer's laptop, tablet and even their iPhone can make or break your go-to-market strategy.

Ready to take labor out of the equation?

This Labor Day, bring digital versatility to your consumers while simultaneously removing the labor from manual, paper-laden processes. Try out Nintex AssureSign® for yourself by requesting a free trial.
One of the more interesting developments taking place in the tech space right now is the emergence of digital twin technology. For those of you asking "What is a digital twin?" right now, allow us to elaborate.

A digital twin, as per Gartner, is "a digital representation of a real-world entity or system". More exactly, digital twin technology provides the ability to create a virtual representation that accurately simulates (hence twin) both the physical components and the dynamics and behavior of how an Internet of Things device performs and functions throughout the entire duration of its lifecycle. This is achieved by collecting and interpreting vast datasets from deployed sensors in order to realize the desired sameness between the real-world element and its duplicate.

Digital twin technology has been in use in the aerospace industry for some time now. However, it has only recently become a major development thanks to the advent of the Internet of Things and Industry 4.0, and a traditional (read: equipment/hardware-oriented) deployment brings a range of benefits. Almost anything you can think of can have a digital twin: an office building, a water filtration system, footwear, your cat, etc. So, the question that naturally arises is: why not the enterprise?

The Digital Twin of the Organization is a concept created by Gartner. Quite simply, it is predicated on using a digital representation of an organization (its business model, strategies, etc.) to better plan and execute a business transformation initiative. The whole idea behind the digital twin concept, and the reason why it is so useful, is that it offers a virtual model that can be analyzed and tweaked more easily than the real thing. The new insights and efficiencies you uncover this way can in turn be used to improve the organization.

Model is the key word here. Models are massless, frictionless, virtually free, reusable and, importantly, they are the lifeblood of enterprise architecture. Thus, EA is by default positioned to play a key part in taking the Digital Twin of the Organization from concept to reality. We have been arguing the importance of a model-based approach to business change for quite some time on this blog; now it seems the future is starting to catch up. Let us have a more detailed look at how exactly EA helps, and offer some examples based on the Bizzdesign suite.

In a recent blog post, we already outlined the power of this combination of structure and data to realize "BI 2.0". The Digital Twin of the Organization builds on that. Having a virtual representation of the organization is the first requisite for a DTO initiative. You need to see your enterprise landscape if you are going to optimize it. This first requirement is fully answered by Enterprise Studio, our collaborative business design platform. Enterprise Studio offers powerful, integrated modeling across multiple disciplines (EA, Business Process Management, Business Model & Strategy and others), as well as all the capabilities needed for seamlessly planning and executing meaningful organizational change. Thanks to its intuitive modeling interface, users can build an accurate representation of the enterprise across all relevant levels: Strategy, Business, Applications, Infrastructure. What's more, it supports capability-based planning, which enforces a business outcome-oriented mindset and helps to focus the company's resources on delivering actual business value.
It is even possible to go beyond the confines of the organization and model the wider ecosystem in which it functions: for instance, suppliers, regulatory constraints, as well as customers and their behavior. Using advanced customer journey maps in Enterprise Studio, you can help unlock the value in your products and services by highlighting areas where customer interaction can be optimized to increase customer satisfaction.

All these features and more are available in an easily navigable environment that lets you move across models and drill down on desired artifacts with ease. This facilitates modeling and ensures a steady level of progress during complex projects such as a Digital Twin of the Organization implementation.

Strategy to Operations Alignment

Of course, having the digital representation of the enterprise is only the first step. Making an organization more effective and efficient is about finding opportunities for improvement and carrying them out without disrupting the business. This is where having a clear model framework to integrate and make sense of operational data comes in really handy. Enterprise Studio features support for an extensive range of standards and frameworks, including ArchiMate, TOGAF, BPMN, NIST 800-53, Open FAIR and many others. Access to high-quality guidance, support and best practice examples means you do not have to reinvent the wheel and can instead focus on your objectives without petty distractions along the way.

Another one of Enterprise Studio's stand-out features is its powerful analysis engine. Cutting through complexity in pursuit of internal coherence and agility requires strong analytical tools with which to uncover opportunities for optimization (be it for business process improvement, application rationalization or something else). The platform offers users access to a vast selection of analyses. These are an important component of the "tweaking" aspect of the Digital Twin of the Organization, since they help with getting new, valuable insights. Without going into too much detail, the platform supports impact analysis, dependency analysis, process analysis, lifecycle analysis and financial analysis, to name but a few. Enterprise architects and other affiliated roles can also execute SWOT, PESTEL and Porter's Five Forces analyses, among a range of others, scorecard-related or otherwise.

Integrate, Consolidate and Operationalize Data

Finally, since having a reliable image of the organization means access to (near) real-time data to ensure the "twin" quality of the model(s), the Bizzdesign platform integrates state-of-the-art technology to give you the highest agility, flexibility and productivity on the market. Horizzon is our cutting-edge collaboration and publication portal, a key component of any DTO initiative. Horizzon can be seen through two lenses. On one hand, it is a fantastic tool for disseminating information across the enterprise and socializing content to all the relevant stakeholder groups in the right format. Its data analysis, dashboarding and reporting capabilities are powered by Kibana technology (part of the Elastic Stack), making it a crucial tool for operationalizing data, i.e., getting people to engage with and act upon it. Since enterprise architects do not run the business by themselves, their work needs to be easily understood by management and other business-side stakeholders.
Horizzon makes socializing architectural content very straightforward thanks to visually rich outputs that instantly communicate findings to audiences. As such, it increases the likelihood and speed of making informed, constructive decisions, which has a long-term positive effect on the business. On the other hand, Horizzon's high connectivity and integration capabilities, already mentioned in our earlier blog post, allow it to act like a hub. Its Kafka component allows extremely fast data streaming, which makes it a desirable platform for consolidating data from third-party tools (for instance, Business Intelligence tools, CMDBs, IT Service Management tools, process mining and so on).

Having a portal that features a live image of the organization and presents insights in the form of signature-ready deliverables brings to life the idea behind Gartner's Digital Twin of the Organization. It places decision makers in front of a reliable representation of the enterprise and lets them immediately influence the real-life direction of the company and/or the way things are done. Whether it be updating the business model due to new regulation, removing inefficiencies from business processes, or cutting investment from areas that do not support the end business goals, decision makers can carry out real change to the business they run by using this digital model the same way engineers would change the parameters of a water pump in a traditional, industrial digital twin setting.

Of course, working to realize the digital twin of your organization is a complex endeavor that requires a powerful platform behind it and a fairly high level of EA maturity. Nonetheless, considering the advantages it stands to deliver, enterprise architects would be well advised to do their due diligence and explore this concept for all that it could bring to their organizations. Done correctly, it can generate meaningful returns and contribute to the creation of a competitive advantage.

The Bizzdesign suite is a leading EA platform that offers market-leading modeling, analytics and data integration capabilities. To see how it can help you realize the digital business of tomorrow, please get in touch today!
Most large data centers now operate using chilled water cooling, with chillers supplying water to data center cooling units, usually computer room air handlers, that take heat from the servers and return it to the chillers, where the heat is rejected to air and the process starts again. Much of the improvement focus over the past decade has been on the equipment itself, but the chilled water system as a whole is now under the spotlight. There are two main types of architecture, "variable primary" and "fixed primary, variable secondary", with the US favoring the former and Europe the latter. But a shift is occurring, and it has major implications for how data centers in Europe are managed and optimized…
BYOD in schools: Chris Gabriel looks at whether allowing school children to bring their own devices is a good idea or just a faddy notion.

There is an argument that the use of technology and new inventions, such as BYOD, "dulls the memory and results in people seeming to know much, while for the most part knowing nothing". A chorus of approval might be heard from those who believe a good book, preferably a classic, can teach you all you need to know, and that a well-written essay is the hallmark of a clear thinker; unless the book in question is Plato's Phaedrus. Written around 2,400 years ago, it made precisely that point about a relatively new technology of the time - 'writing' and its partner in crime 'reading'.

The point is that when new technologies appear and threaten to change the way we do things, there are many for whom this is unacceptable. The reason, more often than not, is that the naysayers have spent much time learning and developing their modus operandi and, when they are presented with a new and/or easier way of carrying it out, it undermines their investment. The reaction is often negative.

The first reaction when pupils brought phones and smart devices to school was to send a swift letter home reminding parents that school is a place for learning, so please do not bring these devices to school. Presumably those schools were teaching our kids communication skills, and how to research information and share it, all on tired PCs running Windows XP. Meanwhile the pupils' parents were at work grappling with a new smartphone that could handle e-mail, use apps to access the corporate sales database and play Angry Birds on the commute. At the same time, some of those parents were restricting their children's access to technology; an hour a day, no more.

Then some really smart schools clicked. What tools will these young people be using when they enter the workplace? What do they use socially to communicate, research information and share? These schools understood that their pupils already had the user skills and, in many cases, the tools to receive a vast amount of information. They also realised that a pretty cool way of teaching their students would be to use the devices the students already use to learn and communicate when not at school.

So BYOD in schools was born, and interestingly it seems that this market sector has addressed it in a way that others just do not – remember our BYOD research showing that 78% of organisations do not have a BYOD policy?

We will be elaborating on how schools have made BYOD work in further articles, but some of the issues they have overcome are:

- Security – child protection, exposure to viruses and malware, network vulnerability, and student etiquette and behaviour including cyberbullying.
- Equality – not every child may have access to smart devices. How is this managed sympathetically?
- Preparing for BYOD – training teachers, educating parents, developing user policies and managing expectations.
- Adopting best practices – finding an integrator, preparing the network, implementing security architecture and network policies, developing a network access strategy, and monitoring and managing activity.

The reality is students like using their personal devices, so they become engaged in whatever it is that they're doing with them, including classwork, which becomes even more interactive when everyone has access to technology. Unlike a school-provided device, the personal device (and the desire to continue using it) goes home with the student.
In this way BYOD in schools enables and fosters 24x7 learning. This is not just a fad – a study in the US by the Joan Ganz Cooney Center saw an average 27% increase in vocabulary among five-year-olds after they used an educational iPad app. A similar study showed a 17% improvement among three-year-olds.

One skill of all accomplished academics is not knowing all the information, but knowing where to find it. Library and research skills are taught as a module on most degree courses. Go into any modern courtroom and the lawyers will be using electronic devices to find 'the law'. The medical profession uses smart devices across all areas of practice, and business people access and respond to e-mail on the move and update the company database thousands of miles away from HQ while sitting on a beach.

Should we not then be allowing our children to use the tools they are already familiar with to access information in the same way the 'grown up' world does? What's stopping us? In some cases, as we have seen, nothing.

Chris will be looking at a school in the UK where all staff and pupils have been provided with tablet devices; in a follow-up post we'll also be examining how parts of Arizona, USA have deployed BYOD in schools.
The findings of the study, by Assistant Professor Boaz Mizrahi, his student Maayan Lupton and Dr. Ayelet Orbach of the Technion-Israel Institute of Technology's Faculty of Biotechnology and Food Engineering, were recently published in Advanced Functional Materials.

Fungal infections are common among animals, including humans. One of the primary sources of such infections is Candida, a yeast regularly found in our bodies. Candida exploits abnormal functioning in the organism to spread and harm the host. Most people will experience a fungal infection at least once in their lifetime, in some part of their body: on the skin, in the digestive system or in the genitals. The frequency of fungal infections is constantly on the rise due to the aging population and possibly global warming. Additional reasons include the use of drugs that suppress the immune system, and the increased use of broad-spectrum antibiotics, which indirectly enhance the proliferation of Candida by disrupting the bacterial balance in the body.

In the current standard pharmaceutical model, the drug passes through the entire body, and portions of it may be broken down in the process, the researchers said.

The team studied the option of treating Candida using the Bacillus subtilis bacterium, which naturally produces and secretes substances that inhibit Candida growth, since in nature the bacterium competes with Candida for common growth areas around plant roots. Bacillus subtilis is already used by farmers and growers to fight Candida on their plants. The Technion researchers are now using this bacterium to fight the fungus in animals, and hopefully in humans too.

"Our first challenge," said Mizrahi, "was to develop a transport system that would enable application of the live bacteria on the infected lesion without impairing their ability to proliferate and secrete their therapeutic substances in the target site."

To do so, the researchers developed a gel that takes a liquid form in the refrigerator and at room temperature. "Its liquid form allows the gel to better penetrate a deeper layer of the skin," where the fungus generally resides, said Mizrahi in a phone interview. Within seconds after being applied to the infected area, the gel hardens. It contains food substances, "a sort of a food hamper," Mizrahi explained, which make sure that the bacteria stay alive and produce the Candida-destroying molecules.

"We have developed a formula that creates a small factory of live bacteria that continue to grow and secrete," he said. That, he said, is the team's innovation.

The researchers applied the gel to the skin of mice suffering from a fungal infection, after marking the gel with a fluorescent substance that would allow for monitoring. The formulation penetrated into the skin without reaching the underlying blood vessels, indicating that the effect of the formula is limited to the diseased area, the researchers said. Following up on the procedure, the researchers found that the group of mice treated with the Technion-developed bacterial gel showed rapid skin healing, while in the control groups, treated with bacteria-free gel or not treated at all, the infection continued to develop.

The researchers said they hope their model of using live bacteria to treat fungal infections, and their "minuscule factory" that produces the active substance on site, will be used in the future to treat a whole range of diseases, including psoriasis, acne, a variety of inflammations and even cancer.
Hacking or Not – Where is the Line?

As you may have heard, a critical flaw was found in a web application provided by the state of Missouri to search for educators' credentials. The flaw was discovered by a reporter, who immediately notified the state after confirming the vulnerability. The reporter also agreed not to report the flaw until the state could take corrective action by disabling the site. Governor Mike Parson (in)famously said that they had directed state law enforcement to investigate the incident, stating his belief that the reporter had broken the law.

This situation has led to an outpouring of criticism on social media and in the press. BreachQuest has been quoted on the story repeatedly, but in this post, we want to break down what is and isn't "hacking" (at least in the criminal sense). Is it a hack without intent? Note that I am not a lawyer, and nothing in this post should be construed (misconstrued?) as legal advice.

Facts of the Case

The reporter searched an online repository that is publicly accessible. Upon viewing search results, the reporter viewed the source of the webpage. This activity is so common that most browsers have multiple hotkeys that allow users to view the HTML source code. In Chrome, a user can press CTRL-U (Command-U on a Mac) to view the source code of the webpage. Additionally, the source can be viewed by using the Developer Tools functionality of the Chrome browser by pressing the F12 key.

Governor Parson was quick to note that more than just viewing the HTML source code was required in this case, noting on Twitter, "An individual accessed source code and then went a step further to convert and decode that data in order to obtain Missouri teachers' personal information." He then added, "This data was not freely available, and by the actors (sic) own admission, the data had to be taken through eight separate steps in order to generate a SSN." Governor Parson included screenshots of the relevant Missouri statutes he believed the reporter violated in his research.

Because the reporter gave the state time to fix the vulnerability before disclosing it, we can't independently validate the specific steps required to decode the data and render SSNs. Some reporting hypothesized that the SSNs were stored using base64 encoding. This seems extremely likely, mostly because we see it regularly. Base64 encoding produces a 7-bit ASCII representation of arbitrary binary data (though technically, each byte of base64 payload only represents 6 bits of data), typically using the characters a-z, A-Z, 0-9, +, and /. Effectively, this means that any data can be encoded such that it can be printed. Base64 is the standard used to encode email attachments, data in HTTP POST requests, and much else.

Base64 is easy for the trained eye to identify. First, for any given input data length, the output length will be identical. Assuming the SSNs were encoded individually (as was likely the case), each base64 payload would be an identical length. Next, it's important to note that because each base64 payload character represents six bits of payload data, each three bytes (characters in this case) of payload data are represented as four bytes of base64 payload. To see this more clearly:

- Each byte is eight bits.
- Three bytes of input data is 24 bits (3 x 8 = 24)
- Each base64 character can encode six bits of input data
- Four base64 characters are required to represent this (24 / 6 = 4)

Base64 should never be used as an encryption or obfuscation method. But this is an especially serious problem in the case of social security numbers. Until 2011, the first three digits of a social security number were assigned as what was known as an area number. In the case of Missouri, those area numbers were 486-500. Among the 100,000 educators in the affected dataset, it is extremely likely that the majority have SSNs whose first three digits fall in this range. This means those digits encode to exactly the same output characters: as the worked example below shows, the first four base64 characters remain static (as does the total length of the base64 output).

Another dead giveaway that a payload is base64 data is the equal signs often seen at the end. Every base64-encoded payload must be a multiple of four characters long; when the input length is not a multiple of three bytes, the output is padded with one or two equal signs. These are not part of the standard base64 alphabet and are understood to be padding. (Note that a bare nine-digit SSN is nine bytes, an exact multiple of three, so it encodes to twelve characters with no padding at all, while an SSN stored with dashes, at eleven characters, ends in a single equal sign.)

Encoding data like this is the equivalent of saying, "we have everything locked in a safe. Here is the safe for you to hold. Here is the combination and instructions for opening said safe. But you definitely cannot see what is inside the safe." Nobody would confuse this real-world analogy for real data security, and we shouldn't confuse the digital equivalent either.
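To make this fingerprint concrete, here is a minimal sketch using Python's standard base64 module. The SSNs below are hypothetical values invented for illustration, not data from the Missouri application:

```python
import base64

# Two hypothetical SSNs sharing the Missouri-era area number 486.
for ssn in ["486123456", "486987654"]:
    print(ssn, "->", base64.b64encode(ssn.encode()).decode())

# Output:
#   486123456 -> NDg2MTIzNDU2
#   486987654 -> NDg2OTg3NjU0
# Both outputs are twelve characters long, share the static "NDg2"
# prefix (the encoding of "486"), and carry no "=" padding because
# nine bytes is an exact multiple of three.

# Store the SSN with dashes (eleven bytes) and a single "=" appears:
print(base64.b64encode(b"486-12-3456").decode())  # NDg2LTEyLTM0NTY=
```

Identical lengths, a shared prefix and the limited base64 alphabet are exactly the clues a trained eye uses to recognize encoded, rather than encrypted, data.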
Why Were the SSNs In the Browser at All?

HTTP is a stateless protocol. Every time the browser makes a new request, the server responds as if it has never seen the browser before. Because this doesn't make for a very rich web experience, web developers need a way to store state in the browser. Thanks to GDPR, CCPA, CPRA, and various other privacy regulations, you're probably familiar with at least the existence of cookies by this point. A cookie is one method used to store data in the browser, preserving state. The server sets the cookie in the response, and the browser passes the cookie data back to the server with each subsequent request.

Another method frequently used by web developers to maintain state is hidden form fields. These form fields are encoded in the Document Object Model (DOM) of the webpage – effectively the page source. Hidden form fields can be trivially viewed by examining page source and are prime candidates for inspection. The fact that a hidden form field exists indicates the developer intended to store some state in the variable. Many developers don't consider that even though the form field is hidden, users can still interact with the data, both to view and change it. As a result, hidden form fields are a very common source of vulnerabilities in web applications. Developers know they must validate user input, but many fail to view hidden form field data as user input, rationalizing that it was created by the server and sent back to the server.

To analogize this with a real-world example, consider it the equivalent of giving a user a box to hold that contains a tee shirt. They will take the box to another room, out of your view. Later, they will hand the box back to you, at which point you will carry the box through a TSA checkpoint. Would you be willing to trust that nothing has changed in the box and risk a body cavity search? Of course not. Nor should a web server blindly trust that no dangerous contents have been placed in the box.

Similarly, the box might contain extremely sensitive data, like social security numbers. To align with the probable case that the SSNs were base64 encoded, in this example the SSNs are present in the box but have been printed onto nine jigsaw puzzle pieces. They are there for anyone who wishes to assemble the puzzle, which is trivial to do. Would you hand this box containing your personal data to any stranger that requested it, even if they needed to assemble puzzle pieces to understand it fully? Again, of course not.

The issue at hand is not the number of steps required to view the sensitive data. The problem is that the sensitive data should never have been in the browser in the first place.

Does Intent Matter When Vulnerabilities Are Disclosed?

The impact of the flaw is significant. Based on reporting from the St. Louis Post-Dispatch, there were likely more than 100,000 SSNs available through the application. Governor Parson stated to the press that the reporter "took the records of at least three educators." While we can never truly measure intent, the number of records accessed is a strong indicator. If the reporter had accessed all 100,000 SSNs to "confirm the vulnerability", this might be viewed with suspicion. If the reporter had subsequently released the SSNs to the public, this would be ethically questionable. But would even that be breaking the law? That seems unclear.

At the federal level, there is precedent for downloading publicly available information and releasing it potentially being a crime (but maybe not). For those not familiar with Andrew Auernheimer (aka "Weev"): in 2010, he and his friend downloaded publicly accessible data for 120,000 AT&T customers and provided it to Gawker. Auernheimer was convicted under the controversial Computer Fraud and Abuse Act in 2012. In 2013, he filed an appeal based partly on the fact that the information he accessed was publicly available (even if AT&T didn't intend it to be), and in 2014 his conviction was vacated due to an improper trial venue. If you've ever seen a laptop sticker that says "wget is not a crime" (probably at a security conference), this is what it refers to. After the conviction was vacated, the government did not retry the case. This leaves open the question of whether accessing publicly available (but sensitive) data in bulk and providing it to a third party is a federal crime. One court said yes, but lacked venue, and the merits of the case were never tested under appeal.

While "is accessing publicly available data a crime" is still an open legal question, that's not what happened here. The reporter discovered the vulnerability, confirmed it by accessing three records, did not provide the data to external parties, notified the system owner, and didn't publish information about the vulnerability before it could be mitigated. Had the reporter accessed thousands of records (or enumerated all records), there would be more reason for concern. But here, it's fairly obvious the reporter did not intend to steal personal data. Intent matters elsewhere in the US legal system; it likely should here as well.

Threatening a reporter with legal action is almost always a bad idea and usually creates an unintended Streisand Effect. But more generally, organizations should be careful not to shoot the messenger when security vulnerabilities are disclosed.
The question of whether this was a crime might be more black and white if the reporter had enumerated all records before reporting the issue. That Governor Parson said only three records were taken cuts against any claim of malicious intent. Instead of focusing on this so-called "hacking," Governor Parson should be worried about the security of the state's applications, particularly those that are available for public use.
This course introduces Bayesian methods, with an in-depth analysis of how to make inferences from small samples. The course includes an extensive collection of Excel worksheets for using Bayesian methods. If you have not yet purchased or enrolled in this course, you can do so on the course page.

The objective is to get hands-on experience using methods to analyze and make inferences from data, with an emphasis on Bayesian methods and making inferences from small samples.

How to Measure Anything and Decisions Under Uncertainty
- Live 2-hour online webinar
- 1 online review quiz

Recommended next courses: Calibration, Advanced Calibration, Creating Simulations in Excel, Statistical Methods in Excel: Intermediate

Please download the following spreadsheet to follow along with the course examples and to answer the review questions.
Originally posted on http://www.howtomeasureanything.com/forums/ on Thursday, April 30, 2009 6:20:57 AM.

"I want to thank you for your work in this area. Using the information in your book I used Minitab 15 and created an attribute agreement analysis plot. The master has 10 correct and I then plotted 9, 8, 7, 6, 5, 4, 3, 2, 1, 0. From that I can see the overconfidence limits you refer to in the book. Based on the graph there does not appear to be an ability to state if someone is under-confident. Do you agree? Can you assist me in the origin of the second portion of the test where you use the figure of -2.5 as part of the calculation in under-confidence? I want to use the questionnaire as part of Black Belt training for development. I anticipate that someone will ask how the limits are generated and would like to be prepared. Thanks in advance – Hugh"

The figure of 2.5 is based on an average of how confidently people answer the questions. We use a binomial distribution to work out the probability of just being unlucky when you answer. For example, if you are well-calibrated and you answer at an average of 85% confidence (expecting to get 8.5 out of 10 correct), then there is about a 5% chance of getting 6 or fewer correct (cumulative). In other words, at that level it is more likely that you were not just unlucky, but actually overconfident.

I took a full distribution of how people answer these questions. Some say they are an average of 70% confident, some say 90%, and so on. Each one has a different level for which there is a 5% chance that the person was just unlucky as opposed to overconfident. But given the average of how most people answer these questions, a difference of more than 2.5 out of 10 between the expected and actual scores means that there is generally less than a 5% chance a calibrated person would just be unlucky. It's a rule of thumb. A larger number of questions and a specific set of answered probabilities would allow us to compute this more accurately for an individual.
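The 5% figure in that example can be reproduced with a binomial tail calculation. Here is a minimal sketch using only the Python standard library (math.comb requires Python 3.8 or later):

```python
from math import comb

def binom_cdf(k: int, n: int, p: float) -> float:
    """P(X <= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i - 0 + comb(n, i) * 0 or comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))
```

Correction to the sketch above for clarity; the straightforward form is:

```python
from math import comb

def binom_cdf(k: int, n: int, p: float) -> float:
    """P(X <= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

# A calibrated person answering 10 questions at an average of 85%
# confidence expects 8.5 correct. The chance of scoring 6 or fewer
# through bad luck alone:
print(round(binom_cdf(6, 10, 0.85), 4))  # 0.05
```

A score that far below the expected value is therefore better explained by overconfidence than by bad luck, which is the reasoning behind the 2.5-out-of-10 rule of thumb.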
How will Edge Artificial Intelligence (AI) Chips Take IoT Devices to the Next Level

Updated · Jul 05, 2022

Edge Artificial Intelligence (AI) Chips Will Help IoT Devices to Operate Faster and Smarter in 2022

In recent years, edge computing has been gaining popularity as a way to provide IoT (Internet of Things) devices and AI (Artificial Intelligence) applications with valuable sensor information in a fast and efficient manner. But to effectively implement these innovative technologies at scale, integrated circuit manufacturers and researchers must first build new, specialized chips that can support their computationally heavy demands.

The Chinese startup Reexen Technology was established in 2018 by Dr. Hongjie Liu, an ETH Zurich graduate. It has since risen to prominence as a worldwide leader in the field of edge-AI ASICs (Application-Specific Integrated Circuits) for the medical, industrial and consumer markets.

Although they have a significant impact on our daily lives, it can be challenging to navigate the vast array of terminology used to describe the latest technical trends, like "edge-AI Application-Specific Integrated Circuits" or "embedded DNN functionalities in low-power Internet of Things (IoT) sensors." This article has two primary goals to address that problem. The first is to give a brief overview of the key concepts related to the emerging field of AIoT ("Artificial Intelligence of Things"), which encompasses many of the terms listed above. The second is to provide practical examples of how these technologies are implemented in the real world, using Reexen's work.

IoT (the Internet of Things), Edge Computing, and AI (Artificial Intelligence)

IoT has emerged as one of the most promising new paradigms of the last decade. Broadly defined, "it is simply a network of intelligent objects that can automatically organize and share information, resources, and data. They also can make decisions and respond to changes in the environment." This widely publicized idea promises to bring everything in our world together under a single infrastructure, allowing us to communicate and connect with anyone from anywhere around the globe. This has led to the proliferation and development of numerous "smart devices" for many sectors, including energy, industrial manufacturing, urban planning, healthcare and more.

Although there are many definitions of what makes an object "smart," the most important aspect is its ability to gather information about its environment through embedded sensors. This information must be analyzed promptly. However, large datasets can be generated quickly, especially when many sensor devices are connected within an IoT network. This raises the question of what type of computing is most suitable for the job.

Cloud computing is the most popular option. It outsources the task of managing, processing and storing data to a network of remote servers on the Internet rather than a personal computer or a local server. However, while this strategy suits certain IoT sectors, it also has several disadvantages, including decreased bandwidth, increased latency, privacy concerns and the possibility of data loss. Edge computing has therefore emerged as a promising option for time-sensitive applications, in which data is analyzed and processed by small computing devices located near the data source, i.e., the sensors.
These "edge devices" can open up a wide range of applications that use AI, which has resulted in the development of a new field known as AIoT (Artificial Intelligence of Things). This might be a game-changer, since industry experts and researchers predict that Artificial Intelligence of Things systems will soon be able not only to identify failures and events but also to gather the necessary information and make correct decisions based on that data, all without the need for human intervention.

Despite significant advances in this area, many IoT sensor devices still use traditional processor chips. These chips are not well suited to running many of today's computationally intensive algorithms on the edge, such as DNNs (deep neural networks) and the cutting-edge machine learning algorithms responsible for many recent artificial intelligence breakthroughs, like DeepMind's AlphaGo. As a result, significant efforts are currently being made to build new ASICs, which, as the name suggests, are designed for a specific application or task. This is where companies like Reexen Technology are attempting to build creative solutions for deploying cutting-edge artificial intelligence technology at scale.

Reexen Technology and Neuromorphic Engineering

As explained by Dr. Hongjie Liu, Reexen works in neuromorphic engineering, sometimes known as neuromorphic computing. This technique aims to imitate the neural operations and structure of the human brain with hardware and software. Reexen's goal is to mimic the functioning of the brain, the eye, and the cochlea in our ears; this is also called "neuromorphic processing and sensing." The company is currently developing "mixed-signal in-memory computing" and "inference sensing" solutions.

Mixed-signal in-memory computing circuits solve latency and energy consumption issues in A/D (analog-to-digital) conversion and data-intensive DSPs (digital signal processors) in two ways. First, unlike traditional CPUs (Central Processing Units) or GPUs (Graphics Processing Units), which can only process data in the "computer-readable" domain, mixed-signal computing circuits can process sensory signals directly in both the analog and digital domains. Second, by integrating computational cells into memory cells, processing-in-memory solutions address a shortcoming of traditional computers' "von Neumann" architecture, which expends significant energy and time transferring data from memory to the central processing unit for computation.

Inference sensing, on the other hand, means that inputs generated from the physical world are processed and transformed on the side of the sensors rather than on a large computer or in the cloud, which is beneficial for a variety of applications, including earphones, smartwatches and smart IoT gadgets. Here, Reexen collaborated with a leading micro-electromechanical systems (MEMS) microphone manufacturer to put its innovative audio-processing chip inside the MEMS sensor itself, allowing the microphone to perform keyword detection. This is crucial for many speech recognition applications, including the "Hey Alexa," "Okay Google" and "Hey Siri" phrases that wake digital assistants so they can respond to users' queries. Reexen is also currently working on a vision-processing chip mainly intended for AR/VR glasses and smartphones.
To summarize, edge-based computing has proven an appealing solution for IoT devices that must deliver high-quality, actionable sensor information, and it can also save time and reduce energy use. Industry leaders and researchers have been working together to create new chips that can complete more demanding machine learning tasks on devices in real time, either entirely on-device or using a hybrid strategy. Reexen Technology is a Chinese startup developing "mixed-signal in-memory computing" and "inference sensing" solutions that aim to mimic the neural operation and structure of the human brain. This has led to the creation of an innovative audio-processing chip used to build MEMS microphones with integrated keyword spotting, and the same approach is being used to develop a vision-processing chip for AR/VR glasses and smartphones.
Using Quantum Computing to Optimize Shipping Routes (SupplyChainBrain)

Amy Herhold, Director of Physics and Mathematical Sciences, Corporate Strategic Research with ExxonMobil Research and Engineering, and Jamie Thomas, General Manager of IBM Systems Strategy & Development, discuss the use of quantum computing to optimize shipping routes.

Optimal routing of Exxon's ships must account for elements such as weather patterns, inventory levels and length of voyage. In the end, the number of variables involved in such calculations "can quickly swamp what you can do with a classical computer today," says Herhold.

Exxon's use of quantum computing is still in the exploratory stage, says Herhold. Ultimately, the problem of ship routing will require larger quantum computers and accompanying algorithms that can work with the greater number of variables. But the technology is already being applied in multiple industries, and it can even be used in tandem with classical computing for decisions such as ship routing. "We're clearly a few years out from full production," says Thomas. "What we're seeing now is a massive amount of experimentation."
Role of digital signatures in asymmetric cryptography

Encryption and decryption

Encryption is the process of converting plaintext to encrypted text. Since encrypted text cannot be read by just anyone, it hides the original data from unauthorized users. Decryption is the process of converting encrypted data back to plaintext; it is the reverse of encryption, and it ensures that only an authorized user can access and read the data. Together, encryption and decryption make up cryptography.

Private and public keys in cryptography

A key is a string of bits used to convert plaintext into ciphertext and vice versa. A key can be a word, number or phrase. Cryptography makes use of public and private keys. A public key is issued publicly by the organization, and the end user uses it to encrypt the data. The encrypted data, once received by the organization, is decrypted using a private key, converting it back to plaintext.

Cryptography uses symmetric and asymmetric encryption for the encryption and decryption of data. If the sender and the recipient of the data use the same key to encrypt and decrypt it, that is symmetric encryption; if the keys are different for encryption and decryption, that is asymmetric encryption.

Now that the basics are clear, let's focus on what a digital signature is and how it makes use of asymmetric cryptography for the authentication and verification of software, messages, documents and more.

A digital signature is a mathematical technique for the authentication and verification of software, messages, documents and other data. It provides message authentication, data integrity and non-repudiation, that is, it prevents the sender from claiming that he or she did not actually send the information. This technique ties a person to digital data, and the tie can be verified by the receiver or by any third party independently. The digital signature is calculated from the data and a secret key known only to the signer.

To create a digital signature, the user first creates a one-way hash of the message or document to be signed; this representation of the message in the form of a hash is called a message digest. The user then encrypts the hash with his or her private key. The encrypted hash, together with other information such as the hashing algorithm used, is the digital signature.

Steps to create digital signatures

These are the steps one should follow to create and verify digital signatures (a minimal code sketch follows the list):

- As described above, a message digest is computed first by applying a hash function to the message or document to be sent. Popular hashing algorithms for generating message digests include Secure Hash Algorithm-1 (SHA-1), the Secure Hash Algorithm-2 family (SHA-2, SHA-256) and Message Digest 5 (MD5).
- This message digest is encrypted using the private key of the sender, creating the digital signature.
- The digital signature is then transmitted with the original message to the receiver.
- When the recipient receives the message, they decrypt the digital signature using the public key of the sender.
- After decrypting the digital signature, the receiver retrieves the message digest.
- The receiver also computes the message digest from the received message.
- The message digest computed by the receiver and the message digest recovered from the signature must match to ensure message authentication, data integrity and non-repudiation.
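The flow above can be exercised in a few lines with a third-party library such as Python's cryptography package. This is a minimal sketch of the sign-and-verify round trip, not production key management; real systems load keys from protected storage rather than generating them inline, and the message is invented for the example. Note that sign() and verify() perform the hashing (message digest) step internally:

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

# Signer's side: create a key pair and sign the message with the private key.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
message = b"Transfer approved by the finance department."
signature = private_key.sign(message, padding.PKCS1v15(), hashes.SHA256())

# Receiver's side: verify with the signer's public key.
# verify() raises InvalidSignature if the check fails.
public_key = private_key.public_key()
public_key.verify(signature, message, padding.PKCS1v15(), hashes.SHA256())
print("signature verified")

# Any tampering with the message breaks verification:
try:
    public_key.verify(signature, message + b" (edited)",
                      padding.PKCS1v15(), hashes.SHA256())
except InvalidSignature:
    print("tampered message rejected")
```

The tampering check at the end demonstrates data integrity in practice: changing even one byte of the message causes the recomputed digest to diverge from the one recovered from the signature.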
Digital signature applications

The following are widely used applications of digital signatures:

- Sending and receiving encrypted emails that are digitally signed and secured
- Carrying out safe and secure online transactions
- Identifying participants in an online transaction
- Applying for tenders, e-filing income tax returns, filing with the registrar of companies and other suitable applications
- Signing and validating Word, PDF and Excel document formats

The value of encryption

Encryption is a valuable way to keep data safe and secure, and it is a fundamental aspect of cybersecurity.
In an increasingly technology-oriented world, cybercrime has become all too common for both consumers and businesses. Internet crime takes many forms and includes everything from large-scale data breaches to consumer issues like identity theft and cyberstalking to widespread scams and ransomware. In the third week of National Cyber Security Awareness Month (NCSAM), the National Cyber Security Alliance (NCSA), the U.S. Department of Homeland Security (DHS) and their industry, government and nonprofit partners are highlighting the different types of online crime and how people and businesses can better protect themselves.

"As cybercriminals sharpen their hacking skills, we must take stronger precautions to protect our information and all of our connected devices," said Michael Kaiser, executive director of NCSA. "There are simple things everyone can do to better safeguard their key accounts, devices and apps, like keeping software up to date, turning on strong authentication and exercising extreme caution when reading messages containing links or requests for information."

Tech support scams

Tech support scams make up one of the most common forms of cybercrime, and many companies providing technology products and services find themselves targeted by cybercriminals. A new Microsoft survey offers the following findings:

- One in five consumers surveyed admitted to continuing a potentially fraudulent interaction when experiencing a tech support scam.
- Nearly 1 in 10 have lost money to a tech support scam.
- Of those who had continued with a fraudulent interaction, 17 percent were older than 55 and 34 percent were between the ages of 36 and 54.
- Fifty percent of those who continued the interaction were millennials (ages 18-34).

"Tech support scams are on the rise around the world and demand urgent attention from law enforcement, private industry and individual consumers," said Courtney Gregoire, senior attorney at Microsoft's Digital Crimes Unit. "According to a recent survey from Microsoft, two out of three people have experienced a tech support scam in the past year, with many falling victim and placing their computers and personal information at risk."

In addition to the rise in tech support-related and other scams, identity theft is a key concern for many. In fact, a 2016 NCSA survey revealed that preventing identity theft was the top online safety topic both teens and parents of teens would like to learn more about. The Identity Theft Resource Center's (ITRC's) 2016 Identity Theft: The Aftermath study, which surveyed victims of identity theft in 2015, revealed the following:

- The accounts most commonly taken over by thieves included email (11%), payment services (10%), social media (9%) and online banking (8%). Additional compromised account types include online medical portals (5%), health trackers (2%) and gaming (2%).
- Nearly a fifth of survey respondents reported significant repercussions when their online accounts were taken over, including job loss (24%) and reputational damage among friends (61%) and colleagues (31%).
- Of the respondents who identified experiencing criminal identity theft issues, 30 percent found themselves in need of state government assistance programs to overcome the financial impact of identity theft.

"Identity thieves can use a variety of platforms to commit their crimes, including many online platforms.
This crime creates not only short-term effects for victims during the time they are remediating their cases – it creates long-term effects as well," said ITRC President/CEO Eva Velasquez. "When we look at the sheer volume of identity theft it is easy to get lost in the numbers; we must not forget that behind each percentage and incident we count, there is a person whose life is being affected. This in turn affects families, communities, regions and our country as a whole."

In recent months, ransomware attacks – the "digital kidnapping" of valuable data, in which malware accesses victims' files, locks and encrypts them, and then forces victims to pay ransom to get the files back – have grown more sophisticated and prevalent. The FBI has warned that these attacks are on the rise, and according to Kaspersky Lab, the number of individuals attacked by crypto-ransomware increased 5.5 times from 2014/2015 (131,000) to 2015/2016 (718,000). These threats can be especially damaging to businesses, which may store critical organizational data, intellectual property and consumer information.

"Having a backup that can restore the impacted system is a key defense that can help organizations restore normal operations quickly after being impacted by ransomware," said Kaiser.

Fight fraud: Prevention and recovery tips

NCSA recommends that both consumer and business audiences take the following steps to prevent and recover from cybercrime such as scams, identity theft and ransomware attacks:

- Lock down your login: Fortify your online accounts by enabling the strongest authentication tools available, such as biometrics, security keys or a unique one-time code through an app on your mobile device. Your usernames and passwords are not enough to protect key accounts like email, banking and social media.
- Keep all machines clean: Having the latest security software, web browser and operating system is the best defense against viruses, malware and other threats. If you have experienced cybercrime, immediately update all software on every internet-connected device. All critical software, including PC and mobile operating systems, security software and other frequently used programs and apps, should be running the most current versions. Use security software to scan any USBs or external devices.
- Back it up: Make sure you have a recent and securely stored backup of all critical data.
- Make better passwords: A strong password is a sentence that is at least 12 characters long. Focus on positive sentences or phrases that you like to think about and are easy to remember.
- When in doubt, throw it out: Links in email, tweets, posts and online advertising are often how cybercriminals try to steal your personal information. Even if you know the source, if something looks suspicious, delete it.
- Help the authorities fight cybercrime: Report stolen finances or identities and other cybercrime to the FBI Internet Crime Complaint Center (IC3), the ITRC, the Federal Trade Commission (FTC) and/or your local law enforcement or state attorney general, as appropriate.
Hackers work hard. A well-protected organizational network has defenses to protect all endpoints, infrastructure, and devices. Yet, cyberattackers work tirelessly to find flaws in software or organizational processes that can be exploited for malicious entry. Vulnerabilities are flaws within a network's software, infrastructure, or processes that can be used by hackers to infiltrate a network. Software developers and cybersecurity professionals examine and test software, devices, and processes to uncover potential weaknesses that could be exploited by bad actors. When these weaknesses are discovered, updates and patches are released to repair the flaw.

Vulnerability scanning is an important part of effective cybersecurity. Understanding how to perform and use vulnerability scans can provide an important layer of protection to keep your network secure. Our introduction to vulnerability scanning will help you understand what vulnerability scans are, why they're important, and how they can be used as a vital part of the way you protect your network against hackers.

What Are Vulnerabilities?

Flaws that hackers or research professionals have exposed are called known vulnerabilities. Thousands of types of software exist, and hundreds of new vulnerabilities are uncovered each month. It is impossible for any IT or security team to manually keep track of all the known vulnerabilities that can be exploited within a company network. For this reason, known vulnerabilities often go undiscovered by network users and get exploited by attackers.

Vulnerability scans check specific parts of your network for flaws that are likely to be exploited by threat actors to gain access or carry out a known type of cyberattack. When used properly, they can provide an important layer of cybersecurity to help keep your company's sensitive data safe. Like most elements of cybersecurity, vulnerability scans are tools that provide the best results when chosen carefully and used the right way. Yet, with a variety of available options and different types of scans, it can be difficult to know what's best for your network.

What Is Vulnerability Scanning?

Just as network users, IT professionals, and cybersecurity experts learn about potential vulnerabilities, attackers can find information about potential ways to infiltrate business networks. Even worse, threat actors who wish to breach a system can purchase information about exploiting known vulnerabilities, and even zero-day vulnerabilities, on the dark web. Known vulnerabilities provide one of the easiest ways for attackers to gain access to organizational networks to perform high-level cyberattacks once the perimeter is breached. Many known flaws have already been exploited by cybercriminals, making it easy for other criminals to use the same methods of attack.

Vulnerability Scan as a Test

A vulnerability scan is a high-level automated test that searches for known vulnerabilities within your system and reports them. Some vulnerability scans can identify as many as 50,000 known weaknesses that can be exploited by attackers. Vulnerability scans can be performed both inside and outside the network (internally and externally) to reveal different types of weaknesses. External scans are used to identify vulnerabilities that can be accessed from the internet, while internal scans can reveal vulnerabilities that hackers can use to move laterally within the network.
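To make the external side of that distinction concrete, the simplest possible external check is probing which TCP ports a host exposes. The Python sketch below (ours, not from the original article) uses only the standard library; the target address is a placeholder, real scanners test far more than open ports, and you should only probe systems you are authorized to scan.

```python
import socket

def check_port(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
        sock.settimeout(timeout)
        # connect_ex returns 0 on success instead of raising an exception.
        return sock.connect_ex((host, port)) == 0

# Hypothetical target -- 192.0.2.0/24 is the reserved TEST-NET range,
# used here as a placeholder address.
host = "192.0.2.10"
for port in (22, 80, 443, 3389):
    state = "open" if check_port(host, port) else "closed/filtered"
    print(f"{host}:{port} {state}")
```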
A scan might take anywhere from a few minutes to several hours to complete and will provide a report of known vulnerabilities that need to be addressed. When these scans are performed routinely, they can provide information that helps IT teams and cybersecurity experts protect companies against cyberattacks. While vulnerability scans are automated tests that can typically be performed without interrupting workflow performance, they are not a magic bullet against all known vulnerabilities. To be effective, vulnerability scanners need to know what to scan and when to perform a search. Scans are completed by a vulnerability scanner that must be optimized to check specific areas of your organizational network for known flaws. When the scan is complete, it provides a logged summary of alerts to be investigated. It's important to remember that vulnerability scans are designed only for the detection of flaws that could be exploited by hackers. After a scan is complete, the results must be used to actively eliminate the vulnerabilities through patches, updates, or other cybersecurity measures.

What Does Vulnerability Scanning Do for Me?

Cyberattackers use a variety of methods to breach systems. Weak passwords, IoT devices, unprotected endpoints, phishing emails, social engineering, etc., are all ways that attackers can take the first step toward launching an attack. Known vulnerabilities are weaknesses that have already been uncovered and made public. These vulnerabilities typically already have a solution like a patch or an update for the software in question. When your network has known vulnerabilities that haven't been addressed, these weaknesses are like an open door for hackers.

Over 8,000 vulnerabilities were published in Q1 of 2022. The US government's National Vulnerability Database (NVD), which is fed by the Common Vulnerabilities and Exposures (CVE) list, currently has over 176,000 entries. Considering the deluge of information provided in these lists, slogging through a list of known vulnerabilities to then search for potential risks within a network would be impossible. Vulnerability scanning uses available information about known vulnerabilities to automatically check a specific network for risks and provide a detailed report about flaws that exist. IT teams and cybersecurity professionals can use this information to repair the system in a way that eliminates the risk of these vulnerabilities being exploited by an attacker.

An actionable vulnerability scan report identifies vulnerabilities that could pose a threat to your system, tells you the severity of each vulnerability, and provides remediation suggestions. The severity of vulnerabilities is typically based on the Common Vulnerability Scoring System (CVSS) provided by the NIST National Vulnerability Database. Scores rank the severity of a vulnerability from 0.0 up to 10.0 with five different severity ratings:

- None: 0.0
- Low: 0.1-3.9
- Medium: 4.0-6.9
- High: 7.0-8.9
- Critical: 9.0-10.0

The ratings are based on the factors of exploitability, scope, and impact. Essentially, they describe how easily a vulnerability can be exploited, whether it can be spread across the attack surface, and the severity of damage an attack can cause by exploiting the vulnerability. Since the metrics are based on these specific factors, they provide a clear explanation of the dangers of a specific vulnerability. Often, multiple vulnerabilities are revealed with a single scan. The context provided in the report can help IT teams resolve the most critical issues first.
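The CVSS ranges above map directly into code. This small sketch (ours, not part of the article) converts a numeric base score into its severity rating and uses it to triage a set of hypothetical findings, most severe first:

```python
def cvss_severity(score: float) -> str:
    """Map a CVSS base score (0.0-10.0) to its severity rating."""
    if not 0.0 <= score <= 10.0:
        raise ValueError("CVSS scores range from 0.0 to 10.0")
    if score == 0.0:
        return "None"
    if score <= 3.9:
        return "Low"
    if score <= 6.9:
        return "Medium"
    if score <= 8.9:
        return "High"
    return "Critical"

# Hypothetical scan findings: CVE identifiers and scores are made up.
findings = {"CVE-XXXX-0001": 9.8, "CVE-XXXX-0002": 5.3, "CVE-XXXX-0003": 7.5}
for cve, score in sorted(findings.items(), key=lambda kv: -kv[1]):
    print(cve, score, cvss_severity(score))
```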
Why Is Vulnerability Scanning Important?

Three-fourths of attacks in 2020 took advantage of vulnerabilities that were at least two years old. Many of these flaws could have been detected by vulnerability scans long before they were exploited by attackers. In fact, it's possible that many businesses without sufficient cybersecurity practices in place are unaware of the way even the most public attacks are carried out. Consider the WannaCry attacks that first occurred in 2017. Microsoft was aware of the theft of hacking tools targeted at its operating systems and had released patches months before the attacks occurred. While it might not be a surprise that the majority of organizations failed to patch systems within a couple of months, it's startling to consider that 26% of companies remain vulnerable to WannaCry malware because they have not patched the vulnerability it exploits.

Systems that connect to the internet are constantly under attack. Threat actors target companies large and small for a variety of reasons, ranging from entertainment to an interest in sensitive data or major financial gain. The global interconnections achieved by the internet allow hackers from thousands of miles away to infiltrate your system and launch an expensive attack. Hackers can use the dark web to sell, trade, and purchase illegal products, which has made cybercrime accessible to a much wider range of criminals. The average cost of a data breach rose to $4.24 million in 2021. For many companies, the effects of an attack are so catastrophic that they never recover.

Hackers are aware of the lucrative potential of launching an attack on any business. The dark web allows even those who aren't particularly tech-savvy to purchase information and code to successfully launch complex attacks. As a result, hackers continually search for networks that provide easy access through unpatched software or legacy systems. The reality is that if you're not performing vulnerability scans on your network, someone else is. For many companies, the results of a vulnerability scan could be the difference between repairing a flaw and recovering from an attack.

Types of Vulnerability Scanners

Like most cybersecurity tools, vulnerability scanners are not a one-size-fits-all solution. Different scans target various areas of your network infrastructure, based on your organizational needs. Some companies are forced to depend on multiple vulnerability scanners to provide a comprehensive view of all the vulnerabilities that exist within a network. To determine the types of vulnerability scanners that best fit your needs, it's important to examine the use cases for each type. There are five basic types of vulnerability scanners.

A network-based vulnerability scanner is used to search an entire network, including all devices and applications, for vulnerabilities. This scanner creates an inventory of devices and the vulnerabilities in each of them. It can be helpful for discovering unknown or unauthorized perimeter points or connections to insecure networks of business partners like vendors and shipping partners.

Host-based scanners are used for finding vulnerabilities in workstations and servers. They also check the security configurations and patch history of a server or workstation.

Application scanners are used for scanning apps and websites. They are designed to find vulnerabilities in third-party software and programs utilized within your network environment. These scanners work to identify vulnerabilities that can allow attackers to easily breach your system.
Wireless scanners identify unauthorized access points in a network and find inconsistencies in security configurations.

Your database houses a wealth of sensitive information. Database scanners identify weak points in a database that could allow attackers to access and change or remove data. Additional database vulnerabilities can provide attackers with ways to control data servers or access other areas of the network through lateral movement that begins at the database.

Some scanners perform multiple types of scans, while others perform a specific task. Beyond the types of scanners that are available, it's important to consider the types of scans that must be performed to provide comprehensive protection for your entire network, including endpoints like remote and IoT devices.

Internal vs. External Vulnerability Scans

An external vulnerability scan is performed from outside your network and targets IT infrastructure that is exposed to the internet. This includes websites, ports, services, networks, systems, and applications that need to be accessed by external users or customers. External scans can also detect vulnerabilities in perimeter defenses like firewalls. Internal scans are performed from inside the network and allow you to detect vulnerabilities that leave you susceptible to damage once an attacker breaches the network. Internal scans can help detect vulnerabilities that can be exploited by insider threats, or by hackers or malware that have already made it past your perimeter defenses. Many modern attacks begin with a perimeter breach that allows an attacker to move discreetly through an organizational network to reach sensitive data or a higher level of authority. Internal vulnerabilities make such lateral movement possible.

If you compare network vulnerabilities to the physical security system of a company, external vulnerabilities are like blind spots left by security cameras or flaws in your alarm system. Internal vulnerabilities are those that would allow an attacker to access your company's most expensive assets once inside the building. From this perspective, it's easy to see why internal and external vulnerability scans are both important.

Authenticated vs. Unauthenticated Vulnerability Scans

Also referred to as credentialed and non-credentialed scans, these terms refer to the level of credentials required to perform a scan. Unauthenticated scans require no credentials and do not provide trusted access to the systems being scanned. These scans provide more of an outside view and allow users to detect vulnerabilities in the same way they're detected by potential attackers. However, unauthenticated scans provide a limited view of a network's total vulnerability exposure. Authenticated scans require users to log in with a specific set of credentials. These scans provide a user's-eye view of the environment for a more complete picture of vulnerabilities. Credentialed scans are often performed by a third-party professional with no other connection to the network, which yields an impartial view of the entire system. In the same way that both internal and external scans are important, both credentialed and non-credentialed scans have a place in securing your network.

Top Vulnerability Scanning Offerings

When it comes to vulnerability scanning tools, there is no shortage of options for organizations to try. However, not all scanners provide the same abilities.
To determine the scanners best for your organization, it helps to compare some of the most popular tools available.

A top-rated vulnerability scanner, Intruder scans your publicly and privately accessible servers, cloud systems, websites, and endpoint devices. Intruder proactively detects misconfigurations, missing patches, application bugs, and more. A 30-day free trial is available.

A widely used open-source vulnerability assessment tool, Nessus detects software flaws, missing patches, malware, and misconfiguration errors across several operating systems. With 450 compliance and configuration templates provided, Nessus is an option that might be best used by experienced security teams.

Although it only scans web-based applications, Acunetix utilizes a multi-threaded scanner that can crawl across hundreds of thousands of pages rapidly and identifies common web server configurations.

A web vulnerability scanner, Burp Suite performs automated enterprise-wide scans that can check for SQL injection, cross-site scripting, and other vulnerabilities. It is also used for compliance and security audit purposes. Although there is a free version available, the Enterprise Edition is recommended for the best results.

A website scanner, Netsparker utilizes automated web application security scanning capabilities and can be integrated with third-party tools. Since it doesn't have the range of some other products, Netsparker is a better choice for SMBs than for large enterprises.

How Often Should You Be Scanning?

It only takes a single vulnerability to allow an attacker to breach your network, and thousands of vulnerabilities are discovered each year. Unfortunately, a vulnerability scan only provides information about the risks that exist at the time the scan is performed. While current vulnerability data is important, continual scanning would drain resources, slow systems, and generate a significant number of false positives. Since emerging threats can be exploited during scan gaps, it's recommended that internal and external vulnerability scans be performed routinely. Routine scans can be automated to run on a schedule, like once a month or once a quarter. Some organizations are subject to compliance regulations that state how often vulnerability scans must be performed.

Beyond routine scanning, there are other reasons to search for vulnerabilities in your network. For instance, significant network changes can introduce new vulnerabilities. After a significant change in your network, like adding new servers or system components, upgrading products, adding applications, or altering interfaces, your network should be scanned for vulnerabilities. It's also important to repair flaws to eliminate vulnerabilities after vulnerability scans are performed. After patches or updates are applied, another scan should be performed to ensure the remediation of all vulnerabilities.

Vulnerability Scanning as Part of a Layered Cybersecurity Solution

Cybercrime affects more than 80% of businesses throughout the world today. The rate of cybercrime increases each year and has exploded in recent years due to a variety of factors. New technology provides ways for businesses to perform more efficiently, but it also offers improved ways for hackers to perform illegal actions online. It's estimated that 75.44 billion IoT devices will be installed worldwide by 2025. These devices often have limited security protocols and can act as entry points for attackers to breach a network.
Furthermore, cybercrime has become an organized business in which cybercriminals sell and trade various illegal products and services. Known vulnerabilities provide an easy target for cybercriminals, and lucrative dark-web markets allow more people to learn about these vulnerabilities each year. Routine vulnerability scanning can help your organization take the first steps toward creating a secure perimeter to avoid such breaches. However, vulnerability scans are only a single part of a complete cybersecurity solution.

Modern systems are continually changing. A scan only represents the known vulnerabilities within a set scope of your network environment at the time the scan was performed. While scanning tools perform automated checks for known vulnerabilities, the results of the scan are limited in scope as well as quality, depending on how the scanner is optimized and the depth of the database it draws from. Like most cybersecurity tools, effective vulnerability scanners are not one-size-fits-all preconfigured solutions. They are sophisticated tools designed to be optimized to work with unique business networks. It's likely that your vulnerability scanners will need to be optimized by a security expert and that scan reports will need to be reviewed by security analysts. Without a comprehensive cybersecurity solution, vulnerability scans can offer a false sense of security that leaves your organization open to attack.

Vulnerability scans reveal known flaws and the threat level they carry based on the ways hackers exploit them. The reports provided by these scans are designed to create a roadmap for improvement to be carried out by cybersecurity experts. A comprehensive cybersecurity solution uses vulnerability scans alongside tools that constantly monitor your network and provide real-time incident response. At BitLyft, it's our goal to provide customers with a complete cybersecurity solution that offers the same security as an on-prem SOC to keep attackers from reaching your organization's most critical assets. We provide unparalleled protection for organizations of all sizes by delivering the best people and software to remediate most cyberthreats in seconds. BitLyft offers vulnerability scanning as part of its complete cybersecurity solution; schedule a needs assessment with our team of cybersecurity experts to learn more.
The Diffie-Hellman key exchange was one of the most important developments in public-key cryptography and it is still frequently implemented in a range of today’s different security protocols. It allows two parties who have not previously met to securely establish a key which they can use to secure their communications. In this article, we’ll explain what it’s used for, how it works on a step-by-step basis, its different variations, as well as the security considerations that need to be noted in order to implement it safely.

What is the Diffie-Hellman key exchange?

The Diffie-Hellman key exchange was the first widely used method of safely developing and exchanging keys over an insecure channel. It may not seem so exciting or groundbreaking in the above terms, so let’s give an example that explains why the Diffie-Hellman key exchange was such an important milestone in the world of cryptography, and why it is still so frequently used today.

Let’s say you’re a top-secret spy and you need to send some important information to your headquarters. How would you prevent your enemies from getting ahold of the message? The most common solution would be to encrypt the message with a code. The easiest way is to prearrange whichever type of code and key you plan on using beforehand, or to do it over a safe communication channel. Let’s say that you are a particularly bad spy, and you and your headquarters decide to use a weak shift cipher to encode your messages. In this code, every “a” becomes “b”, every “b” becomes “c”, every “c” becomes “d”, and so on, all the way up to the “z” becoming an “a”. Under this shift cipher, the message “Let’s get dinner” becomes “Mfu’t hfu ejoofs”. Thankfully, in our hypothetical situation, your adversaries are just as incompetent as you are and are unable to crack such a simple code, which keeps them from accessing the contents of the message.

But what happens if you couldn’t arrange a code with your recipient beforehand? Let’s say you want to communicate with a spy from an allied nation who you have never met before. You don’t have a secure channel over which to talk to them. If you don’t encrypt your message, then any adversary who intercepts it will be able to read the contents. If you encrypt it without telling the ally the code, then the enemy won’t be able to read it, but neither will the ally. This issue was one of the biggest conundrums in cryptography up until the 1970s: how can you securely exchange information with someone if you haven’t had the opportunity to share the key ahead of time? The Diffie-Hellman key exchange was the first publicly-used mechanism for solving this problem. The algorithm allows those who have never met before to safely create a shared key, even over an insecure channel that adversaries may be monitoring.

The history of the Diffie-Hellman key exchange

The Diffie-Hellman key exchange traces its roots back to the 1970s. While the field of cryptography had developed significantly throughout the earlier twentieth century, these advancements were mainly focused in the area of symmetric-key cryptography. It wasn’t until 1976 that public-key algorithms emerged in the public sphere, when Whitfield Diffie and Martin Hellman published their paper, New Directions in Cryptography. The collaboration outlined the mechanisms behind a new system, which would come to be known as the Diffie-Hellman key exchange. The work was partly inspired by earlier developments made by Ralph Merkle.
The so-called Merkle’s Puzzles involve one party creating and sending a number of cryptographic puzzles to the other. These puzzles would take a moderate amount of computational resources to solve. The recipient would randomly choose one puzzle to solve and then expend the necessary effort to complete it. Once the puzzle is solved, an identifier and a session key are revealed to the recipient. The recipient then transmits the identifier back to the original sender, which lets the sender know which puzzle has been solved. Since the original sender created the puzzles, the identifier lets them know which session key the recipient discovered, and the two parties can use this key to communicate more securely.

If an attacker is listening in on the interaction, they will have access to all of the puzzles, as well as the identifier that the recipient transmits back to the original sender. The identifier doesn’t tell the attacker which session key is being used, so the best approach for decrypting the information is to solve all of the puzzles to uncover the correct session key. Since the attacker will have to solve half of the puzzles on average, it ends up being much more difficult for them to uncover the key than it is for the recipient. This approach provides more security, but it is far from a perfect solution. The Diffie-Hellman key exchange took some of these ideas and made them more complex in order to create a secure method of public-key cryptography.

Although it has come to be known as the Diffie-Hellman key exchange, Martin Hellman has proposed that the algorithm be named the Diffie-Hellman-Merkle key exchange instead, to reflect the work that Ralph Merkle put towards public-key cryptography. It was publicly thought that Merkle, Hellman and Diffie were the first people to develop public-key cryptography until 1997, when the British Government declassified work done in the early 1970s by James Ellis, Clifford Cocks and Malcolm Williamson. It turns out that the trio came up with the first public-key encryption schemes between 1969 and 1973, but their work was classified for two decades. It was conducted under the Government Communications Headquarters (GCHQ), a UK intelligence agency. Cocks’ discovery was an equivalent of the RSA algorithm, and Williamson developed an equivalent of the Diffie-Hellman key exchange, so Diffie, Hellman and Merkle were still the first to publish these schemes, but no longer the first inventors of public-key cryptography.

Where is the Diffie-Hellman key exchange used?

The main purpose of the Diffie-Hellman key exchange is to securely develop shared secrets that can be used to derive keys. These keys can then be used with symmetric-key algorithms to transmit information in a protected manner. Symmetric algorithms tend to be used to encrypt the bulk of the data because they are more efficient than public-key algorithms.

Technically, the Diffie-Hellman key exchange can be used to establish public and private keys. However, in practice, RSA tends to be used instead. This is because the RSA algorithm is also capable of signing public-key certificates, while the Diffie-Hellman key exchange is not. The ElGamal algorithm, which was used heavily in PGP, is based on the Diffie-Hellman key exchange, so any protocol that uses it is effectively implementing a kind of Diffie-Hellman. As one of the most common methods for safely distributing keys, the Diffie-Hellman key exchange is frequently implemented in security protocols such as TLS, IPsec, SSH, PGP, and many others. This makes it an integral part of our secure communications.
As part of these protocols, the Diffie-Hellman key exchange is often used to help secure your connection to a website, to remotely access another computer, and to send encrypted emails.

How does the Diffie-Hellman key exchange work?

The Diffie-Hellman key exchange is complex and it can be difficult to get your head around how it works. It uses very large numbers and a lot of math, something that many of us still dread from those long and boring high school lessons. To make things a bit easier to understand, we will start by explaining the Diffie-Hellman key exchange with an analogy. Once you have a big-picture idea of how it works, we’ll move on to a more technical description of the underlying processes.

The best analogy for the Diffie-Hellman scheme is to think of two people mixing paint. Let’s use the cryptography standard, and say that their names are Alice and Bob. They both agree on a random color to start with. Let’s say that they send each other a message and decide on yellow as their common color, just like in the diagram below:

They each set their own secret color. They do not tell the other party their choice. Let’s say that Alice chooses red, while Bob chooses a slightly greenish blue.

The next step is for both Alice and Bob to mix their secret color (red for Alice, greenish-blue for Bob) with the yellow that they mutually agreed upon. According to the diagram, Alice ends up with an orangish mix, while Bob’s result is a deeper blue. Once they have finished the mixing, they send the result to the other party. Alice receives the deeper blue, while Bob is sent the orange-colored paint.

Once they have received the mixed result from their partner, they then add their secret color to it. Alice takes the deeper blue and adds her secret red paint, while Bob adds his secret greenish-blue to the orange mix he just received. The result? They both come out with the same color, which in this case is a disgusting brown. It may not be the kind of color that you would want to paint your living room with, but it is a shared color nonetheless. This shared color is referred to as the common secret.

The critical part of the Diffie-Hellman key exchange is that both parties end up with the same result, without ever needing to send the entirety of the common secret across the communication channel. Choosing a common color, their own secret colors, exchanging the mix and then adding their own color once more gives both parties a way to arrive at the same common secret without ever having to send across the whole thing.

If an attacker is listening to the exchange, all that they can access is the common yellow color that Alice and Bob start with, as well as the mixed colors that are exchanged. Since this is really done with enormous numbers instead of paint, these pieces of information aren’t enough for the attacker to discern either of the initial secret colors, or the common secret (technically it is possible to compute the common secret from this information, but in a secure implementation of the Diffie-Hellman key exchange, it would take an infeasible amount of time and computational resources to do so).

This structure of the Diffie-Hellman key exchange is what makes it so useful. It allows the two parties to communicate over a potentially dangerous connection and still come up with a shared secret that they can use to make encryption keys for their future communications. It doesn’t matter if any attackers are listening in, because the complete shared secret is never sent over the connection.
The technical details of the Diffie-Hellman key exchange

Time for some math… Don’t worry, we’ll take it slow and try to make the whole process as easy to understand as possible. It follows a similar premise as the analogy shown above, but instead of mixing and sending colors, the Diffie-Hellman scheme actually makes calculations based on exceptionally large prime numbers, then sends them across. To ensure security, it is recommended that the prime (p) is at least 2048 bits long, which is the binary equivalent of a decimal number of about this size:

41536875762873659842593824756984376582763487912837582736592873684273684728938572983759283475934875938475928475928739587249587298739587298357928759827958375298763482736857298435793487958279385792873954877239759283759247859386704598679238473782673526735476235687348693869456734568276594980638490248758096039479027945982730187439759284620950293759287049502938058920983945872094860298491283750294801937109248019358103799581093750193850791395710937597019385089103951073058710393701934701938091803984091804981093801985013984019835091835019830910791803958103951903951809358109385019840193580193840198340918093851098309180019

To prevent anyone’s head from exploding, we’ll run this explanation through with much smaller numbers. Be aware that the Diffie-Hellman key exchange would be insecure if it used numbers as small as those in our example. We are only using such small numbers to demonstrate the concept in a simpler manner.

In the most basic form of the Diffie-Hellman key exchange, Alice and Bob begin by mutually deciding upon two numbers to start with, as opposed to the single common paint in the example above. These are the modulus (p) and the base (g). In practical use, the modulus (p) is a very large prime number, while the base (g) is relatively small to simplify calculations. The base (g) is derived from a cyclic group (G) that is normally generated well before the other steps take place.

For our example, let’s say that the modulus (p) is 17, while the base (g) is 4. Once they have mutually decided on these numbers, Alice settles on a secret number (a) for herself, while Bob chooses his own secret number (b). Let’s say that they choose:

a = 3
b = 6

Alice then performs the following calculation to give her the number that she will send to Bob:

A = g^a mod p

In the above calculation, mod signifies a modulo operation. These are essentially calculations to figure out the remainder after dividing the left side by the right. As an example:

15 mod 4 = 3

If you understand how modulo operations work, you can do them yourself in the following calculations; otherwise you can use an online calculator.

So let’s put our numbers into the formula:

A = 4^3 mod 17
A = 64 mod 17
A = 13

When we do the same for Bob, we get:

B = 4^6 mod 17
B = 4096 mod 17
B = 16

Alice then sends her result (A) to Bob, while Bob sends his figure (B) to Alice. Alice then calculates the shared secret (s) using the number she received from Bob (B) and her secret number (a), using the following formula:

s = B^a mod p
s = 16^3 mod 17
s = 4096 mod 17
s = 16

Bob then performs what is essentially the same calculation, but with the number that Alice sent him (A), as well as his own secret number (b):

s = A^b mod p
s = 13^6 mod 17
s = 4,826,809 mod 17
s = 16

As you can see, both parties ended up with the same result for s: 16. This is the shared secret, which only Alice and Bob know.
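The arithmetic above translates directly into code. This short sketch (ours, not from the original article) reproduces the toy exchange with Python's built-in three-argument pow(), which performs modular exponentiation; remember that real implementations use primes of 2048 bits or more.

```python
# Publicly agreed parameters (toy-sized; never use values this small in practice).
p, g = 17, 4

# Each party's private number stays secret.
a, b = 3, 6

# Values exchanged over the open channel.
A = pow(g, a, p)   # Alice computes 4^3 mod 17 = 13
B = pow(g, b, p)   # Bob computes 4^6 mod 17 = 16

# Each side combines its own secret with the other's public value.
s_alice = pow(B, a, p)   # 16^3 mod 17
s_bob = pow(A, b, p)     # 13^6 mod 17

assert s_alice == s_bob == 16  # the shared secret from the worked example
print("shared secret:", s_alice)
```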
They can then use this to set up a key for symmetric encryption, allowing them to safely send information between themselves in a way that only they can access it. Note that although B and s are the same in the example above, this is just a coincidence based on the small numbers that were chosen for this illustration. Normally, these values would not be the same in a real implementation of the Diffie-Hellman key exchange.

Even though much of the above data is sent across the channel in cleartext (p, g, A and B) and can be read by potential attackers, the shared secret (s) is never transmitted. It would not be practical for an attacker to calculate the shared secret (s) or either of the secret numbers (a and b) from the information that is sent in cleartext. Of course, this assumes that the Diffie-Hellman key exchange is properly implemented and sufficiently large numbers are used. As long as these provisions are adhered to, the Diffie-Hellman key exchange is considered a safe way to establish a shared secret which can be used to secure future communications.

Establishing a shared key between multiple parties

The Diffie-Hellman key exchange can also be used to set up a shared key with a greater number of participants. It works in the same manner, except further rounds of the calculations are needed for each party to add in their secret number and end up with the same shared secret. Just like in the two-party version of the Diffie-Hellman key exchange, some parts of the information are sent across insecure channels, but not enough for an attacker to be able to compute the shared secret.

Why is the Diffie-Hellman key exchange secure?

On a mathematical level, the Diffie-Hellman key exchange relies on one-way functions as the basis for its security. These are calculations which are simple to do one way, but much more difficult to calculate in reverse. More specifically, it relies on the Diffie-Hellman problem, which assumes that under the right parameters, it is infeasible to calculate g^(ab) from the separate values of g, g^a and g^b. There is currently no publicly known way to easily find g^(ab) from the other values, which is why the Diffie-Hellman key exchange is considered secure, despite the fact that attackers can intercept the values p, g, A, and B.

Authentication & the Diffie-Hellman key exchange

In the real world, the Diffie-Hellman key exchange is rarely used by itself. The main reason behind this is that it provides no authentication, which leaves users vulnerable to man-in-the-middle attacks. These attacks can take place when the Diffie-Hellman key exchange is implemented by itself, because it has no means of verifying whether the other party in a connection is really who they say they are. Without any form of authentication, users may actually be connecting with attackers when they think they are communicating with a trusted party. For this reason, the Diffie-Hellman key exchange is generally implemented alongside some means of authentication. This often involves using digital certificates and a public-key algorithm, such as RSA, to verify the identity of each party.

Variations of the Diffie-Hellman key exchange

The Diffie-Hellman key exchange can be implemented in a number of different ways, and it has also provided the basis for several other algorithms. Some of these implementations provide authentication, while others have various cryptographic features such as perfect forward secrecy.
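One of the most widely deployed variations is elliptic-curve Diffie-Hellman, described next. As a preview of how little code a modern exchange takes, here is a sketch using recent versions of the pyca/cryptography library; it follows that library's documented ECDH example, and the HKDF "info" label is an arbitrary placeholder of our choosing.

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

# Each party generates an ephemeral key pair on the same named curve.
alice_private = ec.generate_private_key(ec.SECP256R1())
bob_private = ec.generate_private_key(ec.SECP256R1())

# In a real protocol only the *public* keys cross the wire.
alice_shared = alice_private.exchange(ec.ECDH(), bob_private.public_key())
bob_shared = bob_private.exchange(ec.ECDH(), alice_private.public_key())
assert alice_shared == bob_shared

# Derive a 32-byte symmetric key from the raw shared secret with HKDF;
# the raw secret should never be used directly as a key.
key = HKDF(
    algorithm=hashes.SHA256(),
    length=32,
    salt=None,
    info=b"handshake data",  # context label; placeholder for illustration
).derive(alice_shared)
print(key.hex())
```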
Elliptic-curve Diffie-Hellman takes advantage of the algebraic structure of elliptic curves to allow its implementations to achieve a similar level of security with a smaller key size. A 224-bit elliptic-curve key provides the same level of security as a 2048-bit RSA key. This can make exchanges more efficient and reduce the storage requirements. Apart from the smaller key length and the fact that it relies on the properties of elliptic curves, elliptic-curve Diffie-Hellman operates in a similar manner to the standard Diffie-Hellman key exchange.

TLS, which is a protocol that is used to secure much of the internet, can use the Diffie-Hellman exchange in three different ways: anonymous, static and ephemeral. In practice, only ephemeral Diffie-Hellman should be implemented, because the other options have security issues.

- Anonymous Diffie-Hellman – This version of the Diffie-Hellman key exchange doesn’t use any authentication, leaving it vulnerable to man-in-the-middle attacks. It should not be used or implemented.
- Static Diffie-Hellman – Static Diffie-Hellman uses certificates to authenticate the server. It does not authenticate the client by default, nor does it provide forward secrecy.
- Ephemeral Diffie-Hellman – This is considered the most secure implementation because it provides perfect forward secrecy. It is generally combined with an algorithm such as DSA or RSA to authenticate one or both of the parties in the connection. Ephemeral Diffie-Hellman uses different key pairs each time the protocol is run. This gives the connection perfect forward secrecy, because even if a key is compromised in the future, it can’t be used to decrypt all of the past messages.

ElGamal is a public-key algorithm built on top of the Diffie-Hellman key exchange. Like Diffie-Hellman, it contains no provisions for authentication on its own, and is generally combined with other mechanisms for this purpose. ElGamal was mainly used in PGP, GNU Privacy Guard and other systems because its main rival, RSA, was patented. RSA’s patent expired in 2000, which allowed it to be implemented freely after that date. Since then, ElGamal has not been implemented as frequently.

The Station-to-Station (STS) protocol is also based on the Diffie-Hellman key exchange. It’s another key agreement scheme; however, it provides protection against man-in-the-middle attacks as well as perfect forward secrecy. It requires both parties in the connection to already have a key pair, which is used to authenticate each side. If the parties aren’t already known to each other, then certificates can be used to validate the identities of both parties.

The Diffie-Hellman key exchange & RSA

As we discussed earlier, the Diffie-Hellman key exchange is often implemented alongside RSA or other algorithms to provide authentication for the connection. If you are familiar with RSA, you may be wondering why anyone would bother using the Diffie-Hellman key exchange as well, since RSA enables parties who have never previously met to communicate securely. RSA allows its users to encrypt messages with their correspondent’s public key, so that they can only be decrypted by the matching private key. However, in practice, RSA isn’t used to encrypt the entirety of the communications – this would be far too inefficient. Instead, RSA is often only used as a means to authenticate both parties.
It does this with the digital certificates of each party, which will have been verified by a certificate authority to prove that a certificate owner is truly who they say they are, and that the public key on the certificate actually belongs to them. For mutual authentication, each party will sign a message using their private key and then send it to their communication partner. Each recipient can then verify the identity of the other party by checking the signed messages against the public key on their communication partner’s digital certificate (see the above-mentioned article on RSA for more detail on how this works, particularly the Signing messages section).

Now that both parties have been authenticated, it’s technically possible to continue using RSA to safely send encrypted messages between themselves; however, it would end up being too inefficient. To get around this inefficiency, many security protocols use an algorithm such as the Diffie-Hellman key exchange to come up with a common secret that can be used to establish a shared symmetric key. This symmetric key is then used in a symmetric-key algorithm, such as AES, to encrypt the data that the two parties intend to send securely between themselves. It may seem like a complex and convoluted process, but it ends up being much quicker and less demanding on resources when compared to using a public-key algorithm for the whole exchange. This is because symmetric-key encryption is orders of magnitude more efficient than public-key encryption.

In addition to the inefficiencies that we just mentioned, there are some other downsides that would come from solely using RSA. RSA needs padding to make it secure, so an additional algorithm would need to be implemented appropriately alongside it to make it safe. RSA doesn’t provide perfect forward secrecy, either, which is another disadvantage when compared to the ephemeral Diffie-Hellman key exchange. Collectively, these reasons are why, in many situations, it’s best to only apply RSA in conjunction with the Diffie-Hellman key exchange. Alternatively, the Diffie-Hellman key exchange can be combined with an algorithm like the Digital Signature Standard (DSS) to provide authentication, key exchange, confidentiality, and integrity checking of the data. In such a situation, RSA is not necessary for securing the connection.

Security issues of the Diffie-Hellman key exchange

The security of the Diffie-Hellman key exchange is dependent on how it is implemented, as well as the numbers that are chosen for it. As we stated above, it has no means of authenticating the other party by itself, but in practice other mechanisms are used to ensure that the other party in a connection is not an impostor.

Parameters for number selection

If a real-world implementation of the Diffie-Hellman key exchange used numbers as small as those in our example, it would make the exchange process trivial for an attacker to crack. But it’s not just the size of the numbers that matters – the numbers also need to be sufficiently random. If a random number generator produces a predictable output, it can completely undermine the security of the Diffie-Hellman key exchange. The number p should be 2048 bits long to ensure security. The base, g, can be a relatively small number like 2, but it needs to come from an order of G that has a large prime factor.

The Logjam attack

The Diffie-Hellman key exchange was designed on the basis of the discrete logarithm problem being difficult to solve.
The most effective publicly known mechanism for finding the solution is the number field sieve algorithm. The capabilities of this algorithm were taken into account when the Diffie-Hellman key exchange was designed. By 1992, it was known that for a given group, G, three of the four steps involved in the algorithm could potentially be computed beforehand. If this progress was saved, the final step could be calculated in a comparatively short time. This wasn’t too concerning until it was realized that a significant portion of internet traffic uses the same groups that are 1024 bits or smaller.

In 2015, an academic team ran the calculations for the most common 512-bit prime used by the Diffie-Hellman key exchange in TLS. They were also able to downgrade 80% of TLS servers that supported DHE-EXPORT, so that they would accept a 512-bit export-grade Diffie-Hellman key exchange for the connection. This means that each of these servers is vulnerable to an attack from a well-resourced adversary. The researchers went on to extrapolate their results, estimating that a nation-state could break a 1024-bit prime. By breaking the single most commonly used 1024-bit prime, the academic team estimated that an adversary could monitor 18% of the one million most popular HTTPS websites. They went on to say that a second prime would enable the adversary to decrypt the connections of 66% of VPN servers and 26% of SSH servers. Later in the report, the academics suggested that the NSA may already have these capabilities: “A close reading of published NSA leaks shows that the agency’s attacks on VPNs are consistent with having achieved such a break.”

Despite this vulnerability, the Diffie-Hellman key exchange can still be secure if it is implemented correctly. As long as a 2048-bit key is used, the Logjam attack will not work. Updated browsers are also secure from this attack.

Is the Diffie-Hellman key exchange safe?

While the Diffie-Hellman key exchange may seem complex, it is a fundamental part of securely exchanging data online. As long as it is implemented alongside an appropriate authentication method and the numbers have been selected properly, it is not considered vulnerable to attack. The Diffie-Hellman key exchange was an innovative method for helping two unknown parties communicate safely when it was developed in the 1970s. While we now implement newer versions with larger keys to protect against modern technology, the protocol itself looks like it will continue to be secure until the arrival of quantum computing and the advanced attacks that will come with it.

How will quantum computing affect the Diffie-Hellman key exchange?

Quantum computing is an emerging branch of computing that continues to make breakthroughs. The specifics of how quantum computers work are complicated and out of the scope of this article; however, the technology does present significant problems to the field of cryptography. The simple explanation is that quantum computers are expected to be able to solve certain problems that are currently not feasible for classical computers. This will open up a lot of doors and bring about new possibilities. Sufficiently powerful quantum computers will be able to run quantum algorithms that can more effectively solve various mathematical problems. While this may sound great, the security of many of our current cryptographic mechanisms relies on these problems being difficult to solve.
If these mathematical problems become easier to compute, it also becomes easier to break these cryptographic mechanisms. One of these quantum algorithms is Grover’s algorithm. When quantum computers become powerful enough, Grover’s algorithm will speed up attacks against symmetric-key ciphers like AES. However, this threat can easily be mitigated by doubling the key size. The biggest concern is how Shor’s algorithm will affect public-key cryptography. This is because the security of the most common public-key algorithms relies on the immense difficulty of solving one of these three computations:

- The discrete logarithm problem
- The integer factorization problem
- The elliptic-curve discrete logarithm problem

The specifics of each don’t really matter, but you can follow the links if you want additional information. The important thing is that once sufficiently powerful quantum computers arrive, it will become much more practical to solve these problems with Shor’s algorithm. As these problems become easier to solve, the cryptographic systems that rely on them will become less secure. Public-key cryptography plays a fundamental role in protecting our communications, which is why quantum computing presents a huge challenge for cryptographers.

In the case of the Diffie-Hellman key exchange, its security relies on the impracticality of being able to solve the discrete logarithm problem with current technology and resources. However, the threats from Shor’s algorithm loom closer with each advance in quantum computing. It’s hard to come up with a rough timeline of when quantum computing will seriously threaten the Diffie-Hellman key exchange because some researchers are far more optimistic than others. Despite this, replacements for the Diffie-Hellman key exchange and other public-key algorithms are being developed to make sure we are prepared for when the time comes.

Potential replacements for the Diffie-Hellman key exchange

The danger from quantum computers isn’t immediate, so the cryptographic community has yet to settle on specific alternatives to the Diffie-Hellman key exchange. However, numerous paths are being pursued. We still don’t know exactly how the post-quantum world will look for cryptography, but the security community is actively working on the problems and keeping up with the advances in the quantum computing world. While there will be big changes in the future, it’s nothing that the average person needs to fear – you probably won’t even notice when any changes do take place.
The Network layer

The Network layer handles connectivity of the HP Switch to the network, and includes packet traffic transmitted to and from the server. Using the tests available in this layer, administrators can determine whether the network link to the target HP Switch is available or not, the bandwidth availability, the rate of packet transmissions to and from the host, and the uptime of the switch. In addition, administrators can also determine the operational state of the network interfaces and the reason why an interface is down.

Figure 1: The list of tests associated with the Network layer

The Device Uptime and Network Interfaces tests have been discussed in the Cisco Router
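The page does not show how these metrics are collected, but switch availability, uptime, and interface state are conventionally polled over SNMP. As a generic, hedged illustration (not necessarily eG Enterprise's actual mechanism), the Python sketch below uses the pysnmp library's classic hlapi to read the standard sysUpTime and ifOperStatus MIB objects from a switch; the address and community string are placeholders.

```python
from pysnmp.hlapi import (
    SnmpEngine, CommunityData, UdpTransportTarget,
    ContextData, ObjectType, ObjectIdentity, getCmd,
)

target_ip = "10.0.0.1"  # placeholder switch address

error_indication, error_status, error_index, var_binds = next(
    getCmd(
        SnmpEngine(),
        CommunityData("public", mpModel=1),       # SNMP v2c, placeholder community
        UdpTransportTarget((target_ip, 161)),
        ContextData(),
        ObjectType(ObjectIdentity("1.3.6.1.2.1.1.3.0")),      # sysUpTime
        ObjectType(ObjectIdentity("1.3.6.1.2.1.2.2.1.8.1")),  # ifOperStatus, interface 1
    )
)

if error_indication:
    print("SNMP error:", error_indication)
else:
    for name, value in var_binds:
        print(f"{name} = {value}")
```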
The Environmental Protection Agency has awarded Toledo, Ohio, $200,000 to use artificial intelligence to identify lead pipes that are endangering the safety of drinking water. Using funds from EPA’s State Environmental Justice Collaborative Problem-Solving Cooperative Agreement (SEJCA) Program, Toledo will develop a machine learning model that will predict the likelihood of a home having lead pipes. That information will allow the city to identify and prioritize those homes facing serious health risks from pipes that must be replaced.

Working with water infrastructure analytics consultant BlueConduit, the University of Toledo, the Toledo-Lucas County Health Department and local partners, the city aims to reduce lead exposure through well-tested, data-driven prioritization techniques, according to the project summary. The partners will assess the probability that a home’s water pipes are lead based on existing parcel and neighborhood-level data and a representative sample of water service lines in the city. A predictive algorithm will more accurately pinpoint the location of lead service lines without having to dig up pipes to determine whether they are copper or lead. It can cost between $3,000 and $10,000 per home to replace lead pipes, with part of the cost coming from the trial and error usually involved in accurately locating lead service lines, according to BlueConduit. This cost savings makes using the technology a priority. With the predictions in hand, Toledo officials will be able to prioritize remediation efforts, guiding decisions on whether homes should receive targeted education, water filters or replacement of their lead pipes.

BlueConduit also worked with officials in Flint, Mich., in 2016 and 2017 to deploy a predictive model to more accurately locate homes with lead service lines. Using decades-old handwritten notes, annotated maps and service records for homes in Flint, the company’s founders described how they cross-checked that information with data on the age, value and location of homes to build a predictive model to identify lead pipes. “Leveraging new algorithmic and statistical tools, we are able to produce a significantly more complete picture of the risks and challenges in Flint,” they wrote in 2016. The model hit an 80% accuracy rate, the company said, but the project was derailed over objections from members of the public who thought the AI-based model was unfairly ignoring their homes. After city officials stopped using the algorithm and started digging up whole blocks looking for lead service lines, only 15% of the excavated pipes were found to be lead, slowing the replacement program and adding costs.

“This project will reduce lead exposure risks for Toledo’s most vulnerable residents by using historical data and technology to target lead service line replacements,” said EPA Region 5 Administrator Kurt Thiede. “We are excited to fund such a worthy project, one that could serve as a model for cities around the country.” Through the SEJCA program, EPA is providing grants over a two-year period to advance collaborative work with communities facing environmental justice challenges. The goal is to further understand, promote and integrate approaches to provide meaningful and measurable improvements for public health and the environment.
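The article does not publish BlueConduit's model or Toledo's data, but the general approach it describes can be sketched in a few lines: fit a classifier on homes whose service-line material has been verified, then rank unverified homes by predicted probability of lead so that costly excavation is prioritized where the risk is greatest. Everything below (the features, the synthetic data, and the choice of scikit-learn's logistic regression) is hypothetical.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Hypothetical parcel-level features, e.g. [year built, assessed value,
# neighborhood median home age]; synthetic data stands in for real records.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
# Synthetic labels: 1 = service line verified lead, 0 = verified copper.
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=500) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)

# Rank unverified homes by predicted probability of lead, highest first.
lead_prob = model.predict_proba(X_test)[:, 1]
priority = np.argsort(-lead_prob)
print("top-5 highest-risk homes (test indices):", priority[:5])
```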
This is the second in a series of blog posts that will look at how common objections to the use of Bayesian networks can be overcome by clear thinking and appropriate models. The first post showed that even concepts that seem vague or imprecise can be represented in a probabilistic model. This post addresses another common objection, that the knowledge engineering required to specify a Bayesian network is often a prohibitively expensive task. Specifying a complex Bayesian network does require specifying a large number of parameters, specifically the entries on all of the required conditional probability tables (CPTs). Where do these parameters come from? In some applications it is possible to learn the parameters from data. This can work, but it is only possible when the data sets required for learning are available. Another possibility is that the parameters are defined through knowledge elicitation from domain experts. This can also work, but it may require an expensive effort. Knowledge engineering may require identifying and obtaining access to one or more domain experts, as well as statistics experts who understand the requirements of the Bayesian network. Multiple knowledge engineering sessions may be required to elicit and then refine the values. It is also possible to learn parameter values by combining expert knowledge with available data. At the end of the day the model and the parameters do have to be defined, and some potential users are scared away from using Bayesian networks because this step is perceived to be a prohibitively expensive bottleneck. However, in many cases it is possible to dramatically reduce the knowledge engineering effort to develop a model and define the parameters for a Bayesian network. I will illustrate an approach for this by introducing a toy problem, and defining a small Bayesian network to solve it. The approach has three components: an appropriate model, a recognition that neither perfection nor precision is required and an iterative process that builds, tests and refines the model. The first component is to build an appropriate model. When the problem involves reasoning about things operating in some domain, it often pays to think first about the objects, or agents, in the domain and build a model that represents them, their attributes and the relationships between them. The attributes are typically represented as random variables; the relationships may be random variables or may be represented by the graphical links in the Bayesian network. We do this initially without any regard to what observations we may have or expect to have. Then, once there is a model of the objects/agents in the domain, we extend the model to include the observations that are available to us or that might become available. The second part of the approach is a willingness to accept – even to embrace – simplicity, and a lack of precision. It is not necessary, especially with the first version of a model, to include every possible random variable in the model, or to require precision in specification of the model parameters. It is important to capture significant relationships, but it’s much easier to get a simple model working and then extend it than it is to create a complex model from scratch. The next step in the approach is spiral development. We build a small simple model of an important part of the problem, test it by interacting with it to make sure the model responds in believable ways, then make refinements or extensions until a useful model is achieved. 
With that introduction to the process, here is the toy problem, which uses the classic 'blind men and an elephant' example: Four blind men are walking on the savanna in Africa. They encounter an elephant. The first blind man bumps into one of the elephant's legs. He explores it with his hands and says: "I have found a tree." The second blind man encounters one of the elephant's ears: "No, it is a large palm leaf." The third encounters the elephant's trunk: "It is a python!" And the fourth blind man reaches out and finds the elephant's tail: "No, you are all wrong – it is just a rope, hanging from a tree." So, how can we combine these observations and reason that this is an elephant?

To build a model of this problem, first identify the objects, or agents, in the problem domain. We want to keep things simple at the start, so we can say that there is some object that the blind men have encountered, and that there are the blind men themselves.

Let's start with the object we wish to reason about: the object that the blind men have encountered. Its key attribute is its type, so we can start with a random variable that represents the type of the object. The object type is a random variable with multiple states. From the problem description, the possible states include 'tree,' 'palm leaf,' 'rope,' 'python' and, of course, 'elephant.' In Netica, a commercial Bayesian-network development package from Norsys Software Corp., this is represented as a single 'Object Type' node.

Now we consider the blind men. The important attribute for them is their observation of the object. The blind men are interchangeable, so we only need to specify the observation model once. The observation is a random variable with four states: 'tree,' 'palm leaf,' 'rope' and 'python.' Because it is an observation of the object's type, we model it in the Bayesian network as a child of the 'Object Type' node.
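Since the post's Netica screenshots are not reproduced here, a tiny sketch of that two-node structure in Python may help. This assumes the open-source pgmpy library – my choice purely for illustration; the post itself is built in Netica:

```python
from pgmpy.models import BayesianNetwork  # renamed DiscreteBayesianNetwork in newer releases

# Just the structure for now: the observation is a child of the object's type.
model = BayesianNetwork([("ObjectType", "BlindObservation")])
print(model.edges())  # [('ObjectType', 'BlindObservation')]
```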
The network at this point still has only default probability distributions. To complete the model, we need to define the parameters of the local probability distributions: a prior distribution across the states of 'Object Type,' and a conditional distribution for 'Blind Observation' given the object type. These numbers are not specified in the problem description, so where do they come from?

It would certainly be possible to devote considerable time and energy to defining the numbers – by reviewing literature, conducting surveys, designing and implementing randomized experiments with blind men and African savannas, or interviewing experts. In some problems that kind of effort may be appropriate. But for this model, and especially for the early versions of many models, it is not necessary to agonize over the process of defining the numbers needed for the required probability distributions. A lot of anecdotal evidence from constructing many Bayesian network models suggests that reasonable numbers will give reasonable results.

Let's start with the prior distribution for the object type. What follows is a stream-of-consciousness thought process that considers the problem and ends up with a prior probability distribution for Object Type. The model is developed from the 'world' defined by the problem description. In that world, we can reasonably assume that all of those states exist, so there will be no prior probabilities of zero. We can envision an African landscape with scattered trees, some of them palm trees. There is at least one elephant, and elephants are usually found together, in groups. And there must be at least the occasional rope hanging from a branch, plus the occasional python. Mentally examining this imagined landscape, we see lots of trees, a number of palm trees with large leaves and a parade of elephants. We probably can't see any ropes or pythons, but we know that they are there. That suggests there are more trees than palm leaves, more of either of them than elephants, and only the occasional rope or python.

We do not need to specify actual probabilities; articulating relative likelihoods for the different types is sufficient, because what matters is the ratio between the likelihoods we assign to the different states. Let's say 40 trees, 20 palm leaves, five elephants, and two apiece for ropes and pythons. (Note that a wide range of different numbers will work for this problem.) In the order that we defined the states, that yields the likelihood vector [40, 20, 2, 2, 5]. We can enter these numbers into the distribution table in Netica, and then use Netica's Table | Normalize function (which rescales them so that they sum to 100%) to turn those likelihoods into a prior probability distribution. (Probability distributions in Netica are typically shown as percentages.)
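That normalization step is trivial to reproduce outside Netica if you want to check the arithmetic – for example, in Python:

```python
import numpy as np

# The likelihood vector from the mental walk across the savanna:
# [tree, palmLeaf, rope, python, elephant]
likelihoods = np.array([40, 20, 2, 2, 5], dtype=float)

# Netica's Table | Normalize is just this rescaling.
prior = likelihoods / likelihoods.sum()
print(np.round(prior * 100, 1))  # roughly [58.0, 29.0, 2.9, 2.9, 7.2], in percent
```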
We next need to define the conditional probability distribution for a blind observation given the object type. That is, we must fill out a table with one row per object type; for each row, we must answer the question: What will a blind man observe if he encounters that object type? It would be possible to conceive of extensive experiments to collect data that would answer this question, or intense knowledge engineering sessions to try to elicit probabilities from knowledgeable experts. But often, especially in the early version of a model, it is possible to employ common-sense reasoning to come up with reasonable values for the needed numbers. As we did above, it is only necessary to specify likelihoods for each row; we can later use Netica to convert the likelihoods into probabilities. Again, what follows is the stream-of-consciousness thinking that can generate the required parameters.

First consider a blind man who encounters a tree. He is likely to recognize through touch that it is a tree, so that outcome should have a large likelihood. Yet all sensors are 'noisy' and subject to error – even blind men – so we don't want to use zero for any of the outcomes. Is there anything that might be confused for a tree? Perhaps a python, if it were hanging from a branch and holding still – but that wouldn't happen very often. Now pick some likelihood numbers consistent with that reasoning, say [80, 1, 1, 2].

Next, consider a blind man who encounters a palm leaf. He is likely to recognize that it is a palm leaf, and there is no other state that we would expect to be confused for one. We still recognize that all sensors are subject to error, so we do not wish to use any zeros. We must pick some numbers, so... [1, 80, 1, 1].

Now consider a blind man who encounters a rope, hanging from a branch. It is conceivable that a rope could be confused with a small, narrow tree trunk, and plausible that it could be confused with a python. Still, most of the time we expect that a rope will be recognized as a rope, and again we do not wish to use any zeros. So pick some numbers... [2, 1, 80, 10].

A blind man who encounters a python may be confused in similar ways. A python could be mistaken for a tree, or even more likely for a rope, but most of the time it will be recognized as a python. We need to pick some numbers, so we might select [2, 1, 10, 80].

Now we get to the last row of the conditional probability table, where we model the blind man encountering an elephant. How do we predict what a blind man will report? One possibility is simply to count up the opportunities for the different misclassifications described in the problem definition. An elephant has four legs, two ears, one tail and one trunk; we can use those counts as likelihoods: [4, 2, 1, 1].

At this point the table has been filled in with likelihoods. It is not necessary to use these exact numbers – a wide range of numbers will work for this problem. We use Netica's Table | Normalize function to convert the likelihoods to probabilities (which sum to 100% across each row), and the model is fully parameterized.

We can do the first round of 'testing' on this model by successively setting each state of Object Type, and then each state of Blind Observation, to make sure that these two random variables interact in ways that are expected and consistent with the problem domain. If necessary, make changes to the prior or to the conditional probabilities (or likelihoods) until the model 'feels' reasonable.

Now we can make three additional copies of the Blind Observation node, to represent the four blind men in the original story. When we apply the evidence reported in the story, we can see that the Bayesian network has indeed identified the object as an elephant!

[Note: The example Bayesian network discussed in this post, BlindMenAndElephant.neta, is available for download here. The example runs in Netica; a free demo version, available from the Norsys website, is more than sufficient to run it.]
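For readers without Netica, here is a sketch of the same four-observer network in Python, again assuming the pgmpy library rather than the post's own model file:

```python
import numpy as np
from pgmpy.models import BayesianNetwork  # renamed DiscreteBayesianNetwork in newer releases
from pgmpy.factors.discrete import TabularCPD
from pgmpy.inference import VariableElimination

object_states = ["tree", "palmLeaf", "rope", "python", "elephant"]
obs_states = ["tree", "palmLeaf", "rope", "python"]

# The likelihoods worked out above, normalized into proper distributions.
prior = np.array([40, 20, 2, 2, 5], dtype=float)
prior /= prior.sum()
likelihoods = np.array([
    [80, 1, 1, 2],   # object is a tree
    [1, 80, 1, 1],   # object is a palm leaf
    [2, 1, 80, 10],  # object is a rope
    [2, 1, 10, 80],  # object is a python
    [4, 2, 1, 1],    # object is an elephant: legs, ears, tail, trunk
], dtype=float)
cpt = likelihoods / likelihoods.sum(axis=1, keepdims=True)

# One Object Type node with four Blind Observation children.
obs_nodes = [f"BlindObservation{i}" for i in range(1, 5)]
model = BayesianNetwork([("ObjectType", obs) for obs in obs_nodes])
cpds = [TabularCPD("ObjectType", 5, prior.reshape(5, 1),
                   state_names={"ObjectType": object_states})]
for obs in obs_nodes:
    # pgmpy wants child states as rows and parent states as columns, hence .T
    cpds.append(TabularCPD(obs, 4, cpt.T, evidence=["ObjectType"],
                           evidence_card=[5],
                           state_names={obs: obs_states,
                                        "ObjectType": object_states}))
model.add_cpds(*cpds)
assert model.check_model()

# Apply the four reports from the story and query the object's type.
posterior = VariableElimination(model).query(
    ["ObjectType"],
    evidence={"BlindObservation1": "tree", "BlindObservation2": "palmLeaf",
              "BlindObservation3": "python", "BlindObservation4": "rope"})
print(posterior)  # 'elephant' should carry nearly all of the posterior mass
```

Running this concentrates almost all of the posterior probability on 'elephant,' matching the Netica result.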
This Bayesian network was developed for a small yet interesting toy problem, but it has relevance to more complex problems. First, the model was developed logically, starting with a model of the important agents in the domain and their attributes – in this case the object and its type – followed by a model of the observations that are available in the domain – in this case the observations of the blind men. Most importantly, it demonstrates that at least in some cases it is possible to define the parameters of a non-trivial model without an extensive or expensive knowledge engineering process. Reasonable numbers, defined using logical thinking, common sense and an understanding of the domain, are often sufficient to achieve reasonable results.

This problem, and this Bayesian network, can also be used to illustrate a common misstep in Bayesian modeling. Suppose that in our original modeling we had decided to make the observation the parent, and Object Type the child. This may even seem reasonable, because that is the way we think: if we reason from data to inference, it can 'make sense' to build the model that way. And if we do, we get a network in which the four observations are parents of Object Type. At first blush, that network may even seem reasonable – until we try to define its probability distributions. Even defining a prior across the states of the blind observations feels awkward. And when we try to define the conditional distribution of Object Type given four blind observations, we discover that we have to fill in a table with 4 x 4 x 4 x 4 = 256 rows. For each row, we have to answer questions like: "If one observation is 'tree,' the second observation is 'rope,' the third observation is again 'tree,' and the fourth observation is 'palmLeaf,' then what is the likelihood that the object is a 'tree'... a 'palmLeaf'... a 'rope,' etc.?" This does not sound like fun! There are many more parameters, and even understanding them well enough to try to specify them is hard. The lesson here is that if defining the parameters of the model is too painful, that is evidence your model is wrong. It is almost always better to model observations as children of the random variable that is being observed.

There are some other lessons that can be extracted from this toy problem. First, an astute reader may have asked early on: "Where did the elephant in the model come from? That is, why is 'elephant' one of the states of the object?" That's a valid question, since in a realistic problem we may not know that elephants exist until we encounter one. It is still possible to use a Bayesian network to reason in such a domain, and it is done by explicitly including the state 'other' in the model. For example, in this very problem suppose we had the same four blind men and the same observations, but suppose that the possibility of 'elephant' had not already been encoded in the model. Instead, a model can be constructed with five object states: the four that are known – 'tree,' 'palmLeaf,' 'rope,' and 'python' – and a fifth state of 'other.' The prior probability of 'other' will likely be small, but it should not be miniscule. The last row of the conditional probability table for the blind observations will then be the probability distribution across the possible observations, given that the object is 'other.' Without any additional information, we can assign equal probabilities to each observation state. When we apply the evidence of the four blind men to this model, we see that the probability of 'other' is very high. If an automated system using this Bayesian network were coded to raise an alert when the probability of 'other' exceeded some threshold, a human analyst would at some point have a 'Eureka!' moment: "Oh! It's an elephant!" Then the model could be extended to include the object state of 'elephant.' At that point, for completeness, the model should have six states for the object, including both 'elephant' and 'other' – to account for future encounters with other unexpected objects, say hippos, rhinoceroses or giraffes.
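To see the 'other' trick in numbers, here is the same calculation done directly with Bayes' rule – again just a sketch, and the prior weight of 3 for 'other' is my own placeholder for "small but not miniscule":

```python
import numpy as np

object_states = ["tree", "palmLeaf", "rope", "python", "other"]
obs_states = ["tree", "palmLeaf", "rope", "python"]

prior = np.array([40, 20, 2, 2, 3], dtype=float)
prior /= prior.sum()

likelihoods = np.array([
    [80, 1, 1, 2],
    [1, 80, 1, 1],
    [2, 1, 80, 10],
    [2, 1, 10, 80],
    [1, 1, 1, 1],    # 'other': no information, so a uniform row
], dtype=float)
cpt = likelihoods / likelihoods.sum(axis=1, keepdims=True)

# The four reports are conditionally independent given the object type,
# so their likelihoods simply multiply into the prior.
reports = ["tree", "palmLeaf", "python", "rope"]
posterior = prior.copy()
for r in reports:
    posterior *= cpt[:, obs_states.index(r)]
posterior /= posterior.sum()

for state, p in zip(object_states, posterior):
    print(f"{state:8s} {p:6.1%}")   # 'other' should dominate
```

With these numbers, 'other' ends up with roughly 98% of the posterior – exactly the kind of result that should trigger the analyst's "Eureka!" alert.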
Finally, note that this model is a very simple fusion system, which infers the presence of some (perhaps rare or unexpected) state of the world by fusing observations from multiple sensors. The sensors here are not even 'aware' of some important states of the world (i.e., the elephant). This fusion system could be extended to account for sensors with different accuracies (e.g., some blind men are more reliable than others) or for different types of sensors. The model has a prior distribution across the states of the object, but it could be extended with additional environment variables as parents of the Object Type node, which would provide different distributions for different locations in Africa, different times of year, and so on.

Any real-world problem will of course be considerably more complex than this example, with many variables and therefore a complex Bayesian network with many local probability distributions that require parameters. But we still have a reasonable prospect of defining a useful Bayesian network if we:

- Start small, beginning with simple models of the objects or agents that we wish to reason about, and then add the observations that we may have about those objects;
- Use engineering judgment to define reasonable parameters, without worrying about precision in early versions; and
- Test and evaluate the model by interacting with it – or with data, if available – and refine as necessary.

Once the simple model gives reasonable results, we can iterate to add new concepts and relationships until the model is complete enough to be useful.

Ed Wright, Ph.D., is a Senior Scientist at Haystax Technology.
Organizations track KPIs and metrics from all aspects of their business, often from millions or even billions of distinct sources. Data analytics is used to make sense of all the data being collected, to draw conclusions about what is happening within the systems being measured. Correlation analysis is a key function within data analytics.

What is Correlation Analysis?

Correlation analysis is the process of discovering the relationships among data metrics by looking at patterns in the data. Finding relationships between disparate events and patterns can reveal a common thread – an underlying cause of occurrences that, on a surface level, may appear unrelated and unexplainable. A high correlation points to a strong relationship between two metrics, while a low correlation means that the metrics are weakly related. A positive correlation means that both metrics increase in relation to each other, while a negative correlation means that as one metric increases, the other decreases. Put simply, correlation analysis measures how much one variable changes in step with another. When two metrics are highly and positively correlated, and one of them increases, you can expect the other to increase as well.

Why Is Correlation Analysis Important?

Just as you wouldn't evaluate a person's behavior in a vacuum, you shouldn't analyze metric performance in isolation. How metrics influence and relate to one another is incredibly important to data analytics, and has many useful applications in business. For example:

Marketing professionals use correlation analysis to evaluate the efficiency of a campaign by monitoring and testing customers' reactions to different marketing tactics. In this way, they can better understand and serve their customers.

Financial planners assess the correlation of an individual stock to an index such as the S&P 500 to determine whether adding the stock to an investment portfolio might decrease the portfolio's unsystematic risk.

Technical support teams can reduce alert fatigue by filtering irrelevant anomalies (based on the correlation) and grouping correlated anomalies into a single alert. Alert fatigue is a pain point many organizations face today – getting hundreds, even thousands, of separate alerts from multiple systems when many of them stem from the same incident.

For data scientists and those tasked with monitoring data, correlation analysis is incredibly valuable when used for root cause analysis, subsequently reducing time to detection (TTD) and time to remediation (TTR). Two unusual events or anomalies happening at the same time or rate can help to pinpoint an underlying cause of a problem. The organization will incur a lower cost if a problem can be understood and fixed sooner rather than later.
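As a concrete, if toy, illustration of positive and negative correlation, here is a quick sketch in Python with SciPy – the metric names and coefficients are invented purely for the example:

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
page_views = rng.normal(1000, 50, 200)
signups = 0.05 * page_views + rng.normal(0, 2, 200)      # moves with page views
error_rate = -0.01 * page_views + rng.normal(0, 1, 200)  # moves against them

r, p = pearsonr(page_views, signups)
print(f"page views vs signups:    r={r:+.2f} (p={p:.1g})")  # clearly positive
r, p = pearsonr(page_views, error_rate)
print(f"page views vs error rate: r={r:+.2f} (p={p:.1g})")  # negative
```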
How Does Correlation Analysis Relate to Business Monitoring?

Business monitoring is the process of collecting, monitoring and analyzing data from business functions to gauge performance and to support decision making. Anomaly detection is a supplementary process for identifying when a business process is experiencing an unexpected change. As organizations become more data-driven, they find themselves unable to scale their analytics capabilities without the help of automation. When an organization has thousands of metrics (or more), analyzing individual metrics can obscure key insights. A faster method is to use machine learning-based correlation analysis to group related metrics together. In this way, when a metric becomes anomalous, all the related events and metrics that are also anomalous are grouped together in a single incident, saving teams from searching through dashboards to find these relationships themselves.

Let's say that an eCommerce company has an unexpected drop in product sales. Using correlation analysis, the company sees the sales drop is tied to a spike in payment errors with PayPal. The fact that these two clearly related anomalies happened simultaneously is a good indication to start investigating the PayPal API.

Considerations and Challenges of Using Correlation Analysis

Correlation is not the same as causation. It's possible that two events are correlated but neither one is the cause of the other. Suppose you are driving a car and the engine temperature warning light comes on, and you hear a strange noise in the engine. The anomalies are related, but what is the root cause? The noise isn't the cause and neither is the overheating; they're just symptoms of an underlying problem that can point to the cause. A mechanic might look at those symptoms occurring together and suspect an oil leak as the cause of the problem. As this example illustrates, even in day-to-day life we resort to correlations, finding commonalities and relationships between symptoms so we can find the root cause.

In business monitoring, coupling anomaly detection with automated correlation analysis can help get to the root cause of incidents – but there are challenges to implementation and training.

One challenge is that an incident and its symptoms may manifest in different areas of the business that operate in silos. One side of the business may have no visibility into what is affected elsewhere in the company, yet correlating the events is critical for root cause analysis. For example, the roaming customers of a Tier 1 telco in Southeast Asia were using far less data than usual. This anomaly was correlated with an increase in DNS failures on the network; the issue with the DNS server prevented some roaming customers from connecting to the telco's network. The relationship between the two metrics is not an obvious one, since the DNS metric is measured in a completely different area of the network. Without correlating the two, the telco's Network Operations Center would have had a hard time understanding that the roaming incident was caused by the DNS server, which was prolonging customers' connection issues while traveling.

A second challenge is the ability to analyze millions or even billions of metrics across the business. A technique called Locality Sensitive Hashing (LSH) is used to scale up correlation techniques. LSH is an algorithmic method of hashing similar input items into the same buckets with high probability. It speeds up clustering and "nearest neighbor" search techniques in machine learning, and it is often used in image and video search and in other areas where there is a need to search across massive amounts of data.
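The article doesn't say which LSH family is used, but the random-hyperplane variant is easy to sketch: metric series that point in similar directions receive the same bit signature and land in the same bucket. Purely illustrative:

```python
import numpy as np
from collections import defaultdict

rng = np.random.default_rng(1)
n_points, n_bits = 96, 8
base = rng.normal(size=n_points)

metrics = {
    "checkout_latency": base + rng.normal(scale=0.1, size=n_points),
    "payment_errors":   base + rng.normal(scale=0.1, size=n_points),  # tracks latency
    "newsletter_opens": rng.normal(size=n_points),                    # unrelated
}

hyperplanes = rng.normal(size=(n_bits, n_points))

def lsh_signature(series: np.ndarray) -> str:
    # Each random hyperplane contributes one bit: which side does the vector fall on?
    return "".join("1" if h @ series > 0 else "0" for h in hyperplanes)

buckets = defaultdict(list)
for name, series in metrics.items():
    buckets[lsh_signature(series)].append(name)

for sig, names in buckets.items():
    print(sig, names)  # the two related metrics tend to hash together
```

Because only items sharing a bucket need to be compared, the expensive pairwise correlation step shrinks dramatically.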
A third challenge is to keep from correlating metrics that aren't actually related. These are known as spurious correlations. Common techniques for correlation analysis produce a lot of spurious correlations and should be avoided for purposes of root cause investigation. For instance, suppose a gaming company has multiple games in the market. Their performance metrics may at times bear a resemblance to one another, especially as gamers tend to play at the same times of day. Linear correlation would find the metrics closely related, but an incident in one game is often unrelated to an incident in the other.

Identifying the relationships among data metrics has many practical applications in business monitoring. Correlation analysis can help reveal the root cause of a problem and vastly reduce the time to remediation. And by using these relationships to group related anomalies and events, teams will have to grapple with fewer false positives and can get to addressing incidents faster.
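As a closing illustration of that spurious-correlation warning: two completely independent trending series – think daily actives for two unrelated games – routinely show a large Pearson r by chance alone. A toy sketch:

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(7)
walk_a = np.cumsum(rng.normal(size=500))   # e.g., daily actives, game A
walk_b = np.cumsum(rng.normal(size=500))   # e.g., daily actives, game B
print(f"raw series:  r={pearsonr(walk_a, walk_b)[0]:+.2f}")  # often large

# Correlating the day-to-day changes removes the shared-trend artifact.
print(f"differences: r={pearsonr(np.diff(walk_a), np.diff(walk_b))[0]:+.2f}")
```

Differencing the series before correlating is one common way to avoid being fooled by shared trends.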
There are times when your body heat rises. This condition is known as heat stress. The normal temperature is often quoted as 98.6°F, but it can be slightly lower or higher; the average temperature of adults is between 97.8°F and 99.0°F. Several factors can raise your body temperature:

- Dehydration, which lowers your body's ability to sweat to cool you down and support a normal temperature. Drinking coconut water is a great way to refresh and revitalize your body.
- An inflammatory illness, such as an infection. Such an illness can cause you to have a fever, which is one indication that something unusual is going on in your body.
- A thyroid disorder known as hyperthyroidism, which causes your body to produce too much thyroid hormone. Sipping a cup of fenugreek tea may help to bring on a sweat, allowing you to cool off.
- Eating spicy, oily, or fried food. In addition, nuts, meats, and other high-protein foods can contribute to heat stress. Fruits such as cantaloupe, watermelon, and strawberries are good options instead.
- Consuming drinks with caffeine or alcohol.
- Performing intense physical exercise, which generates a lot of heat through active muscles and the related blood circulation activity.

For more tips, follow our Today's Health Tip listing.
No matter what role an employee plays in an organization, they should understand essential security awareness topics and how to protect the company (and themselves) from potential cyber-attacks and security breaches. Here are a few of the most critical topics to cover in 2021.

Employees should know how to identify potentially dangerous websites and understand the risks of compromised browser security. They should also know how to keep their browser updated to the latest version, as well as how to avoid connections to unsafe Wi-Fi networks.

Business Email Compromise

Business email compromise (BEC) attacks happen when an email address is compromised and then used to steal money from a company or individual. Employees should understand how to identify phishing scams and recognize when an email request is suspicious. They should also know how to report a possible BEC attack and the company's approved processes for authorizing monetary transactions.

Employees should not only understand how to create and maintain strong device passwords, but they should also know basic best practices for keeping devices secure. Unlocked and unattended devices put companies at high risk. Removable devices are also a potential source of risk; educate employees on which media sources are appropriate for use on company devices and how to protect them.

Stealing private data and threatening to expose it, or blocking company access to data, are two methods attackers use to extort money from companies. Ransomware is profitable and, because of that, is becoming more and more common. Many companies pay the ransom and the incident is never reported, so the statistics we have on ransomware understate the problem – we just do not know by how much.

No matter how careful an organization is about security, it's likely that a data compromise will occur at some point. Make employees aware of the steps to report and mitigate an incident. Time is of the essence when security is breached, and a company-wide understanding of incident response policies will help control potential damage.

Access to company information is a privilege, and one that should be taken seriously. Information security protects digital assets from compromise and benefits all employees, as well as the business. Employees should read, understand and acknowledge the official information security policy and pledge to help keep data protected.

With the proliferation of mobile devices and remote work, employees have constant access to sensitive company information. Unfortunately, if a device is stolen, hackers and scammers can launch an attack. Teach employees how to set strong passcodes and protect devices from theft and compromise.

Multi-factor authentication, or MFA, uses a multi-step verification process to identify users before they are granted access to applications or services. Ensure that employees understand how to set up and use MFA and its benefits in keeping their accounts and information safe.

While many security topics revolve around passwords, it's worth holding a separate session to equip employees with the right strategies for creating strong passwords. They should also understand how often to change passwords and basic policies for password privacy. Password hygiene guidance evolves quickly, so this topic should be revisited regularly.

Phishing scams are becoming increasingly sophisticated and harder to identify.
These scams occur when an email that looks legitimate is sent to an employee, and they unwittingly click on a link, enter a password, or open an attachment that allows a scammer to access information, launch ransomware, or install another harmful virus. Make sure employees know what clues to look for with phishing and how to defend against it. And phishing attacks are moving beyond email: Slack phishing attacks are becoming more common.

In addition to device and password security – and assuming you still have an office – employees should know how to protect the physical office location from entry by unauthorized individuals. If the office requires a badge to enter, make sure employees understand the policy for holding doors open and propping open entryways, and how to report any suspicious activity in the event someone enters the office without appropriate security clearance. If employees work remotely, they should understand how to secure their physical devices when working from home or from shared locations (coffee shops, libraries, hotels, etc.).

Security Awareness for Remote Work

Having remote employees should influence every one of the above topics; at the very least, each topic should give special consideration to remote work. In-office physical security awareness posters have no impact when employees are not in offices, and in-person security awareness training sessions go away with remote work. Ensure your entire workforce – whether in the office, at home, or across the globe – has relevant security awareness training.
It is a brave new connected world out there, and there is no shortage of cybersecurity risks associated with everything we do. We can't even be sure that the technologies that keep us alive and healthy will work as intended if malicious actors set their sights on them.

The security challenges associated with healthcare

Flaws in medical data management systems and electronic health records can be exploited to steal or modify patient information, while vulnerabilities in medical devices and equipment bugs could lead to substandard care and misdiagnoses. Just recently, a group of researchers demonstrated that it's possible to create malware that would add fake tumours to medical scan images or remove real ones. The attack is possible because the files containing the images and scans are not digitally signed or encrypted. "An attacker may perform this act in order to stop a political candidate, sabotage research, commit insurance fraud, perform an act of terrorism, or even commit murder," the researchers noted.

Luckily for all of us, there are information security researchers who probe the security of biomedical devices and healthcare equipment, but there are still not enough of them in the medical industry, says Nina Alli, an infosec researcher herself and the Project Manager of the DEF CON BioHacking Village.

What are biomedical devices?

"There are numerous types of devices that can be categorized as biomedical devices: Electronic Medical Records (EMRs), radiology machines (MRI, CT, XR), heart monitors, pacemakers, fetal monitors, Patient Controlled Analgesic (PCA), Apple Watch and other heart monitoring wearables, ingestible sensors, insulin monitors, DaVinci Surgical Robot, etc. And new types of biomedical devices are constantly emerging," Alli told Help Net Security. "Numerous universities are working on Translational Medicine programs, which drive students to learn more about patient needs and encourages them to develop new biomedical devices that will help speed up prevention, diagnosis and therapy."

Manufacturers must adhere to regulations and processes set out by the US Food and Drug Administration (FDA) in order for these devices to be approved for use in the US. According to the FDA's numbers, the agency regulates more than 190,000 different devices manufactured by more than 18,000 firms in more than 21,000 medical device facilities worldwide.

The current situation

With degrees in biomedical informatics and translational medicine and many years of experience in the healthcare field, Alli is dedicated to helping the ecosystem understand the security challenges associated with healthcare and to collaborating on methods to solve those problems at scale. Currently, the things she worries about most are biomedical device makers that still use hardcoded or default device passwords and don't set devices' Wi-Fi and Bluetooth to "off" by default. Every device should automatically change its password once it's activated and engaged, she maintains, and Wi-Fi and Bluetooth communication functions should only be activated as necessary by the therapist or physician.

On the other hand, she has noticed positive changes. For example, medical device manufacturers have recently begun building security into their devices from the start, rather than bolting it on post-build. They are also more willing to talk openly about security challenges and address them, and they understand that having true security researchers helping them develop and check their code can only benefit their product and data security.
More good news is that the FDA has made great strides when it comes to improving the cybersecurity of medical devices and has defined plans to keep at it, especially when it comes to continuous security updating and patching, vulnerability disclosure and response mechanisms. Also, earlier this year, the FDA and the DEF CON Biohacking Village launched the #wehearthackers initiative. "The goal of this initiative is to encourage healthcare ecosystem stakeholders to work collaboratively with security researchers to ensure their devices are secure. On the day the initiative launched, five major device manufacturers pledged to work with us: BD, Medtronic, Philips Health, Abbott, and Thermo Fisher," Alli shared.

What to expect from the DEF CON Biohacking Village?

The DEF CON Biohacking Village is a multi-day biotechnology conference focused on breakthrough DIY, grinder, transhumanist, medical technology and information security, along with the related communities in the medical/healthcare ecosystem. The organizers celebrate the biohacker movement with a compendium of talks, demonstrations and a medical device Capture the Flag contest, which challenges hackers to defend a hospital under siege. The Biohacking Village, in collaboration with I Am The Cavalry, also runs a Medical Device Lab where security researchers can learn and build their skills alongside patients, medical device makers, hospitals, the FDA, and others. Medical device manufacturers, academic institutions, healthcare delivery organizations and individual security researchers are invited to put medical devices in the hands of security researchers for security testing.

"Every year we look at the medical ecosystem and think about ways to make our village more encompassing to show the attendees new technology and methodologies," Alli explained. As for the latest trends in the healthcare/biotechnology space, she says she has noticed quite a bit more DIYBio/citizen science emerging. "People are looking to make their own devices to solve health challenges or tinkering with brand name devices to ensure their security. Patients are asking for more control of their data and devices," she noted. "In recent years, patient healthcare technology literacy has increased, and they are now able to ask great questions about the handling, care, and underlying technology of their medical devices."
Security researchers have discovered a new, sophisticated computer virus, termed by many the "Son of Stuxnet", which they fear has the ability to cause widespread damage to critical infrastructure computers across the globe. Stuxnet, called by many the first cyber weapon in human history, was allegedly developed by a group of researchers backed by the Israeli defence forces and Mossad. It successfully penetrated computer networks in Iran and caused considerable damage to the country's nuclear research programme.

Security experts are of the view that the new Stuxnet clone was either developed by the creators of the original Stuxnet themselves, or by a group who somehow managed to get hold of the source code of the notoriously mysterious virus. Either way, it would have taken some serious talent on the part of the creators of this new worm, dubbed Duqu.

As of now, Duqu only opens a back door to the infected system, thereby allowing the command and control system to do virtually anything it wants with the victim machine. The command system is located somewhere in India, according to reports. "The kinds of consequences we could see ... if the computer is told download this file, it will download the file. If the file says shut off this service, and that had an effect on a power plant or a conveyor belt, it would do that," said Vikrum Thakur of the security solution provider Symantec, giving insight into the potential of this new threat, MSNBC reports. Further information can be obtained here.
Human events take place in time, one after the other. It is important to learn the sequence of historical events in order to trace them, reconstruct them, and weave the stories that tell of their connections. We need to learn the measures of time, such as year, decade, generation, and century. When they hear "once upon a time in history," they need to be able to ask "When did that happen?" and to know how to find the answer. Let's discuss a few major historical events in today's history.

1794: Battle of Fallen Timbers

In the early Republic, the United States Army suffered some of the most devastating defeats in its history. While the Continental Army of the War for Independence fared well against the European strategy employed by the British redcoats, particularly later in the war, the Indian warriors along the American frontier confounded many of the early senior officers. Two separate expeditions into the Northwest Territory, led by BG Josiah Harmar and MG Arthur St. Clair, were ambushed and nearly destroyed by Indians, primarily from the Miami tribe, with covert British aid. This period represented some of the darkest days in the history of the United States Army.

Eventually, a senior American officer emerged to lead the Army to victory and end much of the threat posed to American settlers northwest of the Ohio River. MG Anthony Wayne, who had already established himself as one of the premier American officers in the Continental Army, was given command of the Army and led it once again into Indian territory. Under Wayne's leadership, however, the results were much different. At the Battle of Fallen Timbers in August 1794, near present-day Toledo, Ohio, Wayne and his combined force of regulars and mounted Kentucky militia routed the Indians and largely eliminated the Indian threat in the Northwest Territory.

WHAT WAS INVENTED/DISCOVERED TODAY IN HISTORY?

1911: First around-the-world telegram is sent

On this day in 1911, a dispatcher in the New York Times office sends the first telegram around the world via commercial service. Exactly sixty-six years later, the National Aeronautics and Space Administration (NASA) sends a different kind of message – a phonograph record containing information about Earth for extraterrestrial beings – into space aboard the unmanned spacecraft Voyager II.

The New York Times decided to send its 1911 telegram to determine how fast a commercial message could be sent around the world by telegraph cable. The message, reading simply "This message sent around the world," left the dispatch room on the 17th floor of the New York Times building in New York at 7 p.m. on August 20. After it traveled more than 28,000 miles, relayed by sixteen different operators through San Francisco, the Philippines, Hong Kong, Saigon, Singapore, Bombay, Malta, Lisbon and the Azores, among other locations, the reply was received by the same operator 16.5 minutes later. It was the fastest time achieved by a commercial cablegram since the opening of the Pacific cable in 1900 by the Commercial Cable Company.

On the same day in 1977, a NASA rocket launched Voyager II, an unmanned 1,820-pound spacecraft, from Cape Canaveral, Florida. It was the first of two such craft to be launched that year on a "Grand Tour" of the planets, organized to coincide with a rare alignment of Jupiter, Saturn, Uranus, and Neptune.
Aboard Voyager II was a 12-inch copper phonograph record called "Sounds of Earth." Intended as a kind of introductory time capsule, the record included greetings in sixty languages and scientific information about planet Earth and the human race, along with music – classical, jazz and rock 'n' roll – nature sounds like thunder and surf, and recorded messages from President Jimmy Carter and other world leaders.
History deals with man's struggle through the ages. History is not static. By selecting "innumerable biographies" and presenting their lives in the appropriate social context and their ideas in the human context, we understand the sweep of events. History traces the fascinating story of how humanity has developed through the ages, how man has learned to use and control his environment, and how present institutions have grown out of the past. Let's discuss a few major historical events in today's history.

1776: British forces defeat Patriots in the Battle of Long Island

During the American Revolution, British forces under General William Howe defeat Patriot forces under General George Washington at the Battle of Long Island (also known as the Battle of Brooklyn or the Battle of Brooklyn Heights) in New York City. On 22nd August, Howe's troops landed on the southern beaches of Long Island, hoping to capture New York and gain control of the Hudson River – a victory that would divide the rebellious colonies in half. On 27th August, the redcoats marched against the Patriot position at Brooklyn Heights, overcoming the Americans at Gowanus Pass and then outflanking the entire Continental Army. Howe failed to follow the advice of his subordinates and storm the redoubts at Brooklyn Heights, and on 29th August General Washington ordered a brilliant retreat to Manhattan by boat, saving the Continental Army from capture. At the Battle of Long Island, the Americans suffered 1,000 casualties to the British loss of only 400 men. On 15th September, the British captured New York City.

1916: Romania's Entry into the War and Defeat by the Central Powers

On August 27, 1916, Romania finally declared war on Austria-Hungary. However, the Romanian army was not well prepared. A lack of equipment and training, combined with infrastructure deficits – especially the poorly developed rail network – caused many difficulties. After initial successes, including the swift conquest of a large portion of Transylvania, the offensive came to an abrupt halt. A massive counter-attack launched by German, Austro-Hungarian and Bulgarian armies pushed the Romanian armed forces onto the defensive. By the end of 1916, more than half of Romania, including the capital, Bucharest, was in the hands of the Central Powers. King Ferdinand, who had ruled Romania since 1914, had to flee to Iaşi, although the Germans and Austrians were unsuccessful in forcing Romania to its knees.
What is an SMB Relay Attack?

SMB Relay Attack – A Corporate Nightmare. Protecting against SMB Relays and MITM Attacks

"Going full ninja" is becoming a major nuisance for SMBs. Companies on the rise tend to put cybersecurity on hold – huge mistake! From ransomware to your run-of-the-mill phishing email, everything's out to get you. It's not paranoia – just stating the obvious. So, what's this about going full ninja? Well, it has something to do with today's topic – the SMB relay attack. Sounds fancy, but truth be told, anyone with access to Kali and some basic Metasploit skills can orchestrate this type of cyberattack. Is it an article-worthy subject, though? I believe it is. You see, SMB relay attacks do work, and they can be devastating. MITMs (Man-in-the-Middle attacks) are never good news. But that's a story for another time. Let's talk about SMB relay attacks.

What is an SMB?

No, it's not the acronym for Small to Medium-Sized Business or Super Mario Brothers. It stands for Server Message Block, a network file-sharing protocol that operates on the Application and Presentation layers but is heavily reliant on lower-level protocols (i.e. TCP/IP and NetBIOS). The SMB protocol allows a client (i.e. your machine) to communicate with a server and, by extension, with other network-based resources. It's also called a client/server protocol. SMB governs everything from internetwork file-sharing to doc-editing on a remote machine. Even the "out of paper" alert you receive on your computer when trying to print a document is the work of the SMB protocol.

The Server Message Block uses TCP port 445 for connection and, of course, data transmission. If the resource requested is located on the web, the address resolution is handled through the DNS. For smaller networks, address resolution is passed to LLMNR (Link-Local Multicast Name Resolution). Now, how this works is that the client can only 'talk' with the server after completing a three-way 'broshake'. I won't bother going into the technical details of this process, but I'll give you a quick run-down:

- A NetBIOS session is established between the client and the server,
- Server and client negotiate the SMB protocol dialect,
- Client logs on to the server with the proper credentials,
- Client connects to a shared resource hosted on the server (e.g. a wireless printer),
- Client opens a file on the share, and
- Client reads or edits the requested resource.

That would be a top-level overview of what happens during a regular SMB exchange.

How does an SMB Relay Attack Happen?

The SMB relay attack abuses the NTLM challenge-response protocol. Commonly, all SMB sessions used the NTLM protocol for encryption and authentication purposes (i.e. NTLM over SMB). However, most sysadmins switched to Kerberos (KILE) over SMB after research proved that the first version of NTLM is susceptible to Man-in-the-Middle attacks, the SMB relay attack counting among them.

Now, in normal client-server communication, there is a series of requests followed by responses. The idea behind an SMB relay attack is to position yourself between the client and the server in order to capture the data packets transmitted between the two entities. As to the purpose of this action, it's easy to guess – capture password hashes, bits of conversation from IMs, and other types of info that can be used to dupe the server – one goal to rule them all. Now, to understand what happens during an SMB relay, I've decided to take the highwayman's high way and include a step-by-step example.
Obviously, I’ll leave out some of the details. After all, we’re not hackers, and we don’t intend on taking on the hacker’s hat (i.e. the black one, of course) anytime soon. Enjoy! Step 1. Scanning the network. A tool like NMAP is used to scan out the network for shares and IP addresses. Alternatively, you can use Metasploit to quickly map out network shares. Kind of useless if you don’t know the target’s credentials, but still a great go-to solution. Now, if you feel lucky, you can also use Windows’ Explorer to discover network shares. This only works only if the hosts have enabled the access-based enumeration features. Step 2. Using Metasploit or a similar tool, to conduct the attack. Remember that the purpose of this endeavor is to capture and ‘listen’ to enough auth packets in order to trick the server into believing that the attacker is actually the user. Step 3. If the server’s running NTLM version 2.0, you would need to approach this differently, and that way would be the Impacket (i.e. collection of network protocols). Step 4. The payload’s created with msfvenom. After that, we can use Metasploit to commence the Meterpreter session. Be warned – your payload is doomed to fail if the target machine doesn’t have administrator rights to the duped server. Step 5. Once the payload’s delivered, you would have gained access to the shell. That’s it! You’re in and can do whatever you want (or not). Protecting your assets against SMB relay attacks So, what can one do to protect your corporate assets from this type of MITM attack? Believe it or not, SMB relay attacks are a corporate nightmare since most servers run on legacy. Not to worry; everything can be fixed. On that note, here’s a couple of advice on how to keep your network and endpoints safe. 1. Remove the first version of SMB Besides the fact that this protocol belongs in a museum, not a modern corporate network architecture, it’s very unreliable security-wise. The best way to go about this would be to ditch SMB1 and replace it with SMB 3.0 or higher. Microsoft’s SMB 3.1.1 released a while back has tons of new security-centric features including integrity check and AES-128 encryption. Microsoft’s TechCommunity forum has a great and detailed tutorial on how to remove SMB1. 2. Regulate outbound SMB destinations A firewall with advanced control is the best way to restrict the outbound SMB destination (i.e. ensuring that it doesn’t point to a hacker-controlled server). Heimdal™ Next-Gen Antivirus & MDM (also part of Endpoint Security Software) packs advanced firewall features, that will not only give you granular control over what happens inside and outside your network but will also prevent MITM attacks, including SMB relays. - Next-gen Antivirus & Firewall which stops known threats; - DNS traffic filter which stops unknown threats; - Automatic patches for your software and apps with no interruptions; - Privileged Access Management and Application Control, all in one unified dashboard 3. Implement UNC Hardening Back in 2015, Microsoft introduced UNC Hardening in SMB comms to bolster security. What UMC does is to ‘force’ SMB to use client-defined security rather than relying on the server’s requirement. To enforce UNC Hardening, please consult Microsoft’s MS15-011 article, under the “Configuring UNC Hardened Access through Group Policy.” SMB relay attacks don’t have the same potency as ransomware such as Ryuk or RobbinHood, but they can provide the necessary ‘backdoor’ to those two and others. 
As always, play it safe, keep your apps and software up to date, and practice good cybersecurity hygiene.
Technology Can Play a Vital Role in Shaping the Developing World's Growth

From cloud computing to artificial intelligence, technology is revolutionizing how the world economy functions. But while these shifts are enriching many in the advanced economies, the developing world is at risk of being left behind. To improve global economic prospects and avoid a deepening of inequality, developing-country policymakers must take seriously the implications of these shifts for their economies and their countries' position in the global economy, news outlets reported.

For years, the "digital divide" was narrowly defined in terms of Internet connectivity. But today, it manifests itself in the way businesses in rich countries use technology to strengthen their control of global value chains and extract a larger share of the value added created in the developing world.

Consider, for example, how recent innovations threaten the export-oriented industrialization strategy that has fuelled many countries' development in recent decades. By using abundant and low-cost labor, developing countries were able to increase their share of global manufacturing activities, creating jobs, attracting investment and, in some cases, kick-starting a broader industrialization process. But for the firms that took advantage of the opportunity to reduce costs by shifting manufacturing to the developing world, there was always a trade-off: offshore production meant a limited ability to respond quickly to shifts in consumer demand.

Now, technology may offer another option. By investing in "additive manufacturing", robots and other non-human tools, companies could move their production sites closer to their final markets. Adidas, for example, is employing some of these technologies to bring footwear "speed factories" to Germany and the United States. Similarly, as digital technology facilitates the cross-border sale of services, and protections for domestic service providers become increasingly difficult to enforce, domestically oriented services in developing countries will face growing global competition. While such shifts remain nascent, they represent a long-term threat to the development strategies on which many countries in the global South rely.

With advanced and emerging economies moving fast to capture new opportunities created by technology, the digital divide is widening at an accelerating pace. For example, China, which used a protectionist industrial policy to nurture domestic digital giants like Baidu and Tencent, is now supporting these firms as they move deeper into the development of new technologies and try to expand globally. Similarly, the European Union is supporting technology investments through its "digital single market", and through new policies in areas like venture capital, high-capacity computing and cloud computing. Indeed, plans for a "European cloud" have been put forth. There are very few, if any, comparable frameworks currently in place in the global South.

This must change, but how? Development strategists often suggest that poor countries cannot afford to dedicate resources to the digital economy. While that is true to some extent, failing to account for technology-driven economic trends will merely exacerbate the problem. In fact, such trends should be at the center of national development strategies.
Moreover, at a regional level, there is a need to analyze technology-driven economic shifts and design policies that take advantage of the opportunities they represent, while coping with the associated challenges. In Africa, for example, ongoing efforts to develop regional trade links and boost industrial co-operation – including frameworks like the Continental Free Trade Area initiative and Agenda 2063 – should include a focus on digital transformation strategies. Discussions on this front should be informed by lessons from other regions, such as the EU.

Call for New Tools

This should occur in the context of broader efforts to help local firms expand and become more competitive internationally. Too often, excitement about Africa's innovative startup ecosystem masks the challenges, such as small and fragmented domestic markets, that could impede long-term success.

Digital technology has already been put to good use in many parts of the developing world. Data-driven farming techniques are helping growers achieve higher yields, while mobile finance is broadening financial inclusion in poor communities. But these innovations will not be enough to prevent developing countries from falling behind in the global economy. To catch up with the global North, policymakers will need new tools.

To invest in those tools, developing countries will also need support from international organizations. For example, the ongoing World Trade Organization discussions about the rules that will govern the digital economy should be expanded to include strategies for leveling the global playing field. Overcoming the resource constraints that limit developing countries' investment in the digital economy will not be easy. But failing to do so will carry a steeper price. As leaders in the developing world seek to position their countries for sustainable growth, they must think globally and locally, without losing sight of the role that technology will play in shaping the economy of tomorrow.
The human brain is made up of approximately 100 billion neurons. These excitable little electrically charged cells process and transmit signals all around the body, giving us the ability to sense, contract muscles, release hormones, store memories and learn new skills - and that's just the start. Each of these 100 billion neurons makes approximately 1,000 connections, via synapses, to other neurons, adding up to a vast and complex network of 100 trillion data points that largely do the data storage work.

So how much data does this really equate to? Pioneering new research by the neuroscientific community has determined that the human brain could store up to 1 petabyte. That's 1,000,000,000,000,000 bytes.

Let's put this gigantic number into perspective. 1 petabyte is equal to:

- 4.7 billion copies of Harry Potter and the Philosopher's Stone.
- Watching your favourite shows on Netflix non-stop for 13.3 years.
- Listening to Spotify for almost 3,000 years.
- Printing 48,000 miles worth of photos. That's almost enough to wrap around the earth's equator twice.
- Storing the DNA of the entire population of the USA and cloning them twice.

What's incredible is that our human brains can store all of this information using only 20 watts of continuous power - the equivalent of a dim light bulb.

When it comes to girls vs boys, it's the ladies who come up trumps. Women are not only better at remembering but also win at recalling faster, more accurately and in far greater detail. Research suggests this is down to a variety of factors, including how memories are formed in childhood, the traditional caretaker role that women often take on, which helps them to hone this skill, and biological differences that emerge as male and female brains age.

But if our brains have such great capacity, why do we forget? According to memory researchers, the sheer amount of storage capacity available is irrelevant to memory recall. Forgetfulness and slow recall are a consequence of the storage process rather than storage space itself. This is because the human brain's storage process is slower than real-time world experience. Imagine the brain as a music player that stores every song ever produced in history. In order to hear your favourite song, you would still need to download it to the device, select the track and pull up the song before you could dance along. This is how we can imagine the brain and memory. Meanwhile, our brains are also working hard to take in new information alongside performing the basic motor functions that our bodies need to stay alive. Memory recall is far more complex than merely attempting to remember where you left your keys. That reminds me. Where are my keys?

Sadly, our brains don't only forget memories but can also deteriorate, leaving irreparable damage. The ability to access information in the brain can be hindered by injury and disease. Nerve fibres in the brain become fragile and can even disconnect, sealing themselves off. These fibres, otherwise known as axons, are vital for carrying information around the body, and the damage that follows can often be the leading cause of Alzheimer's and dementia.

But we can fight back! Although there is currently no cure for many of the diseases that cause brain deterioration, the best way to keep your brain in check is by living a healthy lifestyle. Maintaining a nutritious diet rich in omega-3 and antioxidants, cutting our alcohol intake, getting regular exercise and sleeping well are the best ways to look after our body's most precious organ.

If our brains continue to progress...
What will the brain of the future look like? If you think the concepts behind Total Recall and The Matrix were the stuff of science fiction, think again. Feeding knowledge directly into your brain could be a reality within our lifetime, with researchers claiming to have developed simulators that can teach new skills in short periods of time. Studies have used electric signals from trained pilots and 'uploaded' the data into novice subjects to assist them with learning how to fly a simulator plane - with staggering results. Although on a much smaller scale than sci-fi movies, this medical tech is just the start of things to come.

If you would like more information on Alzheimer's and dementia, visit the Alzheimer's Society for advice and support.
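These storage comparisons are just arithmetic on an assumed per-item size, so they are easy to sanity-check. The sketch below reproduces the figures above using assumed sizes - roughly 213 KB of text per novel, 8.6 GB per hour of streamed video and 40 MB per hour of streamed audio - which are illustrative guesses, not measured values.

```python
# Back-of-the-envelope check of the petabyte comparisons above.
# Per-item sizes are assumptions chosen for illustration, not measurements.
PETABYTE = 10**15            # 1 PB in bytes

NOVEL = 213 * 10**3          # ~213 KB of plain text per novel (assumed)
VIDEO_HOUR = 8.6 * 10**9     # ~8.6 GB per hour of HD streaming (assumed)
MUSIC_HOUR = 40 * 10**6      # ~40 MB per hour of streamed audio (assumed)

HOURS_PER_YEAR = 24 * 365

print(f"Novels per petabyte: {PETABYTE / NOVEL:,.0f}")                            # ~4.7 billion
print(f"Years of non-stop video: {PETABYTE / VIDEO_HOUR / HOURS_PER_YEAR:.1f}")   # ~13.3
print(f"Years of non-stop music: {PETABYTE / MUSIC_HOUR / HOURS_PER_YEAR:,.0f}")  # ~2,854
```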
Internet Connection Issues

Troubleshooting Your Internet Connection If You Can't Access the Web

We'll assume that your Internet was working at an earlier point. If you are setting it up for the first time, the steps listed below were not designed with that purpose in mind; you need to follow the instructions that came with your router or modem. I've included a series of definitions for the terminology used on this page.

Reboot the Computer

The first step should be rebooting your computer or device to see if that fixes the problem. You'd be surprised how often that simple step resolves issues. If restarting your computer or device doesn't work, you'll have to check out each potential problem area to see if it restores access.

Check These Areas

The most likely problem areas related to a loss of Internet access are one or more of the following:

- your computer is disconnected from the network (check the network settings);
- a proxy has been added to your browser or operating system;
- your high-speed modem and/or router needs resetting or is disconnected;
- a disconnected network cable (if your computer is wired);
- your computer needs to reboot;
- your firewall or security software is misconfigured; or
- your other software is misconfigured for access to the Internet.

Progress through the suggestions on this page to test alternative solutions. I've presented them in the order I'd likely proceed if I were assessing the problem and looking for solutions. If the issue is with your ISP or (rarely) a regional access issue, the resolution is beyond your control. You'll just have to wait for your ISP or the Internet infrastructure to repair the problem.

Everything Connects via a Network

Everything about Internet connection issues relates to how the various networks are operating. Unless the problem is directly related to your computer or device (tablet, smartphone, virtual assistant, Smart Home appliance), it will involve either your own network or one that is further along the chain. I'm going to use the terms computer and device interchangeably, because all that differs is how they are configured to connect to the Internet.

A network is a collection of computers and other devices connected and talking to each other.

- The most immediate network is the one inside your home or business (your home network).
- The next is your connection to your ISP (another network).
- Your ISP connects to the Internet (a world-wide network) through a regional collection of related networks.

Let's have a look at how each of these may be involved in the chain of connections from your computer to the website or service you're trying to reach.

Your Home Network

Your most basic home network is your computer connected directly to a modem provided by your ISP (usually via either a router or a router/modem combination). The router provides access to all the other devices connected to your network as well as to the Internet. Whether these devices can talk to each other (i.e., share information) depends upon how the network and the devices are configured. Your network should be secured using strong passwords for both your router and your WiFi.

Your ISP (Shaw, Telus, Rogers, etc.) provides you with your connection to the Internet via its own network (which includes all its customers' networks). Your ISP then connects through a network of additional connections to the Internet (designed originally to withstand a nuclear attack by switching automatically to whatever routing is available).
Public Networks

Public networks include free community WiFi networks, coffee-shop WiFi, public library networks and other similar Internet connections that you don't control. You may be connected using your own laptop, tablet or smartphone, or you may be using a public computer (such as those provided by a library or school). If you're having difficulty connecting on a public network, you'll need to talk to the staff to determine how to resolve the issue. Sometimes the staff have no control.

You Can't Trust Public Networks

If you are using public access from a connection that you don't control (something other than your home network) or one that isn't secured properly (you haven't changed the default passwords or enabled security), then you are placing your computer and data at risk. Everyone on an insecure (public) network, such as at a coffee shop, can potentially "see" the information you are sending and receiving on that WiFi service. All it takes is some software that is easily obtained on the Internet. Even if you're using a gated network (one that requires you to sign on), unless you control that network, you can't trust it. NEVER do Internet banking or similar risky activity on a public network.

Cellular Networks

Cellular networks are those provided by cellular ISPs. These networks are separate from the typical home or business network and usually have relatively small data caps. Cellular networks are fairly reliable (the number of cell towers and their location determines the strength of your signal) but do sometimes go down. Cellular service is more secure than free WiFi. However, just like your home network, everything you do on your cell is visible to your cellular provider unless you use a VPN. Other than ensuring that cellular service is turned on for your device, there is little you can do to resolve connection issues except move to an area with better reception or call your cellular provider for assistance.

Securing Your Network

It is important that you secure your own network. This is beyond the scope of connection issues, but there are resources on this site that will help you to do that. At the very least, you should change the default passwords used to configure your router and connect to your WiFi.

Troubleshooting Access Problems

Where I refer to your router, this may be configured as a separate high-speed modem connected to an external router or as an all-in-one combined modem/router supplied by your ISP (most common). If the devices are separate, then both need to be reset when you are instructed to reset your router in the steps listed on this page.

- Turn off the modem first, then the router;
- Use the reverse sequence when restoring power.

There is No Internet Access

Try the following series of steps, in order, to see if this fixes your problem. You can stop when you resolve the issue(s) you are having.

- Check the network icon (or wireless connection settings) to see if you have Internet access. Ensure that your network adapter is not turned off.
- Check for changes to proxy settings.
- Check the network cables if your computer is wired to the router.
- Reset your router.
- Check your firewall or security software. There are specific troubleshooting steps for ZoneAlarm issues.
- Check your browser access issues or email problems.

The next few sections will expand these steps into a series of instructions. Where Linux is indicated, I've based these on Linux Mint, the version I'm currently working with.

Check the Network

Check the network connection on your computer.
This connects other computers in your network as well as providing access to the Internet via your ISP. Depending upon your operating system and your settings, there may be a network icon at the top or bottom of the screen, or it may be hidden.

Your Internet connection can include either or both wired and wireless connections (see terminology). Whichever you're using, there is likely a router involved, whether it is your home network, a public network (such as at a coffee shop), a business network or a community wireless network. If you're not using your own network, you'll need to speak to the person responsible for that network for details on how to fix any issues.

Check the Wireless Settings

If you're connected wirelessly, you'll see a listing of available wireless networks. The wireless network you're currently logged into (if any) should be indicated. Most networks are protected by a security protocol and a password.

- You'll need to verify that your connection is strong enough and that the settings don't indicate any problems.
- If you're having difficulties connecting or if there is a problem with the connection, you'll need to diagnose it.
- If you don't control the network, you'll need to ask for the password and may need additional help diagnosing the problems.
- Some public networks are heavily used and can be very slow even when everything is working fine.

Check the Wired Settings

If you're connected via CAT5 or CAT6 network cables, you should check the following:

- Check the cables to ensure that they aren't unplugged or damaged.
- Be sure that the network adapter isn't disabled.
- You may need to reset the router, then reboot your computer.

Network Settings by Operating System

The following are specific to each operating system. If yours isn't listed, look for your computer or device documentation.

Windows

Windows 10 has changed the way that these settings work over time, so you may see something different than what is indicated here. Windows 10's network icon is on the right side of the taskbar in its default configuration. The icon changes from a globe to a computer to a WiFi icon depending upon your connection and its status. Click the network icon to see the status of your Internet connection(s) and to connect to listed WiFi networks. Look for the word "connected" for both LAN and WLAN connections to ensure they are working correctly.

Clicking on Network & Internet settings brings up the Status page. At the top is a diagram of your network status: there should be solid lines between your computer, the network and the Internet (a private LAN connection; yours could display different icons). Through the various settings on this Status page, you can:

- Label the network a metered connection if you have a limited data plan.
- Change other properties and troubleshoot problems using the network troubleshooter.
- Enable or disable a network adapter.
- Add a VPN or manage other services.

A VPN may disable your connection to the Internet if it is disconnected (a security measure to protect your privacy). Reconnecting or turning off the VPN should resolve any issues.

Mac

Open the Network Preferences from the WLAN icon or look in the System Preferences to see your network connections. You may have active connections for Ethernet (LAN) and/or WiFi (WLAN). If everything is normal, you should see "connected" indicated in the appropriate location(s). If not, click on Assist Me at the bottom, then Diagnostics on the dialogue box that appears.
Follow the instructions for the connection that is having problems.

Linux

There are two areas dealing with your network connections:

- under Administration (Network: configure network devices and connections); and
- under Preferences (Network Connections: manage and change your network connection settings).

You'll need to unlock the Network Settings with the Administrator password to make changes.

- 13 tips to troubleshoot your Internet connection.
- How to solve Internet problems.
- How to fix your Internet connection in Ubuntu Linux (Mint is based upon Ubuntu).

More advanced or adventurous users can try using network commands to troubleshoot the network:

- 13 Linux network configuration and troubleshooting commands.
- Linux network commands used in network troubleshooting.

iOS or Android

Mobile devices can connect via both wireless networks and cellular networks (smartphones and cellular-capable tablets). At least one must be enabled and have access to an available network to use the Internet.

- Look under Settings for the WiFi and cellular (where available) settings.
- Ensure that WiFi or cellular is enabled.
- Ensure that airplane mode is NOT on.
- Ensure that Do Not Disturb is NOT enabled.

Check the Proxy Settings

Most users should not touch the proxy settings, leaving them at the default, which is System Settings. Changing the proxy settings can disable Internet access and is something that malware and other malicious programs do to maintain control of your computer.

Browser Proxy Settings

Each browser has proxy settings, but most users should leave these settings alone.

System Proxy Settings

If you're in an office where your computer is provided by your employer, you'll want to verify the settings with whoever is responsible for the network. It is generally not recommended that users change these, but it is possible your Internet connection isn't working because something else changed the proxy settings, such as malware or a program installed by a scammer (more here…).

- Windows users will find these in the Network & Internet settings. Click on the Proxy tab. Only Automatically detect settings should be checked. Uncheck Use a proxy server, then verify that you have Internet access.
- Mac users will find these in the System Preferences. Click on Network, then Advanced, and choose the Proxies tab. Normally none of the options should be checked other than Use Passive FTP Mode at the bottom. My computer also has *.local, 169.254/16 under Bypass proxy settings for these Hosts & Domains.
- Linux users will find these settings in the Network Proxy Preferences (click on Preferences, then Network Proxy). The default should be Direct internet connection.

Check the Cables

The troubleshooter may prompt you to check the router settings, but first you'll need to ensure that the network cables are firmly attached, that your modem is connected to either the cable outlet or the phone line (depending upon which ISP's service you're using) and that the cables are not damaged.

- Check the connections at both ends of all the wires. This may sound silly, but things get pulled or simply break.
- Check the connection to the cable jack or phone line as well as the CAT5 or CAT6 network cables between the modem and/or router and the computers.
- On most systems there should be a green LED lit if the network cable connection is working.

Try replacing the cables.
If the connector retainer (the small, springy plastic tab that holds the cable firmly in place) is broken or has lost its ability to retain a firm connection, then the connection may be weak or intermittent.

Reset the Router

If instructed by the network troubleshooter (or if you've completed the steps above), you'll need to ensure that the problem isn't with your router.

Recycling Power to Your Router

Start by recycling the power to your router (and modem, if they are separate):

- Turn off the power to the modem (then the router), and wait for two minutes.
- Turn the modem on and wait for the lights to settle (you should see a steady light on the modem), then turn on the router.
- Wait 30 seconds.
- Turn your computer on.

This process will force a new IP lease from your ISP, and everything should now work. Recycling the power is necessary because your ISP (Shaw, Rogers, Telus, etc.) changes dynamic IP addresses every so often, disabling those that have been running for too long.

Consider plugging the modem and router into a decent power bar:

- This will allow you to turn off the power to both the modem and router with a single switch.
- Don't use the $10 variety; replacing your computer, modem, router and associated gear will cost more than that.

Try Without the Router

If you continue to have problems and you have a separate modem, you can try your modem without the router. If the Internet is accessible, try to run it with the router again. If that fails, proceed to the next step in resetting and setting up your router.

Resetting Your Router

If you continue to have problems, you should try resetting the router.

- Factory settings are the defaults that came with your router. Resetting your router will remove any customized settings.
- Make a note of any existing settings before resetting your router (if possible). Many provide a method of saving settings to your computer.
- Most have a recessed reset button. To restore factory settings, hold down the button for a minute or two with the tip of a ballpoint pen or paper clip.

Configuring the Router

You will then have to configure your router to set up your network and connect to your ISP.

- Ensure that your computer is connected to the router with a network cable during the setup process.
- Never alter your router settings while connected through a wireless connection; you will lose access to the router when it reboots during the setup process.

You may wish to have some professional help to ensure you retain the maximum security and correct settings for your network. At the very least, you should read the manual provided with your router so you understand the process and what each of the settings will change.

- You can obtain the instructions for your particular router from the manufacturer's website or from the documentation that came with your router.
- Never retain the default settings, as this compromises your network security.
- Change the default settings (especially the password) to protect your network from malicious attacks.

Check the Firewall

If you continue to have problems connecting to the Internet, check the firewall for issues. Be sure that software is not misconfigured. I'm assuming that you've tried resetting your router and then rebooting your computer before looking at this section.

The firewall's job is to protect your computer from unauthorized access. If there is a problem with the firewall settings, then your Internet connection may not be working, or the firewall may not be protecting your computer from threats on the Internet.
A paid security suite will generally provide better protection against a multifaceted attack.

Software with Access Issues

If the access issue is with a specific piece of software (i.e., everything else has Internet access), then the challenge is figuring out why. The most likely culprit is a firewall setting that prevents access. Check your software documentation or the manufacturer's website for details on how to troubleshoot your particular program.

Check the Security Software

This section refers to ZoneAlarm as an example. Your security software may operate differently, but you should be able to duplicate the following steps. Check your software settings and any logs to see if a particular program is blocked or if all Internet access is disabled. Your product manual or the company's website should give you more information.

Avoid Multiple Security Programs

Do NOT run multiple security programs. If you have more than one antivirus program running at the same time, or more than one firewall, you're asking for trouble. Two such programs, trying to do the same thing at the same time, will slow down your system. Worse, they can cause conflicts.

Incorrect Settings Block Access

If you have not configured it properly, your Internet service might not work, or a particular program may not have access. Recent versions of ZoneAlarm are much easier to configure and require less hands-on management.

Test Using Another Device

Before proceeding, try testing the connection using another computer or mobile device that you know is working. If that device has access, you know the issue is not your Internet service.

If ZoneAlarm is incorrectly installed or misconfigured (or not running at all), uninstall then reinstall it. Uninstalling ZoneAlarm should remove any corrupted settings. See Uninstalling ZoneAlarm for instructions. If you have manually deleted portions of the program, you may have to reinstall ZoneAlarm before you're able to uninstall it.

Testing Without ZoneAlarm

Reboot, then briefly test to see if your Internet connection is restored without ZoneAlarm running.

- Remember, without security software, your computer and data are vulnerable.
- Test by loading a safe site with your browser.
- If that works, verify with other programs that were unable to connect.
- Break your connection as soon as you're able to verify connection status.

Do not reinstall ZoneAlarm until you've resolved all problems with access. If your connection is working, reinstall ZoneAlarm using the most current version.

- Be sure to download and install the same version. The licence for one product won't work on any other.
- You may need to do a clean install to remove any corrupt settings.

Testing with ZoneAlarm

Once you've reinstalled your ZoneAlarm product, repeat the access test to ensure everything is working correctly.

Hardware or ISP Issues

If your tests without ZoneAlarm installed didn't restore access, you need to look elsewhere for a solution. If you have followed the steps to this point and you still have a problem, you'll need to call your ISP to verify service or to repair the issue.

Testing Elsewhere (with ZoneAlarm)

You can take your computer to another location where you know the Internet is working (reinstall your security software first).
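If you're comfortable with a little scripting, you can automate the first round of triage. The sketch below is a minimal example (not part of any tool mentioned above) that distinguishes "no connectivity at all" from "routing works but DNS is broken", which tells you whether to suspect the cables and router or the proxy and resolver settings. The test hosts are arbitrary public services chosen for illustration.

```python
# Minimal connectivity triage: separates "no route at all" from
# "routing works but DNS is broken". Test hosts are illustrative.
import socket

def can_connect(host: str, port: int = 443, timeout: float = 3.0) -> bool:
    """Attempt a TCP connection; True if the handshake completes."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

ip_ok = can_connect("1.1.1.1")        # raw IP reachability, no DNS involved
dns_ok = can_connect("example.com")   # exercises DNS resolution plus routing

if ip_ok and dns_ok:
    print("Connectivity and DNS look fine; suspect the specific site or program.")
elif ip_ok:
    print("Routing works but DNS fails; check proxy and resolver settings.")
else:
    print("No outbound connectivity; check cables, WiFi, router and modem.")
```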
With over 4.2 billion users surfing the internet, it may seem nearly impossible to protect your sensitive information from hackers and thieves. Devices connected to the internet often house our personal and professional lives, and if a cyber criminal compromised this sensitive information, there could be disastrous consequences. To address this, the government designated October as Cybersecurity Awareness Month in 2004 to educate internet users on how to navigate the web safely.

Cybersecurity Awareness Month

During October, the United States Department of Homeland Security and the National Cyber Security Alliance provide easily accessible information and tips to all netizens on how they can protect themselves online. The campaign is vital because it gives everyone in cyberspace the tools and knowledge to thwart cybercriminals. Although society is quickly shifting to a more digitalized world, the majority of users remain blissfully unaware of the threats they can encounter online. Sharing too much information on social media sites, sending unencrypted personal information electronically, and making purchases on unsafe websites are common ways that criminals gain access to sensitive information.

"While the speed at which technology and information move can expose us to new risks online, it also enables a level of sharing and cooperation that can make us more resilient to cyber threats… National Cybersecurity Awareness Month isn't just about understanding the risks, but also emphasizing our collective power to combat them." – FBI Cyber Division Assistant Director Matt Gorham.

Furthermore, basic cyberattacks on personal devices can give criminals access to large businesses and organizations. For example, a hacker gained access to an employee's computer information, which eventually led to a cybersecurity breach at the United States Office of Personnel Management. The attack cost the agency $21.5 million. Therefore, it is imperative that every individual takes proper cybersecurity precautions.

As the government further educates internet users on cybersecurity risks, people are installing more security software; the number of software downloads has increased significantly since 2004. Additionally, more cybercriminals are being reported and convicted. For example, authorities arrested a cybercriminal who attempted to access university databases, made 74 arrests of members of overseas transnational criminal networks, and charged a North Korean regime programmer who conspired to conduct multiple damaging cyberattacks resulting in extensive data and money loss, hardware destruction, and the loss of other resources.

With increased awareness, internet users can stay safe and protect their private information. Cybersecurity is a responsibility that lies on everyone's shoulders. If we all fulfill our duty, we can significantly decrease the number of information breaches. For more information about cybersecurity, visit our website https://dynagrace.com/.
In 2003, a manager at the National Institute of Standards and Technology (NIST) authored the document that established many of the password best practices you probably loathe: using a combination of upper- and lower-case letters, numbers, and special characters. For years, we were told those rules were the minimum for robust passwords. More recently, however, the institute has reversed its stance, saying those complexity guidelines were ill-advised.

The burden those rules create is real. An average enterprise uses over a thousand cloud services, and even if a small business uses just a few dozen apps, securely managing account logins is still a huge problem for both users and administrators. No matter how valuable your cloud subscriptions are, each new set of login credentials users are forced to create and memorize adds another level of inefficiency.

If your business is constantly experiencing this issue, single sign-on (SSO) can help. With SSO, you create one user profile that logs you into all your online accounts. This technology is secure, easy to manage, and eliminates the need to remember a long list of usernames and passwords.
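To make the revised guidance concrete, here is a sketch of what a password check looks like when it follows the spirit of NIST's updated recommendations (SP 800-63B): enforce a minimum length and screen against known-compromised passwords instead of demanding special characters. The tiny blocklist below is a placeholder; a real deployment would check against a large breached-password corpus.

```python
# Sketch of a password check in the spirit of NIST's revised guidance:
# favor length and breached-password screening over composition rules.
# The blocklist is a tiny placeholder, not a real breach corpus.
COMMON_PASSWORDS = {"password", "123456", "qwerty", "letmein"}

def acceptable(password: str) -> tuple[bool, str]:
    if len(password) < 8:                      # minimum length, no symbol rules
        return False, "too short (minimum 8 characters)"
    if password.lower() in COMMON_PASSWORDS:   # screen against known-bad list
        return False, "appears on a common/breached password list"
    return True, "ok"

print(acceptable("letmein"))                        # fails: common password
print(acceptable("correct horse battery staple"))   # passes: long passphrase
```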
As a data scientist, some of the most important and interesting aspects of my profession include identifying causal relationships, performing "what if" analyses on different scenarios, and, overall, seeking to answer questions. After reading a recent news article on how large US healthcare providers are using data collected from consumers, such as food and lifestyle purchases, to assess whether or not someone is more or less likely to get sick, I think we need to bring those same critical thinking skills I use in my job to bear on what are very serious privacy concerns surrounding the use of people's personal information.

Under the guise of trying to improve people's health, there are so many "nanny state" red flags mentioned in the article I read, it's hard to know where to begin. For example, in talking about applying a risk score to patients, a chief clinical officer of analytics and outcomes for a healthcare provider explains how his company has plans to pass patient scores to doctors and nurses, who can then reach out to the most high-risk patients and suggest treatment before they fall ill. Exactly what does "reach out" involve? He is also quoted as saying, "What we are looking to find are people before they end up in trouble." What if that person doesn't want some bureaucrat to find them? What if they want to be left in control of their own medical care? As if in response to those questions, the officer goes on, "We are looking to apply this for something good."

That really says it all, doesn't it? What may seem to be "something good" is in reality a Pandora's box of unintended consequences, including, but not limited to, flagrant constitutional violations of people's privacy.

It is one thing to aggregate data and perform analytics in order to make assumptions about certain demographic groups. From a pure data science perspective, big data analytics can certainly provide some interesting information to support or refute a diagnosis, or to predict the success or failure rates of a particular treatment with regard to external stimuli. However, that's a far cry from using specific, detailed behavioral information about an individual and their purchases to formulate a medical "pre-treatment." One woman mentioned in the article, who has Type 1 diabetes, has received phone calls from her insurance company to discuss her daily habits. Do you want to have this conversation with some unknown person on the other end of the line at your insurance provider? This is outrageous and clearly falls into the "none of their business" category.

Today, credit card companies and retailers are able to sell your private information to data brokers. To be realistic, most of us know this happens on a daily basis. However, there seems to be an ethical line that is being crossed in the example above. How in the world would we even vet this data? Having information correlated across all these domains presents a clear and present threat to privacy, with minimal, if any, value added for the individual. Not only that, it presents opportunities for both governments and individuals to misinterpret people's data. For example, how can someone evaluate another person's smoking or drinking habits based upon their purchasing behavior alone? That there would be ample room for subjective analysis constitutes a significant threat to the consistency of these assessments.
While most people recognize that we will never again have the degree of privacy we had even just a few years ago, they probably don't understand the extent to which information is gathered about them in today's world. From cell phone call histories to camera snapshots to credit card records, there really is no such thing as privacy anymore. As we have seen with the recent Supreme Court decision on warrants and cell phones, the digital age means we need to rethink privacy and how we protect our personal data.

An article in the MIT Technology Review suggests that a code of ethics is needed to govern big data, outlining some thought-provoking tenets that should be adopted. Implementing such a framework would be difficult. Big data means big business and big money, after all. As a result, we have to ask ourselves two important questions: What is our privacy worth to us? And have we already crossed the point of no return?

By Dan Nieten, CTO, Red Lambda
What is spectrum sharing?

Virtualization has revolutionized IT operations in companies around the world. Functions that were previously hardware-based now exist in software, enabling multiple end-users to share a common hardware platform, leading to economies of scale and optimal use of assets. This provides significant flexibility and agility as needs change. The submarine cable industry is now looking to this technology to make more efficient use of cables.

Similar to the virtual machines in use in just about every IT shop around the globe, Spectrum Sharing utilizes virtualization to partition the optical spectrum in a submarine optical fiber pair among multiple end-users. Each end-user sees only its dedicated 'virtual' fiber pair, which is a subset of the overall spectrum of the same, shared physical fiber. Spectrum Sharing can work on standard C-band cables, as well as newer, wider-band cables supporting both C-band and L-band on the same cable. Virtualizing a fiber pair is more practical on newer, uncompensated submarine cables that support wider repeater bandwidth, thus yielding more available spectrum to partition among more end-users.

The shift to Spectrum Sharing is another step in the journey of the submarine industry, which has evolved from offering sub-lambda (electrical) services, to wavelength-based (all-optical) services, to an entire dark fiber pair. Spectrum Sharing enables end-users to buy or lease capacities greater than a few wavelengths, yet less than a full, and very expensive, fiber pair, which few end-users could afford or need.

In addition to enabling greater Submarine Line Terminating Equipment (SLTE) choice for end-users, a key benefit of Spectrum Sharing is the ability to take advantage of rapid advancements in SLTE modem technology. With Spectrum Sharing, end-users enjoy the flexibility to increase the capacity of their optical spectrum partition with upgraded SLTE technology at any point in the future. At the same time, cable operators face monetization opportunities and challenges, because they now market upgradeable THz rather than the fixed Tb/s they've been accustomed to selling for so long. This will require a change in service-provider point of view.

While providers focus on how to monetize this new approach, they can't lose sight of security, a critical concern for users. To securely and reliably implement Spectrum Sharing services, the underlying SLTE technology must incorporate effective optical power management to ensure changes that happen on one end-user's spectrum do not affect other end-users sharing the same fiber pair via leased channels or spectrum. Secure isolation of the multiple end-users sharing the same fiber pair must also be built into the SLTE so that one end-user never sees another's data. Vendors understand these absolute requirements and have built in the necessary safeguards to ensure security and privacy.

In short: Spectrum Sharing is the logical partitioning of optical spectrum on a submarine cable among different end-users, such that each end-user has its own 'virtual fiber pair.'

How Ciena helps

Ciena, with deep expertise in both terrestrial and submarine networks, provides the technology that enables Spectrum Sharing. With solutions from Ciena, cable operators can mix and match building blocks to create purpose-built network solutions that can utilize both the C-band and L-band.
While adding that flexibility, Ciena continues to drive change via Open Cables, so cable operators can choose best-in-breed SLTE and wet plant technology for optimized submarine networks.

Every day, submarine cables carry more than US$10 trillion in transactions: the very definition of critical infrastructure. Bandwidth consumption will grow at more than 40 percent CAGR over the next few years in all regions, so utilizing a solution like Ciena's GeoMesh Extreme enables submarine providers to get more out of their existing infrastructure. GeoMesh Extreme helps providers overcome the challenges of submarine networks with four categories of available building blocks, all of which can be mixed and matched to address specific business needs:

- Optical: Submarine WaveLogic Ai, Liquid Spectrum™, WaveLogic Photonics, C+L-band support, and Integrated Test Capabilities
- Switching: 5430 Packet-Optical Platform, 6500 Packet-Optical Platform, 8700 Packetwave® Platform, and OneConnect intelligent control plane
- Management: OneControl Unified Management System, Blue Planet V-WAN for agile connectivity, Blue Planet Manage, Control and Plan (MCP), Blue Planet Multi-Domain Service Orchestration (MDSO), and Blue Planet Analytics
- Services: Cloud-based SLA Portal, PinPoint Coherent Optical Time Domain Reflectometer (C-OTDR), Managed NOC, Network Health Predictor, Topology Discovery, and Alarm Correlation

How GeoMesh Extreme is being used and rolled out

Looking to utilize Spectrum Sharing on C-band and L-band, submarine cable operators have turned to GeoMesh Extreme for its unique architecture that leverages both submarine and terrestrial technologies. In addition, GeoMesh Extreme provides a wealth of other benefits to submarine networks, such as the analytics and machine learning capabilities that come with a Software-Defined Network (SDN). Other GeoMesh Extreme features and services include:

- SLA Portal, which dramatically improves customer satisfaction and retention by providing transparent visualization of service performance. Customers can self-diagnose network service health and verify SLA performance assurance.
- PinPoint C-OTDR, which provides visibility into the performance of multiple segments and systems of submerged plant. It also enables remote access to C-OTDRs in various sites from a centralized Network Operations Center (NOC)/data center.
- Ciena's Managed NOC services, which extend your customers' business with the networking skills and experience required to manage their network infrastructure, provision bandwidth growth, and minimize network downtime that impacts critical business processes.
- Network Health Predictor, which utilizes big data analytics to enable you to proactively identify and address areas where network issues and faults might occur.
- Topology Discovery, which ensures you can utilize the network to maximum capacity by revealing actual network connectivity, stitching circuits, and identifying stranded bandwidth.
- Alarm Correlation, which groups events to reduce the number of issues you need to investigate. Because it identifies related alarms and targets them simultaneously, you don't spend as much time troubleshooting.
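The economics of selling upgradeable THz instead of fixed Tb/s comes down to one relationship: the deliverable capacity of a spectrum partition is roughly its width multiplied by the modem's spectral efficiency, so the same leased partition gains capacity with each SLTE generation. The sketch below illustrates this; the partition width and spectral-efficiency values are assumptions for illustration, not vendor specifications.

```python
# Illustrative capacity of a leased spectrum partition:
#   capacity (b/s) ~ spectral width (Hz) x spectral efficiency (b/s/Hz)
# All numbers below are assumptions for illustration, not vendor specs.
def partition_capacity_tbps(width_ghz: float, se_bps_per_hz: float) -> float:
    return width_ghz * 1e9 * se_bps_per_hz / 1e12   # convert to Tb/s

width_ghz = 500.0   # assumed size of the leased partition

for label, se in [("older coherent modem (~2 b/s/Hz)", 2.0),
                  ("current modem (~4 b/s/Hz)", 4.0),
                  ("future upgrade (~6 b/s/Hz)", 6.0)]:
    print(f"{label}: {partition_capacity_tbps(width_ghz, se):.1f} Tb/s")
# Same 500 GHz partition: 1.0 -> 2.0 -> 3.0 Tb/s as modems improve.
```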
Two astronauts will walk in space today to upgrade the International Space Station's datacomms. Their efforts will mean that data collected in science experiments conducted aboard the ISS will no longer have to be sent to Earth on hard drives carried by returning astronauts.

The spacewalkers are expected to take six hours to install ColKa (Columbus Ka-band), a fridge-sized terminal funded by the UK Space Agency and built by MDA UK. The ISS Columbus module, launched in 2008, currently has lousy data comms to ground stations on Earth, hence the physical transfer of data by hard drive. However, arrival is contingent on the return schedule of the astronaut, which can mean many weeks' delay. With the new set-up, results are delivered to scientists a day or two after the data is recorded.

Data transmission is bi-directional but asymmetric: ColKa promises speeds of up to 50 Mbit/s in downlink and up to 2 Mbit/s in uplink. This will allow high-volume data downlink, including video streaming. Speed is limited by the ISS-Earth comms infrastructure components; the terminal itself is capable of up to 400 Mbit/s downlink and 50 Mbit/s uplink.

ColKa will send signals from the Station, which orbits at an altitude of 400km above Earth, even further into space, where they will be picked up by EDRS satellites in geostationary orbit 36,000km above the surface. From there, the data is transmitted to a ground station at Harwell Campus, Oxfordshire. The signals are then transferred to the Columbus Control Centre and user centres across Europe.
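A quick back-of-the-envelope calculation shows what the new link means in practice; the 50 GB experiment payload below is an assumed example size, not a figure from the mission.

```python
# Transfer time for an experiment data set over the ColKa downlink.
# The 50 GB payload is an assumed example size for illustration.
payload_bytes = 50 * 10**9     # 50 GB of experiment data (assumed)
downlink_bps = 50 * 10**6      # 50 Mbit/s ColKa downlink

hours = payload_bytes * 8 / downlink_bps / 3600
print(f"{hours:.1f} hours")    # ~2.2 hours, versus weeks waiting for a hard drive
```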
The Top 8 Ways AI is Changing the FoodTech Industry

Among the various manufacturing industries worldwide, the food technology industry is one of the most important. Artificial intelligence, machine learning, and deep learning have automated many of the complex processes that the industry once struggled with.

Fremont, CA: By increasing production and applying algorithms that boost sales, artificial intelligence and data science have improved the quality of restaurants, cafes, online food delivery systems, and food outlets.

Top 8 Artificial Intelligence Innovations in the FoodTech Industry

Protein Substitutes: The primary sources of alternative proteins currently available are cultured meats and plants. They are not only nutrient-dense but also use fewer resources. These products also reduce overall consumption costs, as they are needed only to meet specific dietary requirements and support health monitoring.

3D Food Printers: 3D food printers enable personalized diets and alternative protein-based meals that provide our bodies with proper nutrition. Although material extrusion is one of the most common food printing methods, companies have begun to develop food products using laser printing, inkjet food printing, and bioprinting methods.

Smart Food-Waste Trackers: According to studies, much of the food produced globally is lost or wasted. In response, recent AI innovations have produced technological solutions to reduce and track food waste globally. Food producers, restaurants, hotels, and smart cities can use these monitoring solutions to reduce food waste.

Automated Kitchens: The concept of using robots in the kitchen may seem far-fetched, but it is now a reality. In recent years, restaurants have opted for mechanical tools and machines to alleviate work pressure. A restaurant or any kitchen that uses robots to cook and prepare food is called an automated kitchen.

Autonomous Food-Serving Robots: A food-serving robot serves food and beverages autonomously. These robots can help waiters carry dishes and offer creative ways to keep diners happy. This innovation has greatly benefited restaurants and hotels.

Nutraceuticals: The emergence of the coronavirus pandemic has caused people to place a greater emphasis on personal nutrition and healthy eating habits, elevating nutraceuticals to the forefront of food industry innovation. Scientists studying nutraceuticals have proposed that these products provide health benefits against oxidative stress-related disorders such as allergies, diabetes, and immune diseases.

Forward Osmosis: Osmosis has several functions applicable to food preservation. Forward osmosis is a promising membrane technology widely used in the food industry for liquid foods.

Ghost Kitchens: Ghost kitchens are food-prep operations or restaurants that have no waiters, dining rooms, takeaway counters, or other public presence; their food can only be ordered online. On food delivery apps, ghost kitchens generally appear as regular restaurants.
What Is IoT Security?

IoT security is the act of securing Internet of Things devices and the networks they're connected to from threats and breaches: protecting, identifying, and monitoring risks while helping to fix the vulnerabilities in the range of devices that can pose security risks to your business.

What are the Biggest Challenges with Internet of Things Security?

Along with understanding "what is IoT security," it's important to note the biggest challenges facing IoT security. IoT devices were not built with security in mind, leading to potential vulnerabilities in multi-device systems. In the majority of cases, there is no way to install security software on the device itself. In addition, devices sometimes ship with malware on them, which then infects the network they are connected to. Finally, some network security tools cannot detect the IoT devices connected to the network and lack the visibility to know which devices are communicating through it.

How can today's IoT information security requirements be addressed?

IoT security requirements can only be met with an integrated solution that delivers visibility, segmentation, and protection throughout the entire network infrastructure, such as a holistic security fabric approach. Your solution must have the following key abilities:

- Learn: With complete network visibility, security solutions can authenticate and classify IoT devices to build a risk profile and assign them to IoT device groups.
- Segment: Once the enterprise understands its IoT attack surface, IoT devices can be segmented into policy-driven groups based on their risk profiles.
- Protect: The policy-driven IoT groups and internal network segmentation enable monitoring, inspection, and policy enforcement based on the activity at various points within the infrastructure.

How Fortinet Can Help

The number of IoT devices being deployed into networks is growing at a phenomenal rate: up to 1 million connected devices each day. While IoT solutions enable new and exciting ways to improve efficiency, flexibility, and productivity, they also bring new risk to the network. Frequently designed without security, IoT devices have become a new threat vector for bad actors to use when launching attacks. We have already seen several attacks leveraging these distributed, seemingly innocent devices.

To provide protection in the age of IoT, network operators need the tools and skills to:

- See and profile every device on the network, to understand what IoT devices are being deployed
- Control access to the network, both connecting to the network and determining where devices can access
- Monitor the devices on the network to ensure that they are not compromised and to take automatic and immediate action if they are

Fortinet provides these capabilities through our Network Access Control (NAC) product, FortiNAC. Fully integrated into the Security Fabric, FortiNAC delivers the visibility, control, and automated response needed to provide security in a world of IoT devices.
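The learn/segment/protect flow described above can be pictured as a simple policy table: classify each discovered device into a risk-profile group, then derive its network segment and monitoring level from that group. The following is a conceptual sketch only, not Fortinet's implementation or API; the device attributes and group names are invented for illustration.

```python
# Conceptual sketch of policy-driven IoT grouping (not any vendor's actual API).
# Device attributes and group rules below are invented for illustration.
from dataclasses import dataclass

@dataclass
class Device:
    mac: str
    kind: str        # e.g. "camera", "hvac", "badge-reader"
    patchable: bool  # can it receive security updates?

def assign_group(dev: Device) -> str:
    """Map a classified device to a risk-based segment."""
    if not dev.patchable:
        return "high-risk-quarantine"   # tightest segment, full inspection
    if dev.kind in {"camera", "badge-reader"}:
        return "building-systems"       # restricted east-west traffic
    return "general-iot"                # default monitored segment

inventory = [Device("aa:bb:cc:01", "camera", False),
             Device("aa:bb:cc:02", "hvac", True)]
for d in inventory:
    print(d.mac, "->", assign_group(d))
```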
Common sense tells us that someone is monitoring every security camera we notice; if not in real time, then by reviewing CCTV recordings later. The truth is, we are seeing more and more systems that might not have a person behind them at all. As more surveillance systems rely on artificial intelligence, we will be able to reduce the number of trained security personnel involved in the security industry while still increasing the likelihood of spotting a crime or security breach. What are the main advantages of systems that utilize artificial intelligence, and why are they becoming so prominent? Read on to learn more!

How Artificial Intelligence-Based Security Systems Work

Some artificial intelligence-based security systems use a technique known as machine learning. This allows the cameras to learn and memorize what is normal for a scene and then send alerts based on abnormal activities. It is also possible to program in specific triggers for the system; for example, a retail store might want its security system to check for loiterers and receive alerts whenever somebody is loitering. Other smart security systems, like Gatekeeper's intelligent vehicle occupant detection system, are capable of comparing what they see with a reference image to accurately determine whether or not it is a close match.

Why Count on Artificial Intelligence?

Artificial intelligence systems offer some profound benefits. For example, they can operate around the clock without ever getting tired or taking breaks, which means you get more reliable coverage whenever you need it. They are also able to analyze vast amounts of footage through complex algorithms, which allows them to catch details that the human eye could miss. These systems also allow for real-time response, whereas CCTV footage is often used only as evidence much later on; systems that use artificial intelligence might be able to help stop a crime while it is occurring.

As with many new technologies, there are worries about artificial intelligence-based cameras. Chief among them is the need to keep improving machine learning, ensuring that these setups send the correct alerts to security personnel and do not overlook something that might be dangerous.

Groundbreaking Technologies with Gatekeeper

Gatekeeper Security's suite of intelligent optical technologies provides security personnel with the tools to detect today's threats. Our systems help those in the energy, transportation, commercial, and government sectors protect their people and their valuables by detecting threats in time to act. From automatic under-vehicle inspection systems and automatic license plate reader systems to an on-the-move automatic vehicle occupant identifier, we offer full 360-degree vehicle scanning to ensure any threat is found. Throughout 36 countries around the globe, Gatekeeper Security's technology is trusted to help protect critical infrastructure. Follow us on Facebook and LinkedIn for updates about our technology and company.
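As a toy illustration of the "learn what is normal, alert on the abnormal" idea behind these systems, the sketch below keeps a running baseline of a per-frame activity score and flags frames that deviate sharply from it. Real systems learn rich visual features rather than a single number; the scores and threshold here are invented for illustration.

```python
# Toy "learn normal, flag abnormal" monitor. Real systems learn visual
# features; here a single activity score per frame stands in for them.
from collections import deque

class ActivityMonitor:
    def __init__(self, window: int = 100, threshold: float = 3.0):
        self.history = deque(maxlen=window)  # recent "normal" scores
        self.threshold = threshold           # sigmas of deviation before alerting

    def observe(self, score: float) -> bool:
        """Return True if this frame's activity looks anomalous."""
        if len(self.history) >= 10:
            mean = sum(self.history) / len(self.history)
            var = sum((x - mean) ** 2 for x in self.history) / len(self.history)
            std = var ** 0.5 or 1e-9
            if abs(score - mean) / std > self.threshold:
                return True      # alert; keep anomalies out of the baseline
        self.history.append(score)
        return False

monitor = ActivityMonitor()
frames = [1.0, 1.1, 0.9, 1.0, 1.2, 0.8, 1.0, 1.1, 0.9, 1.0, 9.5]
for score in frames:
    if monitor.observe(score):
        print("ALERT: abnormal activity score", score)
```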
As author and Virtual Capital Ventures partner William Mougayar has pointed out, the old question "is it in the database?" could soon be replaced by "is it in the blockchain?" Blockchain has moved on from its bitcoin origins and is making inroads into non-fiscal applications, including cyber security itself. Technology providers and early adopters, as well as the growing army of cryptocurrency investors, have put the spotlight on this distributed security technology. But just how secure is blockchain, and the cryptocurrencies that depend on it? In this article I want to look at the technology and its uses, as well as some of the ways that cryptocurrencies have been stolen. Finally, we will also look at some of those aforementioned non-fiscal applications to get a measure of blockchain's security away from cryptocurrency.

Satoshi Nakamoto devised blockchain in 2008 in order to support bitcoin, the first cryptocurrency. The key business problem was to prevent a currency holder from spending their cash more than once. The technology he envisaged would allow the community to supervise transactions. Once approved, a sequence of unique transactions would be frozen in a block, and subsequent blocks would link back to it, forming a secure chain (hence the term 'blockchain' was born). Blockchain is a replicated database, then, which allows consensus on transactions without reliance on a central server. Being both distributed and decentralised means it can continue to function if one of the nodes fails, making it robust in the face of tampering, security vulnerabilities or coordinated cyber attacks.

Blockchain is now being used to record transactions far beyond its original purpose. For example, the Swedish land registry authority has been testing the use of blockchain for recording property transactions, the perceived benefit being to make the process more efficient, particularly by eliminating paperwork while improving security.

Despite being underpinned by blockchain, cryptocurrencies like bitcoin aren't invulnerable to hacks and theft. Below are some examples of high-profile cases.

Coincheck

In January this year, $500M was stolen from the Japanese cryptocurrency exchange service Coincheck. It was later reported that cryptocurrency "coins" had been stored in a "hot" wallet. Whereas most exchanges hold funds offline, a very basic security precaution, Coincheck had opted against doing so. The theft was carried out through the acquisition of a user's private key, resulting in calls for greater use of multi-signature (multisig) transactions. However, even multisig is prone to manipulation, as Bitfinex found when it lost $65M in 2016.

NiceHash

In December 2017, the Slovenian-based crypto-mining marketplace NiceHash was hacked and more than $60M was stolen from the NiceHash bitcoin wallet. In the attack, the stolen coins were moved to an external wallet. No clear explanation for the theft has been reported; the consensus on the forums is that this was an inside job, and blame falls on incompetence within the company.

Magic: The Gathering Online Exchange (Mt. Gox)

The biggest hack in bitcoin history took place in 2014 and hit Mt. Gox, the Tokyo-based bitcoin exchange and the largest in the world at the time (the company handled 70% of all bitcoin transactions worldwide). In 2014 the company revealed that 600,000 bitcoins (about 6% of the global supply) had been stolen, worth around $460 million at the time. 200,000 of the coins have since been recovered, but the rest remain missing.
A number of reasons have been put forward as to why Mt. Gox was susceptible to attack, but it was clear that even basic software development procedures were not implemented. These included:
- Multiple developers were working on the code, but there was no version control for the software
- There was no prescribed testing policy (or at least not until it was too late)
- Code changes had to be approved by a single signatory (the CEO)
Investigations have found that the company's huge and rapid growth had caused internal issues, with staff talking of a "disorganized and discordant organization, with poor security procedures."

Ethereum DAO Hack
Another eye-wateringly huge theft was from Ethereum, an alternative cryptocurrency. Unlike bitcoin, Ethereum is used to run decentralised applications, with investors buying tokens in return for "Ether" and developers paying for services to fund the platform. In 2016 a hacker stole around $50M from the fund by exploiting a loophole in the smart contract software, known as the Decentralized Autonomous Organization (DAO). The hack involved repeatedly exchanging the same tokens for Ether before each transaction could be registered. There has been a lot of commentary on the Ethereum hack and on the wisdom of the DAO (which is stateless and therefore not prone to government regulation). Lessons have been learnt, but the hack shows the vulnerability of such systems.

The vulnerability of blockchain is perhaps not of huge concern to those who don't have large amounts of money tied up in cryptocurrency, but when we see it being used for non-fiscal applications, its security should be something that concerns all of us. There are a growing number of non-fiscal applications for blockchain. Guardtime Federal is a leader in the use of blockchain for cybersecurity, and last year Lockheed Martin, the world's biggest defence contractor, announced a partnership with Guardtime Federal as part of its security approach. Similarly, Verizon has this year opted to implement Guardtime's KSI blockchain capabilities. REMME uses blockchain and sidechain technology to implement a distributed and highly secure database. Clients such as energy suppliers and health information services use its authorisation platform to manage access control while defending against cyber attacks. These are not cryptocurrency applications, but they manage highly sensitive data and critical access control and, as such, need to provide the highest possible resilience to cyber attacks. As citizens who may rely directly or indirectly on these services, we can only hope that these organisations are carrying out due diligence, not only on the technology but also, importantly, on the supplier and its internal procedures.

Whilst the various cryptocurrency hacks are clearly cause for concern, it seems clear that none were caused by systemic faults in the underlying blockchain technology they rely on. Rather, design issues, lack of process, human error and weak senior management have left these blockchain systems open to attack. There are lessons to be learnt here, and if blockchain is to become truly trusted by all of us, then we will need a new generation of cyber security experts to build the protocols and processes that will ensure its proper use. As Tim Berners-Lee once said, "we can't blame the technology when we make mistakes."
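To make the block-linking idea described above concrete, here is a minimal, purely illustrative Python sketch (not any real blockchain implementation): each block stores the hash of its predecessor, so tampering with a historical transaction changes that block's hash and breaks every link after it.

import hashlib
import json

def block_hash(block):
    # Serialize the block deterministically, then hash it.
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def make_block(transactions, prev_hash):
    # Each block records the hash of the block before it.
    return {"transactions": transactions, "prev_hash": prev_hash}

def verify_chain(chain):
    # Valid only if every stored prev_hash matches the recomputed
    # hash of the preceding block.
    return all(curr["prev_hash"] == block_hash(prev)
               for prev, curr in zip(chain, chain[1:]))

genesis = make_block(["alice pays bob 1"], prev_hash="0" * 64)
chain = [genesis, make_block(["bob pays carol 1"], block_hash(genesis))]
print(verify_chain(chain))                        # True
genesis["transactions"] = ["alice pays bob 100"]  # tamper with history
print(verify_chain(chain))                        # False: the chain is broken

A real system adds consensus rules (such as proof of work) on top of this linking, which is what lets a decentralised community agree on which chain is authoritative.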
<urn:uuid:fea17368-b885-40d0-a49e-18a08835734c>
CC-MAIN-2022-40
https://www.cybersecurityjobs.com/how-secure-is-cryptocurrency-and-blockchain/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334644.42/warc/CC-MAIN-20220926020051-20220926050051-00584.warc.gz
en
0.964101
1,394
2.875
3
Every day, leaders of large cities grapple with knotty, complex problems like decaying public transportation infrastructures, aging utility lines, urban blight, neighborhoods that are vulnerable to the effects of climate change, and other multi-faceted socio-economic challenges. Increasingly, municipal leaders are turning to urban analytics, data collection, and advances in sensor technology to help solve the problems of modern cities in bold, transformative ways. So-called smart city initiatives are getting lots of attention in the marketplace as well as from the federal government. Many visionaries have asserted the transformational power of the Internet of Things, marked by the increasing ubiquity of sensors that collect and in some cases share or communicate data that can be used in almost infinite ways. Just last month, the Obama White House announced more than $80 million in federal investment in smart city program incentives emphasizing features related to climate, transportation, public safety and innovative delivery of city services.

According to a White House report released in February 2016, "Information and communication technologies (ICT), the proliferation of sensors through the Internet of Things, and converging data standards are… combining to provide new possibilities for the physical management and the socioeconomic development of cities. Local governments are looking to data and analytics technologies for insight and are creating pilot projects to test ways to improve their services," the report states.

"Even though the applications are complex and varied, the goal is simple: whether it's water or energy, commuter time or taxpayer's money, better data collection and use of information can help us build and adapt systems that use our resources much more wisely than we have in the past," said John West, SC16 General Chair from the Texas Advanced Computing Center. "In many ways, we are at the leading edge of a new era in city design, and we need massive programming acumen and computing power to help bring it to fruition," according to West. "Smart city initiatives are highly integrated and complex problems to solve – exactly the kind of challenges that we HPC systems experts are equipped and excited to support."

As old systems become obsolete, the most visionary urban planners are taking the opportunity to design the future, not just repeat and rebuild the past. So how are we making our cities smarter? Here are just a few examples:

Automation of code inspection functions
Imagine if all the aging bridges in a city were equipped with sensors that measure and transmit their "shake" data to the postal truck that travels over them each day, and that data were then collected and used to make decisions about which bridges take first priority for repair or replacement. Similar systems could help with pavement crack/pothole detection and other types of urban blight indicator tracking. Trial projects of this kind are underway in Illinois, Pennsylvania and Maryland.

Resource and climate tracking
Streetlights use significant energy and are a source of light pollution, both of which can be mitigated by incorporating LED lights equipped with sensors that allow them to operate specifically how and when they are needed. Sensors placed inside water pipes can detect volumes and patterns of usage, helping utilities and consumers plan, shift and anticipate.
Sensors in flood-prone areas could give advance warning of damaging flood conditions before they have developed to the point where they impact public safety, creating an early warning system for flash floods (a toy version of such an alert appears at the end of this article). Other systems are being tested to monitor air quality throughout a city more comprehensively and to automate the process of pinpointing the sources of damaging pollution.

Enabling transportation improvement and reinvention
Moving people around is a hot spot of potential for the future of urban centers. Smart cities aren't just about the sensors that have proliferated with the Internet of Things. They also feature adaptations like bike loan programs with accompanying apps for end users to maximize green transportation; fresh approaches to upgrading bus systems with dedicated lanes and loading zones in urban centers and, yes, mobile phone apps to help users maximize their use of public transportation; and even brand new species of clean, efficient public transportation, like closed-loop driverless systems that can operate safely and quietly in a wide range of weather conditions, independent of human intervention.
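As promised above, here is a hedged Python sketch of the flood early-warning idea; the gauge readings, thresholds, and units are invented for illustration and are not drawn from any deployed system. It raises a warning only when a river gauge is both above a danger level and rising quickly.

# Hypothetical (minute, water_level_cm) readings from one river gauge.
READINGS = [(0, 110), (10, 112), (20, 118), (30, 131), (40, 150)]

LEVEL_ALERT_CM = 140        # illustrative danger level
RISE_ALERT_CM_PER_HR = 60   # illustrative rate-of-rise threshold

def flood_alerts(readings):
    alerts = []
    for (t0, l0), (t1, l1) in zip(readings, readings[1:]):
        rise_per_hr = (l1 - l0) * 60.0 / (t1 - t0)
        if l1 >= LEVEL_ALERT_CM and rise_per_hr >= RISE_ALERT_CM_PER_HR:
            alerts.append((t1, l1, rise_per_hr))
    return alerts

for minute, level, rise in flood_alerts(READINGS):
    print(f"t+{minute}min: {level}cm, rising {rise:.0f}cm/hr -> issue flash-flood warning")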
<urn:uuid:7dabc05c-0d6c-4b26-bf17-a8e1bf2af5b0>
CC-MAIN-2022-40
https://www.helpnetsecurity.com/2016/10/28/smart-city-initiatives/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335004.95/warc/CC-MAIN-20220927100008-20220927130008-00584.warc.gz
en
0.939926
853
3.046875
3
COBOL, arguably one of the earliest of the third-generation programming languages, is still used extensively. In particular, IBM mainframe platforms are still home to a significant number of applications originally developed in this language that continue to power many Fortune 500 companies today. Furthermore, several shrink-wrapped software packages originally implemented decades ago in COBOL are still used, e.g., PeopleSoft, emPath, Logo, Lawson ERP, etc. These facts are more a function of history than any widespread acceptance that COBOL provides the basis for modern computing applications.

First developed in 1959 for the U.S. Department of Defense, building heavily on Grace Hopper's earlier work, the language was intended to provide a portable and more easily readable programming language that would work across multiple platforms. While COBOL is often criticized for being overly verbose, its syntax enabled many programmers to understand it simply by virtue of their command of the English language. In contrast, other programming languages required developers to learn a new syntax. Without debating the merits of either approach, suffice it to say that COBOL became the lingua franca for many platforms. Its acceptance by IBM as the primary language for applications on its mainframe platform cemented its place in the annals of computer history.

COBOL's Continued Relevance
For several decades, the amount of COBOL used by organizations around the world was estimated to be approximately 200B lines of code (LOC). This number never had any real facts to back up the assertion. Still, it became the accepted estimate, no matter how unreliable it may have been. In a recent Micro Focus survey, the amount of COBOL in the marketplace was estimated to be an astounding 775-800B LOC. While their interpretation that this shows COBOL's usage to be "still growing" is a dubious, self-serving conclusion, it indicates a significant market opportunity for COBOL modernization. The real question is whether the desire for modernization is equally as significant!

COBOL, Mainframes, and the Cloud
There is little argument that many business-critical applications originally developed in the COBOL language continue to run commercial and government organizations around the world, particularly those on IBM mainframes. There is also little debate that the amount of interest and financial investment in both public and private cloud platforms dwarfs the investment in mainframe hardware and software or applications. The technology of these cloud platforms is dramatically different from that of the mainframe, and the COBOL language is a non-player in these environments. As organizations wish to leverage the benefits of the cloud, they are forced to consider what to do with their existing COBOL-based application portfolios. There ARE options to move these COBOL applications to the cloud using solutions from LzLabs or Micro Focus. These applications will run, and there ARE benefits to those approaches, but they do not leverage the real power of cloud architectures.

Additional Challenges of COBOL Modernization
The modernization of the COBOL programming language is only part of the problem when attempting to leverage the cloud. Transforming the syntax of COBOL to that of another procedural language is readily automated (a toy example appears at the end of this piece). The problem is more complex when an architectural change drives application modernization to an object-oriented language.
Many applications are tightly bound to the technological dependencies of the platform upon which they run, particularly the IBM mainframe. These dependencies include data typing, data/file structures, runtime APIs, and pre-relational DBMS navigation. Therefore, converting a COBOL program involves more than transforming the language – it also requires a shift away from the dependencies on the underlying platform runtime software.

These challenges have caused organizations to embrace the easiest and most common strategy for application modernization – PROCRASTINATION! Organizations continue to depend on their COBOL programs. It's difficult for many CIOs to accept the risks associated with dramatic changes to the technological underpinnings of their applications and, in particular, those that are characterized as "mission-critical." Yet the risks of inaction are NOT zero, and time is not the friend of the procrastinator. Arguably, the risks continue to grow in the face of declining skills availability, the exponential pace of digital transformation options, and the new powers of modern computing platforms. This will be the discussion of subsequent writings in this series on COBOL modernization.

– Guest content from Dale Vecchio, Mainframe Modernization Thought Leader

Find Out More: watch "A Modernization Dilemma: Cloud Application – Or An Application In The Cloud"
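As the toy example promised above, here is a hypothetical Python sketch of a rule-based converter for two simple COBOL statement shapes; everything in it is invented for illustration. Statements tied to platform dependencies (file I/O, DBMS navigation) fall through untranslated, which is exactly where the real modernization work lies.

import re

# Illustrative rules mapping two COBOL statement shapes to Python.
RULES = [
    (re.compile(r"ADD ([\w-]+) TO ([\w-]+)", re.I), r"\2 += \1"),
    (re.compile(r"MOVE ([\w-]+) TO ([\w-]+)", re.I), r"\2 = \1"),
]

def transpile_line(cobol_line):
    stmt = cobol_line.strip().rstrip(".")
    for pattern, template in RULES:
        if pattern.fullmatch(stmt):
            # COBOL names allow hyphens; Python identifiers do not.
            return pattern.sub(template, stmt).replace("-", "_").lower()
    return f"# UNTRANSLATED: {stmt}"  # platform-dependent statements land here

for line in ["MOVE TOTAL TO GRAND-TOTAL.", "ADD 1 TO COUNTER.", "READ CUST-FILE."]:
    print(transpile_line(line))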
<urn:uuid:d09f192b-2749-4fce-8962-4f6ffcef9d31>
CC-MAIN-2022-40
https://cloudframe.com/the-cobol-situation-so-much-code-so-little-modernization/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335276.85/warc/CC-MAIN-20220928180732-20220928210732-00584.warc.gz
en
0.928942
951
3.09375
3
By familiarizing yourself with the following software, you will not only have a better understanding of the vulnerabilities inherent in 802.11 networks, but you will also get a glimpse of how a hacker might exploit them. These tools can even be used when auditing your own network, as we will see later. Most serious hackers and network auditors use the open-source operating system Linux as the platform from which they launch attacks and perform analysis. This section highlights some of the more popular tools, mostly for Linux, that can be used to search out and hack wireless networks.

The home page for the free cracking application AirSnort plainly states, "AirSnort is a wireless LAN (WLAN) tool which recovers encryption keys." AirSnort operates by passively monitoring transmissions, computing the encryption key when enough packets have been gathered. In even more simplistic terms, AirSnort is a program that listens to the wireless radio transmissions of a network and gathers them into a meaningful form. After enough time has passed (sometimes a matter of hours) and data are gathered, analytical tools process the data until the network security is broken. At that point, everything that crosses the network can be read in plain text. The authors of this fully functional encryption-cracking tool have maintained from the first days of its release that it would expose the true threats of WEP encryption. Jeremy Bruestle, one of the two lead programmers for the project, has truly recognized the inherent dangers of WEP. He stated in a 2001 interview, "It is not obvious to the layman or the average administrator how vulnerable 802.11b is to attack. It's too easy to trust WEP." AirSnort is not the only open-source tool used for wireless cracking, but it was the first publicly recognized freeware to put the power of a skilled criminal into the hands of any neighbor who just got the cheapest deal from the local ISP.

WEPcrack, developed simultaneously alongside AirSnort, is another wireless network cracking tool. It too exploits the vulnerabilities in the RC4 algorithm, which underpins the WEP security parameters. While WEPcrack is a complete cracking tool, it is actually composed of three different hacking applications, all based on the Perl language. The first, WeakIVGen, allows a user to emulate the encryption output of 802.11 networks to weaken the secret key used to encrypt the network traffic. The second, prism-getIV, analyzes packets of information until it ultimately matches patterns to those known to leak the secret key. Third, the WEPcrack application pulls the outputs of the other two together to decipher the network encryption.

Kismet is an extremely useful tool that supports more of an intrusion detection approach to wireless security. However, Kismet can be used to detect and analyze access points within range of the computer on which it is installed. Among many other things, the software will report the SSID of the access point, whether or not it is using WEP, which channels are being used, and the range of IP addresses employed. Other useful features of Kismet include de-cloaking of hidden wireless networks and graphical mapping of networks using GPS integration.

Ethereal is a pre-production network capture utility. Currently capable of identifying and analyzing 530 different network protocols, Ethereal can pose a substantial threat through the discovery and detection of any network communication.
One of many network analyzers, this application arguably does the most comprehensive job of seeing and recognizing everything that goes by its sensor.

Known as a packet injection/reception tool, Airjack is an 802.11 device driver designed to be used with a Prism network card (mainly Linux hardware). Other names include wlan-jack, essid-jack, monkey-jack, and kracker-jack. This tool was originally used as a development tool for wireless applications and drivers to capture, inject, or receive packets as they are transmitted. It is a fundamental tool used in DoS attacks and man-in-the-middle attacks. Its capabilities include injecting data packets into a network to wreak havoc on the connections between wireless nodes and their current access point. A common hacking use for this tool is to kick everyone off an access point immediately, and keep them logged off for as long as you like. Because 802.11a/b/g networks lack frame-level authentication of management frames, a computer running Airjack can passively assume the identity of an access point and then, once inside the channel of communication between node and AP, begin sending disassociate or deauthenticate frames sequentially at a high rate. The users' network cards interpret these as coming from their AP, and they drop their connections. (A defensive sketch for spotting this kind of deauthentication flood appears at the end of this article.)

HostAP is really nothing more than a driver that allows Prism cards to act as an access point in any environment. With multiple scanning, broadcasting, and management options, HostAP can lure disconnected clients into a connection with the HostAP user's computer and engage in whatever activities suit that situation. This is a very common tool with growing compatibility; it may be ubiquitous with any open-source OS in the near future.

Dweputils is not one application but a set of applications that together pose a larger threat to wireless networks of any character. Dweputils is a set of utilities that can completely inspect and lock down any WEP network. Dwepdump is a packet-gathering tool, which provides the ability to collect WEP-encrypted packets. Dwepcrack then gives you the power to deduce WEP keys with a variety of frequently employed techniques. Finally, dwepkeygen, a 40-bit key generator, can create keys that aren't susceptible to the Tim Newsham 2^21 attack, using a variable-length seed.

AirSnarf is an access point spoofing tool based on the simplest way to dupe users into handing over their sensitive information to rogue hackers. Quite simply, this application mimics a legitimate access point. The method of attack boils down to recreating a logon web page identical to the one that would normally be displayed by the AP. The user is bumped off the network and forced to log in again, or is caught before they log in the first time. This simple trick convinces them to voluntarily send their login information to the hacker, who can then use it at their leisure. It is extremely simple yet effective. All the details of the AP connection appear legitimate to the unsuspecting user within their network configuration. In some cases they never realize this has happened, as the attacker then authenticates them to the network and allows them to pass through the attacker's computer.

NetStumbler is the primary tool available for Windows users to detect 802.11 networks. It does not have any cracking tools inherent in the software package, but it can be used in conjunction with numerous other tools to find and hack a wireless network.
NetStumbler is perhaps the least dangerous application discussed here, but the first challenge of any hack is finding where and what you are hacking.

Also referred to as the "aRe yoU There" network tool, THC-RUT combines detection, spoofing, masking, and cracking into the same tool. Many see it as the "first knife used on a foreign network," boasting brute-force, all-in-one capabilities. Resources in the tool include spoofing Dynamic Host Configuration Protocol (DHCP), Reverse Address Resolution Protocol (RARP), and Bootstrap Protocol (BOOTP) requests.

Hotspotter is another rogue access point tool that can mimic any access point, dupe users into connecting, and authenticate them with the hacker's tool. This, again, is done with a deauthenticate frame sent to an MS Windows XP user's computer that causes the victim's wireless connection to be switched to a non-preferred connection, a.k.a. a rogue AP. The passive variant of this trick simply listens for the probe frames sent by any Windows XP machine looking for its preferred network, which contain exploitable information.

LEAP stands for Lightweight Extensible Authentication Protocol, which is intellectual property of Cisco Systems, Inc. This broadly used protocol for authentication on Cisco access points has inherent weaknesses. ASLEAP uses hashing algorithms to mount brute-force attacks that recover passwords, and it can actively deauthenticate users from the AP, forcing them to reauthenticate quickly to expedite the attack. This is another tool in the arsenal of hackers, with an ever-shrinking learning curve.

IKECrack is an open-source IKE/IPSec authentication cracking tool. It uses brute-force, dictionary-based attacks to search for password and key combinations on Pre-Shared-Key (PSK) authentication networks. Through repeated authentication attempts with candidate passphrases or keys, this cracking tool undermines the latest WiFi security protocols.

Copyright 2005 Bradley Morgan, invulnerableit.com
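On the defensive side, the deauthentication floods described above (Airjack, Hotspotter, ASLEAP) are comparatively easy to spot, because a healthy network rarely emits bursts of deauth frames. Here is a minimal monitoring sketch using the Scapy packet library; the interface name is a placeholder for a card in monitor mode, and the 30-frames-per-10-seconds threshold is an illustrative assumption, not an established standard.

import time
from collections import deque
from scapy.all import sniff, Dot11Deauth  # requires Scapy and root privileges

WINDOW_SECONDS = 10
THRESHOLD = 30        # illustrative: alert on >30 deauth frames per window
recent = deque()      # timestamps of recently observed deauth frames

def handle(pkt):
    if not pkt.haslayer(Dot11Deauth):
        return
    now = time.time()
    recent.append(now)
    while recent and now - recent[0] > WINDOW_SECONDS:
        recent.popleft()
    if len(recent) > THRESHOLD:
        print(f"ALERT: {len(recent)} deauth frames in {WINDOW_SECONDS}s "
              f"(last sender {pkt.addr2}) - possible deauth flood")

# "wlan0mon" is a placeholder for a wireless interface in monitor mode.
sniff(iface="wlan0mon", prn=handle, store=False)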
<urn:uuid:6cf0c5e7-8cd1-4b12-a20d-d7608f18dae2>
CC-MAIN-2022-40
https://it-observer.com/wireless-cracking-tools.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335276.85/warc/CC-MAIN-20220928180732-20220928210732-00584.warc.gz
en
0.928983
1,890
2.734375
3
As part of sustainability imperatives and growing concerns about degrading air quality, many cities are accelerating programs to boost the adoption of transportation electrification, in partnership with electric utilities and EV OEMs. While electrification strategies and EV targets have already been formulated at a national level by many countries, many cities across Europe are introducing emission zones, initially aimed at banning older diesel vehicles but ultimately expected to culminate in zero-emission zones and EV-only city centers. This report provides detailed insight into electrification market trends, regulation, adoption barriers, technologies, and key players. In particular, the issue of putting in place an adequate EV charging infrastructure is discussed in terms of funding, the deployment of fast and wireless charging stations, the integration of micro-grids, capacity requirements of public grids, and V2G load-balancing technologies. Electrification adoption forecasts for both EVs and charging stations are also included, in terms of both vehicles and mileage.
<urn:uuid:20919f16-7d98-46a5-9176-866769c3e2f3>
CC-MAIN-2022-40
https://www.abiresearch.com/market-research/product/1027475-smart-cities-and-transportation-electrific/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335276.85/warc/CC-MAIN-20220928180732-20220928210732-00584.warc.gz
en
0.94159
187
2.71875
3
Data drives modern society. There isn't any doubt about it, but if there were, you could just look at today's headlines about stolen data or massive data breaches at some of the biggest companies in the world. Since data drives the world, it stands to reason that data security is (or should be) a primary concern for most businesses. But think about this: when you take pictures on your phone or shop online, you are exposing your data to theft or ransom. That isn't meant to scare you, just to emphasize that no one is outside the need to protect their data. In this article, we are going to talk about data safety, specifically by having a backup solution. Whether it is a local hard drive or a fully decked-out cloud backup subscription, there is a plan for you.

Why Do You Need a Backup Solution?
Why do you need a backup solution? Let us count the ways…
- Data loss happens. Even under the best of circumstances, data can get lost. Hard drives fail, good data gets overwritten by bad data, or unforeseen issues can arise to make data inaccessible. The tricky thing about data loss is that once it's gone, it's probably gone for good unless a trained IT expert can retrieve the data from a damaged or defunct drive.
- Data theft happens. Even the most secure and hardened data storage solutions can be cracked, leaving your existing data easy pickings for hackers. More importantly, stolen data can be wiped or corrupted easily, depending on what the intruder wants to do. More recently, methods of holding data hostage through encryption have raised awareness of the need for backups. These methods typically involve a hacker putting encryption software on an unsuspecting computer (or even a series of computers on a network) and encrypting the main hard drive. The thing about encrypted data is that it is completely unreadable without the encryption key. So the hackers encrypt the data with a key and threaten to delete that key if they aren't paid a ransom. In these cases, the data is rendered useless and, unless the owner has a backup, it's simply gone.
- Hardware isn't perfect. While it typically happens on a scale of years or decades, hard drives can fail. And once they fail, it could take an experienced computer forensics expert to retrieve data contained on the drive, if they even can.
These different scenarios are applicable in both residential and commercial contexts. This means that data backup is essential, no matter if you use a computer for mission-critical business data or if you're just storing pictures on a home computer.

What Kind of Data Backups Can I Use?
Typically, there are two primary forms of data backups that you can use to keep your data safe.
- Local backup storage uses local media to store data. Crude forms of local storage can be CD-R media or external hard drives, while more advanced solutions automate data backups to local server machines or network-attached storage (a toy version of such automation appears at the end of this article).
- Cloud backup copies data to an external server, allowing accessibility through cloud software.
Each version has its own advantages and disadvantages, and each provides options for addressing your storage needs.

Local Storage: Control Over Your Data
Local storage solutions are just that: data stored locally at your location. Common and basic forms of local data backup involve using some form of media to make additional copies of your data that you can access in your home or office. Common forms of local backup have evolved over time.
Initially, magnetic tape was used to store data as backups so that, if errors occurred during regular use, system administrators could rewind their systems without too much disruption. This method also included backups like magnetic floppy disks, which you would commonly see in the early days of home computers. However, magnetic storage isn't the most cost-effective option, and the technology has fallen by the wayside as file sizes increased and backup requirements demanded larger storage media.

Two developments spurred the rise of local storage as the go-to for backups.
- Platter-based hard drives grew in storage volume while dropping in cost. Over the decades, HDD storage sizes climbed from MBs to GBs and all the way up to TBs, while the cost per GB continued to drop. This meant that having a dedicated drive or a backup external drive was more and more attractive to commercial and home users.
- CD-R technology, including CD burners and recording software, emerged as a way for consumers to easily store data on CD media. CD-R storage was a cheap way to store hundreds of megabytes of data on a single disc, making archiving data easy.
Likewise, attaching external drives to a network (or just to a computer through a USB cord) gave people tons of storage for a low up-front investment. Local storage gives users control over their data. That is, they own the media their data is stored on. They can access it, copy it, and move it on their own terms. Sometimes this is incredibly important. For example, businesses that need rapid access to data, or multiple redundant copies, might opt for more complete storage solutions like RAID or NAS setups.

There are several limitations to local storage, however. For starters, local storage doesn't set itself up. Someone must know what they are doing. And if they don't know what they're doing, then setting up a reliable local backup solution can be extraordinarily difficult. This also means that if there isn't a specialist on board, then somebody has to perform the backup themselves (unless they have some local backup software set up). Most importantly, however, local drives can make it easier for backups to fail due to unexpected damage. If a flood, power surge, or fire occurs in a home or office, then the primary data, along with any local storage media located in the same building, are at risk. At that point, it doesn't matter if there is a backup, because it is all at risk.

Cloud backups add another dimension to backup storage because they put backup data on external servers that are typically managed by another company. True cloud backup provides several advantages over local data storage:
- Cloud backups are typically maintained on several decentralized servers for redundant data protection. This means that if a single server containing your data fails, your data won't be lost.
- Cloud backups provide you with multiple copies of data, often with automated software handling data transfers. That means you can have local data and remote data backups syncing in real time.
- Cloud storage and backups provide options to secure data across several applications. Whether it is simply storing data directories for redundancy, backing up information for public access, or synchronizing data across SaaS services, cloud backup gives you a lot of flexibility.
If you've ever used services like Google backup on Drive or Dropbox (either the consumer or business versions), then you have some idea how this works.
You are probably also familiar with the accessibility that cloud backup provides: with a cloud, you often have the option of accessing the data remotely from any location through an app or a web interface. And new cloud backup software exists to link your cloud accounts with your PCs. Many major cloud providers offer this software free to consumers. One place where cloud backups can fall behind local backups is security. If you store data in a cloud that has a public-facing access interface, you are inherently trusting that your data will not be hacked or stolen. Most cloud companies use advanced security measures and provide encryption services, but the most private form of data backup is a server that doesn't face public access.

Backup Solutions and Return on Investment
Phishing scams become more common during the holidays, primarily because more of us are receiving gifts and newsletters from the businesses we follow online. A "phishing" scam is an effort by hackers to pose as official entities, like businesses or banks, to trick you into providing your personal information. Phishers count on the fact that many consumers, presented with an official-looking letter, will fall for the trick. This is only exacerbated by the fact that many email programs will hide the actual email address of a sender and replace it with a name. If you receive strange emails from a company that you shop with, always double-check the sender. If it isn't sent by someone at the domain of the business, then it is a scam. Otherwise, common sense rules here. No bank or business will ask for personal information via email, nor will they ask you to verify anything unless you specifically requested it. If you receive any email out of the blue that asks for financial information or your location, it's most likely a scam.
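Returning to the backup mechanics discussed earlier, here is a toy Python sketch of the kind of automation local backup software performs: it mirrors a source directory to a backup directory, copying only files that are new or have changed. The paths are placeholders, and real tools add scheduling, versioning, and integrity checks on top of this.

import shutil
from pathlib import Path

SOURCE = Path("/home/user/documents")   # placeholder source directory
BACKUP = Path("/mnt/backup/documents")  # placeholder backup destination

def mirror(source, backup):
    copied = 0
    for src in source.rglob("*"):
        if not src.is_file():
            continue
        dst = backup / src.relative_to(source)
        # Copy only if the backup copy is missing, a different size, or older.
        if (not dst.exists()
                or dst.stat().st_size != src.stat().st_size
                or dst.stat().st_mtime < src.stat().st_mtime):
            dst.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(src, dst)  # copy2 preserves timestamps
            copied += 1
    print(f"Backed up {copied} changed file(s)")

mirror(SOURCE, BACKUP)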
<urn:uuid:fa99b6c9-bd93-4942-a2dd-289db948c66d>
CC-MAIN-2022-40
https://bristeeritech.com/it-security-blog/selecting-the-right-data-backup-for-your-home-and-business/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335424.32/warc/CC-MAIN-20220930020521-20220930050521-00584.warc.gz
en
0.937214
1,807
2.53125
3
In Managing Security in the Age of Zero Trust, NetCraftsmen introduces Zero Trust as a data-centric approach to security. This involves identifying the data assets and adjusting or creating an Enterprise Information Security Policy (EISP) that protects data and takes a risk-based approach to security. So, what exactly is a "risk-based" approach from a technological perspective?

From a security management standpoint, there is a risk-based methodology called the CIA Triad: Confidentiality, Integrity, and Availability (CIA). Confidentiality means that only authorized users and processes should be able to access or modify data. Integrity means that data should be maintained in a correct state, and nobody should be able to improperly modify it, either accidentally or maliciously. Finally, Availability means that an authorized user should be able to access data wherever and whenever they need it.

Confidentiality is often simplified to mean encryption. But there are three separate technology areas: encryption at rest, encryption in transit, and emerging technologies applying encryption during processing (a.k.a. confidential computing). This oversimplification is an artifact of pre-Zero-Trust siloed thinking. In this older technological paradigm, encryption was deployed piecemeal on the infrastructure:
- Encryption of Data at Rest: by Storage Engineers using the encryption technologies supported by the various vendor choices
- Encryption of Data in Transit: by Network Engineers using such technologies as MACsec or WAN tunnels with IPSec, iWAN, DMVPN, or other SD-WAN technologies
- Encryption of Data in Use: an emerging technology called Confidential Computing that closes gaps in data security while data is in use
However, confidentiality has always involved privileged access – verifying that the user accessing the data has the right to see or modify it. So the older operational approach treated the infrastructure work and user access technology as independent issues. As a result, to maintain data confidentiality, an enterprise required multiple independent groups to be firing on all cylinders. The Zero Trust approach to confidentiality is to integrate these silos. This means implementing least-privilege access technologies such as role-based access control (RBAC) and even attribute-based access control (ABAC), an emerging technology standard that can apply context to permissions.

Loss of confidentiality is defined as data being seen by unauthorized users. As a result, most of the cyber incidents in the press are examples of confidentiality breaches. To fight this, we need authentication, authorization, and encryption.

Authentication includes a huge number of technologies and techniques, but it can be satisfied with Multi-Factor Authentication. This can consist of a combination of at least two of the following:
- Something the user knows (e.g., password, PIN or account number)
- Something the user has (e.g., key or security token)
- Something the user is (e.g., biometrics)
- Somewhere the user is (e.g., location validated by GPS)

Authorization involves "need to know" mechanisms, and sometimes this is as simple as having separate user IDs for admin access. However, authorization can be more complex, and this is why the NIST standard on ABAC was developed.
This permits policies that differentiate not just on "read and write" access or specific data sets, but can accommodate dynamic rulesets based on location or even on a risk score computed from a series of risk-based attributes (a small sketch of such a policy check appears at the end of this article).

Encryption seems straightforward but can be very complex. Consider that many current data centers use overlay technologies that do not support encryption. While this may be viewed as a problem, it can normally be worked around using hardware technologies such as MACsec (802.1AE). The trick is to step back and look at the problem holistically. However, encryption requires the management of a lot of keys. As a result, you really need to think through the process and make sure your plans involve a comprehensive view of key management.

But confidentiality technology alone cannot solve all issues. NetCraftsmen does a lot of work in healthcare, and the infrastructure we develop often supports electronic medical records (EMR) systems. Many of these are old and cannot differentiate access to patient data as required by HIPAA regulations. As a result, if you can see and modify records for one patient, the only thing preventing you from looking up data on someone you are not treating (and are therefore not authorized to view) is an HR policy. In these cases, the policy might be enforced through the examination of log files. While this happens after the fact, the presence of a forensic trail would be a powerful incentive against snooping.

No single company has a complete product or even product set for confidentiality, let alone Zero Trust, but perfection is the enemy of progress. As a result, we should be looking for solutions that improve the current situation and move us forward. In our work, we are big fans of MFA and, for our own systems, use Okta, but we also support Duo and other vendor solutions. For identity-based secure access and segmentation, we are partnered with Elisity but also work with traditional vendors such as Cisco, Illumio, Palo Alto, and Zscaler.

Ongoing Call to Action
EISPs and the downstream technological policies need to be living systems, kept up to date as the business evolves and changes. As a result, a governance process needs to be established to tie the senior management team to the technology teams tasked with protecting and managing the firm's data assets. For a practical view on including the CIA Triad within your security practice, you can read our blog on this subject: Architecting an Information Security Program for the Enterprise. As always, NetCraftsmen consultants are here to assist and guide your journey to a more secure future. This article is part of an ongoing series on network security.
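To make the ABAC idea above concrete, here is a minimal, purely illustrative Python sketch; the attribute names, roles, and risk threshold are hypothetical and are not drawn from the NIST standard. Access is granted only when role, location, and a session risk score all satisfy the policy.

# Hypothetical ABAC-style check: permissions depend on attributes and
# context, not just user identity.
POLICY = {
    "patient_record": {
        "allowed_roles": {"treating_physician", "nurse_on_shift"},
        "allowed_locations": {"hospital_network"},
        "max_risk_score": 40,  # deny if the session risk score is too high
    }
}

def is_authorized(resource, subject):
    rules = POLICY[resource]
    return (subject["role"] in rules["allowed_roles"]
            and subject["location"] in rules["allowed_locations"]
            and subject["risk_score"] <= rules["max_risk_score"])

doctor_on_site = {"role": "treating_physician",
                  "location": "hospital_network", "risk_score": 10}
doctor_on_cafe_wifi = {"role": "treating_physician",
                       "location": "public_wifi", "risk_score": 65}

print(is_authorized("patient_record", doctor_on_site))       # True
print(is_authorized("patient_record", doctor_on_cafe_wifi))  # False: context denies

Note that the same user is allowed in one context and denied in another; that context sensitivity is exactly what role-based access alone cannot express.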
<urn:uuid:0f7748c1-5c55-4260-a051-fc9bf231afd4>
CC-MAIN-2022-40
https://netcraftsmen.com/cia-triad-part-1-confidentiality/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337339.70/warc/CC-MAIN-20221002181356-20221002211356-00584.warc.gz
en
0.939534
1,260
2.75
3
Martin Salinga (left) and I collaborated on the research.

Phase change memory (PCM) is an emerging non-volatile memory technology that could play a key role in future computing systems. In collaboration with RWTH Aachen University, my team and I at IBM Research-Zurich went in the opposite direction from mainstream PCM research by using only a single chemical element, antimony (Sb), instead of the typical material cocktail. This approach promises not only to make it far easier to miniaturize PCM devices, but also to increase the data density of memory chips and the power efficiency per operation. Our work is featured on the cover of the August issue of the peer-reviewed journal Nature Materials.

PCM at a small scale
PCM works by reversibly and rapidly switching a phase change material between a crystalline state with high electrical conductivity and an amorphous state with low conductivity. An electrical pulse thermally induces the transition between the states. Naturally, smaller amounts of material need less heat, and therefore less electrical energy. In the past, research on phase change materials mainly focused on adjusting their physical properties by adding extra chemical elements to the alloys. However, this resulted in very complex compositions that were difficult to create and maintain in memory devices only a few nanometers in size. At such a small scale, local variations in composition can limit the cyclability, or lifespan, of a device, as the distribution of the relevant atoms in the cells can change in response to operating conditions involving strong electric fields and high temperatures.

Off the beaten track
Cover of Nature Materials, August 2018 (Image provided by XVIVO Scientific Animation)

All of these reasons prompted me and my fellow scientists from IBM Research-Zurich and RWTH Aachen University to take a different approach, using antimony as a valid alternative to conventional materials. Antimony is semi-metallic in its crystalline phase and semiconducting as an amorphous thin film, and it shows a large contrast in resistivity between these two states. It also crystallizes easily and quickly, making it ideal for a PCM in a highly confined structure – a structure which usually slows down the crystallization kinetics. Instead of fine-tuning new phase change material compositions, we will focus on the effects of material interfaces and confinement in PCM using only a single element.

The challenges ahead
A key challenge going forward will be to improve the stability of the amorphous state of antimony, which only remains stable for thousands of seconds at room temperature. The indications are that the retention time can be increased, for instance, by further reducing the film thickness, confining the antimony in all three dimensions, and designing better confinement materials. My prediction is that the first applications to benefit from a "monatomic PCM" could be in the areas of memory-type storage class memory and in-memory computing, which are considered central to future computing systems for artificial intelligence. These are applications where the retention time is not as critical as in conventional storage applications. IBM researchers first used the term storage-class memory in 2008 to describe a group of new memory technologies vying to fill the cost-performance gap between DRAM and HDDs. Storage class memory could accelerate several data-centric workloads, such as database analysis.
PCM could also serve as the elements of a computational memory unit, where certain computational tasks are performed in place within the memory, unlike in conventional computing systems, where the memory and processing units are separated. PCM is also widely explored for use in neuromorphic hardware. Recently, we at IBM Research showed several promising demonstrations of PCM-based computing for artificial intelligence and machine learning.
<urn:uuid:b371097a-e17f-4021-bb56-eab63cd929e3>
CC-MAIN-2022-40
https://www.ibm.com/blogs/research/2018/07/phase-change-memory/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337339.70/warc/CC-MAIN-20221002181356-20221002211356-00584.warc.gz
en
0.93397
758
2.921875
3
Adaptive Learning Technology
When I was in grade school in the 1960s, I learned to convert fractions into decimals. The examples in the textbook were baseball batting averages. Being an avid baseball fan, I took to the lesson like a duck to water, and began calculating batting averages for me and all my friends as we played together. To this day, I know that 1/7 is .143, and I've often amazed people by knowing many fractions instantly to three decimal places. I wonder, however, for those students who did not follow baseball, whether this arithmetic lesson was drudgery. Would examples in their own areas of interest have produced a better understanding of the lesson?

One of my colleagues, a Southerner born and bred, once shared with me that while he was studying economics at Michigan as an undergraduate, he had a perfect score on a microeconomics exam except for one question that he left blank. He told me his economics professor asked him, "John, why did you leave that question blank?" John told me the question was "what is the effect on the supply of bagels if the price of lox goes down?" John explained, "I had no idea what a lox was and how it could have anything to do with a bagel. Now if the question had been about chitlins or boudin or ambrosia, I would have been able to answer." How many students don't "get" examples or cultural references in lectures or exams? Context is so important for learning.

Many students are dutiful and spend the time necessary to understand material regardless of whether it seems relevant to them. Many give up, however, or, as in the second example, just don't understand the reference. More importantly, how much better a learning environment for all students if they can be presented with examples and context that are meaningful and interesting to them. Learning takes place more easily and remains longer (as in the case of me learning batting averages from fractions) if the learning can be placed in a context that's of interest and relevance to the student.

Imagine if a student could select the context in which examples and practice problems are delivered (a toy sketch of this idea appears at the end of this article). What if a student could select environmental examples, or urban, or rural, or health-related, or education-related, or social, depending on their interest and background? While a faculty member can provide a variety of examples, s/he is limited by time and personal experience in what to share with students. That's where adaptive learning technology comes into play. This instructional technology has the potential to become the "killer app" in the learning environment. More importantly, this technology has the potential to engage students in ways not possible for a single faculty member or even a team of faculty. Examples and exercises in the student's area of interest can engage the student and make the material more relevant. For example, are some students most interested in the medical perspective of biology, or the environmental, or the animal, or the behavioral? This technology does not replace faculty, but enhances their practice-set environment. While most applicable in STEM (Science, Technology, Engineering and Mathematics) subjects, this learning technology can conceivably be used for any discipline.
Even in the humanities, the technology could present several different perspectives: historical, political, economic, philosophical, or social, as well as the various interpretations that could not all be covered by the professor. Of course, context is only one possible dimension. Some students thrive not only with different examples and contexts; some need more or less practice than others to master a skill or understand a concept. The beauty of learning technology is that a student can repeat exercises until they get it. This flexibility, and the ability of a good adaptive learning technology environment to be always available and ever patient, makes this technology a potential "game-changer" as a tool to assist faculty by enhancing the learning environment and helping students be more successful.

At Eastern, faculty are already experimenting with and exploring adaptive learning software that begins to address this type of instructional technology. All of it is very much in its infancy at the moment. The higher education technology association EDUCAUSE has an entire initiative on Teaching and Learning, with a section on adaptive teaching and learning. EDUCAUSE reports that adaptive learning tools are already being used experimentally across disciplines. As might be expected, most textbook publishers are also experimenting with this type of learning technology. When the technology matures enough to be generally available and powerful, it will have an enormous impact on teaching and learning.
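As the toy sketch promised above, here is a hypothetical Python example that renders the same fraction-to-decimal skill in whichever context a student picks; the domains and question templates are invented for illustration.

import random

# One underlying skill (fractions to decimals), many surface contexts.
TEMPLATES = {
    "baseball": "A batter gets {num} hits in {den} at-bats. "
                "What is the batting average as a decimal?",
    "health": "A clinic sees {num} of {den} patients recover. "
              "Express the recovery rate as a decimal.",
    "environment": "{num} of {den} sampled rivers met water-quality standards. "
                   "Express this as a decimal.",
}

def make_problem(context):
    num = random.randint(1, 9)
    den = random.randint(num + 1, 20)
    question = TEMPLATES[context].format(num=num, den=den)
    answer = round(num / den, 3)  # e.g., 1/7 -> 0.143
    return question, answer

question, answer = make_problem("baseball")
print(question)
print("Answer:", answer)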
<urn:uuid:e09f10a3-6be0-46d0-896f-eb2e80390071>
CC-MAIN-2022-40
https://education.cioreview.com/cxoinsight/adaptive-learning-technology-nid-30538-cid-27.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337473.26/warc/CC-MAIN-20221004023206-20221004053206-00584.warc.gz
en
0.969517
1,006
2.9375
3
Is your data protected? Both data privacy and data security are critical to mitigating financial, reputational, and compliance risks for enterprises. These terms are often used interchangeably, to much confusion. Understanding the similarities and differences between data security and data privacy is key to establishing a more robust compliance program.

So how are data privacy and security distinct? At the highest level, data privacy focuses on governing internal data access and ensuring the people represented by the data have control over their information. Data security, on the other hand, focuses on preventing unauthorized access to data. In this blog, we'll compare and contrast data privacy and security, and make the case that both are essential and complementary for an effective data governance program.

What Is Data Privacy?
Data privacy ensures data is used responsibly, and that personal information is used in a way that is authorized, fair and legitimate. Privacy laws, policies, and procedures protect data during collection, storage, and processing activities. These policies may be internal to an organization or driven by regulating agencies. Data privacy is most notable in its protection of personally identifiable information (PII), which includes:
- Individual name
- Individual address
- Email address
- Social security number
- Credit card or bank account information
- IP address
Personal information is defined within a data framework by the privacy rules, processes, and technologies that protect the asset. Such rules are useful because they define what makes certain information personal or identifying (and clarify which data need to be removed or de-identified for a record to be anonymized). Recent rules around PII are mostly driven by consumers who value information privacy. They want to exercise their right to control their private data: who uses it, when, and how. In response, local and federal regulatory bodies have established data protection and privacy laws that require organizations to protect and properly manage PII. Some of the most notable regulations are:
- The European Union General Data Protection Regulation (GDPR)
- California Privacy Rights Act (CPRA), expanding the California Consumer Privacy Act (CCPA)
- Gramm–Leach–Bliley Act (GLBA)
In general, these regulations require organizations to have policies explaining why they collect PII and how they plan to use it. If a business sells PII, data leaders need to make sure that consumers have the ability to opt out. Most of these regulations also cover the third-party management and processing of data. As part of managing contracts, data leaders are responsible for monitoring how those outside parties protect PII – and will often go so far as to include clauses about this in contracts.

Privacy in Practice
Although PII privacy is driven by consumers and enforced by regulatory bodies, organizations shouldn't approach privacy with reluctance or treat it as just an add-on. In Privacy by Design — The 7 Foundational Principles, Ann Cavoukian, the former information and privacy commissioner of Ontario, Canada, recommends having privacy "embedded into every standard, protocol, and process that touches our lives" with a universal framework embodying the following principles:
- Proactive not Reactive — Anticipate and prevent privacy-invasive events before they happen.
- Privacy as the Default Setting — Ensure personal data is automatically protected — by default.
- Privacy Embedded into Design — Make privacy an essential component of the system's core functionality.
- Full Functionality — Use an approach where both privacy and security are achieved, rather than having them at odds.
- End-to-End Security — Maintain secure information management throughout the entire lifecycle.
- Visibility and Transparency — Establish accountability and trust, as well as openness and compliance.
- Respect for User Privacy — Empower data subjects to actively manage their own data.
This data privacy framework will enable the authorized, FAIR (i.e., following fair information practice principles), and legitimate processing of personal information.

What Is Data Security?
Data security is a broad function that, at its core, is chartered to protect data. The role of data security has changed over time; it was originally focused on the physical security of hardware and electronic access to it, whereas today the focus has shifted to securing data with a deeper understanding of the data itself. Data security consists of the policies and processes for preventing unauthorized access to the systems, networks, and applications that maintain data. More broadly, you must have controls in place to protect sensitive data from malicious attacks and data exploitation. It is critical that firms view data security as part of governance, risk management, and compliance (GRC). In Data Protection, Governance, Risk Management, and Compliance, author David Hill argues that data security must evolve, and discusses the need to expand data security from an infrastructure-specific capability to more of an information-centric capability that is "good to the last bit."

As part of a robust data security program, you must establish internal policies and procedures to mitigate the risks of a data breach. Some mitigation controls that help protect sensitive information include:
- Multi-factor authentication (MFA) prevents access to resources until a user proves their identity using a combination of methods, such as entering a password plus a code provided via text message.
- Access controls limit user access to data through permissions.
- Network security prevents unauthorized access at the network level.
- Encryption involves using mathematical algorithms to "scramble" data to make it unusable even if someone gains unauthorized access.
- Monitoring activity looks for abnormal activity across systems and networks that may indicate a data breach.
- Incident response puts into action a set of people, processes, and technologies to investigate, respond to, and restore systems when unauthorized access occurs.
It may also be useful to think of data security in terms of stages, which have evolved over time with advancing technology. The Privacy Engineer's Manifesto [1] identifies these stages as:
- Firewalls. In the early days of computing, firewalls prevented unauthorized access to or from a private network.
- Net. With the rise of the internet, concerns around spam and identity theft gave rise to early online privacy measures.
- Extranet. Portals enabled access and self-service features to the few, and firewalls grew more porous as the web transformed from pure publishing to a collaborative, interactive platform.
- Access. Social networks, blogs, and smartphones democratized content sharing — and increased privacy concerns and corresponding regulations.
- Intelligence. Information is tailored to the individual.
Examples include driving apps that provide real-time conditions (and updates based on traffic) and shopping apps that provide local price comparisons. Next-generation approaches to data privacy and security will further integrate data intelligence into processes to ensure access is tailored to user permissions.

What Are the Differences Between Data Privacy and Data Security?
Despite their differences, data privacy and data security are interlinked. IT leaders generally view data privacy as a sub-component of data security. And more recently, data governance leaders are making data security a central focus of their responsibilities. To illustrate the subtle differences between data privacy and data security, consider a bank vault. A bank vault has both security and privacy measures in place to protect the contents within. Security features thwart external threats: guards, an alarm system, and the vault's lock. Privacy measures prevent internal threats: those may include protocols that limit employees' access to the vault or knowledge of its contents. Privacy measures can also mitigate external threats, so that if personal information is stolen, its value is restricted by anonymization.

Taking a wider view, the primary differences between data privacy and data security are:
- What you protect data from: Data security focuses on unauthorized access to data, no matter who the unauthorized party is. Data privacy ensures that sensitive data is used legally, so that personal information is processed in a way that is authorized, fair and legitimate. This ensures information privacy, so that the owner of sensitive data consents to the use of the information while the organization maintains compliance with the practices that protect it during processing, storage, and transmission.
- Who protects the data: Data security focuses on using tools and technologies, like firewalls, user authentication, and network limitations. Data privacy focuses on individuals within the organization who are responsible for protecting data while also informing data subjects about the types of data that will be collected, the purpose of collection, and whether or not data will be shared with third parties.
- How they fit together: Data security is a prerequisite for data privacy, because you need to keep unauthorized users away from data to prevent a malicious attack. Data privacy adds an extra layer of protection by ensuring that people authorized to access systems use data responsibly.

What Are the Similarities Between Data Privacy and Data Security?
While they have several significant differences, the fact that data security is fundamental to data privacy also means that they have many similarities. In fact, most privacy laws include data security protections and best practices. If you do business in a region or industry, or manage a particular type of data, then you must comply with those laws. Compliance risk is a commonality between data security and data privacy. Whether you're a retailer, healthcare provider, or financial institution, you have to follow your industry's compliance mandates or else risk fines and penalties.
Compliance regulations mandate both data security and privacy protocols that organizations must follow, and include:

- General Data Protection Regulation (GDPR): Created the international standard for protecting European Union consumers' privacy by defining who needs to be protected (data subjects), types of protected personal data, and how to use data security technologies as part of data privacy initiatives.
- California Privacy Rights Act (CPRA): Updated the California Consumer Privacy Act (CCPA) to incorporate technical security controls as part of protecting consumer PII.
- Health Insurance Portability and Accountability Act of 1996 (HIPAA): Established the Security Rule and Privacy Rule for managing Protected Health Information (PHI), creating an overlap between the administrative controls used for both.
- Payment Card Industry Data Security Standard (PCI DSS): Established detailed steps for protecting cardholder data that include network security, encryption, and access controls.
- ISO 27701: Expands ISO 27001 to cover privacy controls, establishing a Privacy Information Management System (PIMS) that enhances the existing Information Security Management System (ISMS).
- NIST 800-53 Rev. 5: Provides a catalog of security and privacy controls for information systems and organizations to protect organizational operations and assets, individuals, other organizations, and the Nation from a diverse set of threats and risks.
- SOC 2: Defined by the American Institute of Certified Public Accountants (AICPA), System and Organization Controls (SOC) 2 covers Privacy as one of its five Trust Service Principles.

In fact, you might need to comply with multiple mandates. A doctor's office that collects payments by credit card needs to comply with both HIPAA and PCI DSS.

Data tokenization helps manage both data security and privacy by pseudonymizing sensitive information. Basically, this means processing information in a way that requires additional context to identify the data subject. For example, many companies that need to comply with PCI DSS will use asterisks to replace part of a credit card number. This removes the sensitive information from data at rest and helps you limit user visibility. Often, data tokenization is combined with data encryption to create a complete data security and data privacy compliance posture. (A minimal code sketch of this masking-and-tokenization idea appears at the end of this article.)

The Role of Data Governance in Data Privacy and Data Security

According to the Data Governance Institute, data governance is "a system of decision rights and accountabilities for information-related processes, executed according to agreed-upon models, which describe who can take what actions with what information, and when, under what circumstances, using what methods." An organization's approach to privacy is defined by data governance, e.g., how information is gathered, managed, and used. In this way, data governance is fundamental to your data security and privacy initiatives.

A compliance-focused governance program typically arises due to compliance concerns [2]. These may stem from privacy, security, or access management and permissions concerns, or a need to adhere to contractual, internal, or regulatory requirements.
Often, a charter for this sort of program will make data stewards accountable for protecting sensitive data, and require that they:

- Assess risk and create controls to manage each type of risk
- Enforce compliance requirements, from regulatory to architectural and contractual
- Assign duties, clarify stakeholder roles, and set a decision-rights framework

The Business Case for Data Privacy and Data Security

Risk prevention and mitigation for both data privacy and security offer several business benefits. When you reduce risks, you limit the financial loss that compliance violations can cause while increasing customers' trust in your business. On the data security side, you also protect your business from incurring costs from activities like notifying customers that a breach occurred or rebuilding your brand after a data breach is made public.

Data governance with a data catalog provides a framework to manage data security and privacy at scale. In short, you need to know all the sensitive data that you store, process, and transmit, what technologies use it, who accesses it, and what access they have. With a data catalog, you're able to effectively manage your data privacy and security compliance.

How to Ensure Data Security and Data Privacy with Alation

Key data governance features support data privacy and security while mitigating risk. Alation extracts data to catalog your entire data environment. This creates a single location with a holistic view of all data, which makes it possible to apply the principles of data governance and privacy to all enterprise data. It does this with a suite of key features, which include:

- Classification and tagging. Stewards can organize data by domain and tag sensitive or private data accordingly. Masking features can then conceal PII from data users who do not have access permissions.
- Policy center. Governance leaders can create policies that guide appropriate usage of private or sensitive data. A data catalog will surface those policies to enforce secure, compliant usage of that data at the point of consumption.
- Stewardship workbench. This feature empowers stewards to curate data at scale with help from AI and ML. With this workbench, stewards can apply privacy settings across multiple datasets simultaneously.

With Alation Data Privacy and Compliance, policies are transparently managed to protect sensitive data. Business users can create definitions of data types and categorize them according to compliance requirements. This allows you to apply data privacy controls, like assigning responsibility or data masking. Alation also allows you to leverage autonomous data stewardship, giving your teams the ability to use data without creating data security and privacy risks.

With data risk audit and reporting capabilities, Alation gives you real-time visibility into compliance by tracking data usage to monitor for policy violations that may lead to potential fines and penalties. Alation also boasts rigorous privacy and security certifications for its cloud platform, so your cloud migration is secure and protected.

For more information, request a free demo to learn how the Alation data catalog supports your organization's data privacy and security initiatives.

1. Dennedy, Michelle, et al. The Privacy Engineer's Manifesto: Getting from Policy to Code to QA to Value. Apress, 2014.
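To make the tokenization idea discussed above concrete, here is a minimal sketch in Python. The in-memory token vault, the `mask_pan` helper, and the token format are all illustrative assumptions, not features of any particular product; a real system would keep the vault in a hardened, access-controlled store.

```python
import secrets

# Hypothetical in-memory token vault (illustrative assumption only):
# a production system would use a hardened, audited vault service.
_vault: dict[str, str] = {}

def mask_pan(pan: str) -> str:
    """Replace all but the last four digits of a card number with asterisks."""
    return "*" * (len(pan) - 4) + pan[-4:]

def tokenize(pan: str) -> str:
    """Swap the real card number for a random token; keep the mapping in the vault."""
    token = "tok_" + secrets.token_hex(8)
    _vault[token] = pan
    return token

def detokenize(token: str) -> str:
    """Recover the original value; only callers with vault access can do this."""
    return _vault[token]

pan = "4111111111111111"
token = tokenize(pan)
print(mask_pan(pan))   # ************1111 -- safe to show low-privilege users
print(token)           # e.g. tok_9f2c...  -- safe to store as data at rest
assert detokenize(token) == pan
```

The design point is the one made in the article: the token carries no exploitable value on its own, so a breach of the data store alone does not expose the data subject.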
A smokeless method of vaporizing and then inhaling pot packs a much more powerful punch than simply smoking weed, researchers say. That could raise safety concerns for users—driving, for example.

Marijuana vaporizers heat pot to a temperature just below combustion, allowing people to inhale the intoxicating chemical THC from the plant material without breathing in any smoke. This method produced much more intoxication in a small group of test participants than smoking the same amount of marijuana through a typical pot pipe, according to the report published online Nov. 30 in JAMA Network Open. The study participants also had more adverse effects associated with their pot use when they used vaporizers, and had more pronounced impairment of their ability to think and control their movements, the researchers said.

"It's often a fine line between someone getting the drug effect they desire and having a drug effect that's too strong, and maybe produces paranoia and adverse effects that are uncomfortable for the person," said lead researcher Tory Spindle. "That sort of thing might be more likely with vaporizers," he added. Spindle is a postdoctoral research fellow at Johns Hopkins University School of Medicine, in Baltimore.

These vaporizers aren't to be confused with "vaping"—a term used to describe electronic cigarettes. Survey data has shown that vaporizing is becoming a more popular method of using pot, particularly in states that have legalized recreational use of the drug, Spindle said. "It heats it to a temperature that doesn't reach combustion," Spindle said of the vaporizing devices. "If you look at the cannabis after it's done vaporizing, it doesn't turn into the black ash material it would when you smoke it. It looks exactly like it did when you put it in."

To see if vaporizers deliver a different high than smoking pot, Spindle and his colleagues recruited 17 healthy adults who were not frequent marijuana users and asked them to both smoke pot from a pipe and inhale the vapor produced by a vaporizer. The same 25-milligram dose of THC produced a significantly stronger high when vaporized than when smoked, the findings showed. (Pre-rolled joints sold at dispensaries typically contain 1 gram of pot.)

People on vaporized pot also showed greater impairment than when they smoked the drug, based on testing that gauges the ability to think, reason and perform fine motor skills. Vaporized pot came with more side effects as well, including heart racing (24 percent versus 18 percent for smoked), paranoia (17 percent versus 10 percent), hunger (38 percent versus 33 percent), dry mouth (67 percent versus 43 percent), and red eyes (25 percent versus 16 percent).

Blood tests revealed that people had much higher levels of THC in their circulation after using a vaporizer: about 14.4 nanograms per milliliter (ng/mL) of blood, compared with 10.2 ng/mL when they smoked pot. The effects typically wore off within six to eight hours for both vaporized and smoked pot, the researchers said.

Heating but not burning pot appears to ensure that more of the weed's high-producing chemicals are absorbed by the user, Spindle said. "Our theory is that when you combust cannabis, more of the THC is lost due to the combustion process," Spindle said. "The vaporizer is a more efficient delivery method than the smoked cannabis."

People who don't use marijuana regularly should approach vaporizers with caution, said Nadia Solowij, a professor at the University of Wollongong in Australia.
“There is a perception that it is a safer route given that it avoids burning the plant matter, thus reducing toxins formed by that process,” said Solowij, who wrote an editorial accompanying the new study. “These findings raise concerns for inexperienced users, which include those using [pot] both recreationally but also trying cannabis for medical reasons,” she added. “It may be wise to use a smaller amount of cannabis in a vaporizer to achieve the desired effect,” Solowij concluded. More information: Tory Spindle, Ph.D., postdoctoral research fellow, Johns Hopkins University School of Medicine, Baltimore; Nadia Solowij, Ph.D., professor, University of Wollongong, Australia; Nov. 30, 2018, JAMA Network Open, online The U.S. Centers for Disease Control and Prevention has more about the health effects of marijuana. Journal reference: JAMA Network Open
Historically, developers have worked individually on different modules of an application and later integrated their code with the rest of the team's manually. The next build might not occur for days or even weeks, so it took that long to see whether the new code would break anything. This isolated process often led to developers duplicating their code development efforts, and to considerable effort spent finding and fixing bugs.

In the present era, the growth of Agile and the demand for fast and frequent solution delivery cycles is forcing us to replace the older development and delivery models with a more streamlined process. Continuous integration and continuous delivery is an extension of Agile that focuses mainly on the tools and processes needed to integrate our work into the core code quickly, automate testing, and deliver continuous updates to enable faster application development. The idea behind it is to create jobs that perform certain operations like building code, testing, deploying, and so on automatically, rather than doing them manually. Most teams work in multiple environments other than production, such as development, testing, and UAT environments, and continuous integration and continuous delivery ensures there is an automated way to push code changes to them and, subsequently, to production. Today, many tools are available which support continuous integration and continuous delivery (CI/CD).

Continuous integration (CI) is a development practice that helps developers check in code to their code repository many times a day. Each checked-in change is then verified by an automated build process.

Continuous delivery (CD) is an approach in which teams ensure that every checked-in change is verified, tested, and releasable. Continuous delivery aims to make frequent releases through an automated process.

CI/CD Pipelines: The goal of the continuous integration and continuous delivery (CI/CD) pipeline is to enable teams to release frequent software updates into production, shortening release cycles, lowering costs, and reducing the risks associated with development. Once you have an automated process of CI and CD in place, the path a deployable unit travels is called a pipeline. Creating a CI/CD pipeline may be an overhead initially, but it is essentially a runnable specification of the steps that need to be performed in order to deliver a new version of the software. In the absence of an automated pipeline, engineers would still need to perform these repetitive steps manually, which reduces the productivity of the teams. The typical stages of a CI/CD pipeline are described below.

A pipeline run is triggered when code is committed to a repository hosted somewhere like GitHub. This notifies the build system that something has changed. Other common triggers include automatically scheduled or user-initiated workflows, as well as results of other pipelines.

The Build stage combines the source code and its dependencies to build a deployable product in a Docker container, ready to be shipped to our end users. Programs written in different languages need to be compiled first to create an environment-specific package. For a web or an application server, the package can be in the form of a WAR, JAR, or EXE, whereas for a cloud environment the software is typically deployed in a Docker container, so the build stage creates a Docker image to be deployed to those containers. A minimal sketch of how a pipeline chains its stages follows below.
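As an illustration of how these stages chain together, here is a minimal, self-contained pipeline runner sketched in Python. The stage commands are placeholders (real pipelines are declared in a CI tool's own configuration format), but the fail-fast behavior is the essential idea: no later stage runs if an earlier one breaks.

```python
import subprocess
import sys

# Placeholder commands for each stage; a real pipeline would come from the
# CI system's checked-in configuration, not a hard-coded list.
STAGES = [
    ("build", ["docker", "build", "-t", "myapp:latest", "."]),
    ("test", ["pytest", "-q"]),
    ("security-scan", ["echo", "run scanner here"]),
    ("deploy", ["echo", "push image to staging"]),
]

def run_pipeline() -> None:
    for name, cmd in STAGES:
        print(f"--- stage: {name} ---")
        result = subprocess.run(cmd)
        if result.returncode != 0:
            # Fail fast: later stages never run if an earlier one breaks,
            # which is what keeps broken builds out of production.
            sys.exit(f"stage '{name}' failed with exit code {result.returncode}")
    print("pipeline succeeded")

if __name__ == "__main__":
    run_pipeline()
```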
Test stage

In this phase, automated tests run to validate the correctness of the code and the behavior of the product. The test stage acts as a safeguard that prevents easily reproducible bugs from reaching the end users. The responsibility of writing tests falls on the developers; developers working in a TDD environment create unit test cases. Depending on the size and complexity of the project, this phase can last from seconds to hours. Many large-scale projects run tests in multiple stages, starting with smoke tests, then unit tests and integration tests. Failure during the test stage exposes problems in code that developers didn't foresee while writing it. (A tiny example of such a test appears at the end of this article.)

Security Scan stage

Once testing is done, the product can go through the security scan stage. This stage is required to surface early warnings about vulnerabilities and improper licensing, so the product meets the security requirements of different customers. Vulnerable or poorly licensed products can draw huge fines for product companies. Tools like Veracode, SonarQube, Splint, and Coverity can easily be integrated with the build system to check for vulnerabilities in the product.

Deploy stage

Once the code has passed all predefined tests, we're ready to deploy it. There are usually multiple deploy environments, for example a "beta" or "staging" environment which is used internally by the product team to find functionality breaks.

Advantages of the CI/CD pipeline:

- Faster release cycles
- Reduced risk of finding obvious bugs
- Lower costs by reducing the manual build-to-deployment processes
- Higher-quality products, as the CI/CD pipeline increases collaboration among teams to make a high-quality product
- Better business advantage to meet new market trends and user needs

Disadvantages of the CI/CD pipeline:

- Need to spend a lot of time to find the perfect tool or combination of tools to create the correct build and deployment scenario
- Need to spend a lot of time to automate the process
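To make the test stage described above concrete, here is a small, hypothetical example of the kind of unit test a pipeline's test runner would execute; the discount function and its rules are invented purely for illustration.

```python
# test_discount.py -- the sort of unit test a CI test stage runs automatically.
import pytest

def apply_discount(price: float, percent: float) -> float:
    """Return the price after a percentage discount (illustrative example)."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

def test_applies_discount():
    assert apply_discount(200.0, 25) == 150.0

def test_rejects_bad_percent():
    with pytest.raises(ValueError):
        apply_discount(100.0, 150)
```

A CI server would run these on every commit (for example with `pytest -q`) and fail the pipeline at the first red test, which is exactly the safeguard the test stage is meant to provide.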
In recent years, ‘Smart City’ has gone from a phrase that implied visions of science fiction and the far future, to a buzzword dropped into every other article and conversation, to - finally - something that is now starting to become reality for many people all over the world. With the development of more and more IoT devices and connected objects, even those of us who would not consider our towns to be ‘smart cities’ see our homes and cities becoming more intelligent and connected every day.

According to research, by 2025 the number of megacities is expected to reach 29. Smart cities are already being recognised globally and many are utilising blockchain as the foundation of plans to enhance urban living. In 2018 we saw smart city technology developing faster than ever before, and with the global smart cities market expected to grow to USD 757.74 billion by 2020, it is clear that this trend is only going to continue.

With technology revolutionising the way we live and work, what better time to also address some of the deeper issues with our current society. As well as being more connected and intelligent, the first smart cities will also be the sites of large-scale, bold regulatory reforms. These smart cities are expected to become the sites for far-reaching social experimentation and changes to society.

Blockchain and smart cities

It should come as no surprise that the other key technological revolution of our time will also bring fundamental advantages and changes to the way smart cities operate. This technology is, of course, blockchain. Blockchain is capable of revolutionising the way we do almost any kind of transaction, and even something as complex as a smart city is ultimately multiple simultaneous transactions. The underlying platform on which all the interconnecting technologies of a smart city run must be fundamentally reliable, trustworthy, transparent and, critically, highly tamper-resistant. Blockchain technology offers all of these qualities, as it is able to process vast amounts of data in an efficient and secure way.

One example of an area of smart cities that blockchain can have an obvious impact on is currency and money. The concept of smart money for smart cities has gone largely under the radar until now, but let us consider it. Traditional economic systems only award value by the exchange of money in a transaction. However, in a smart city, transactions serve the purpose of exchanging crypto tokens, which can later be exchanged for digital currencies or real-world currencies. This is similar to any digital exchange of money, but the transaction record is cryptographically secured, making it extremely difficult to tamper with.

Today, our economic transactions rely on banks as third-party facilitators and enablers - but using secure blockchain technology to facilitate smart money would ultimately eliminate the need for banks. Banks and paper money are brick and mortar institutions, while smart money using cryptocurrencies and smart contracts are digital realities. By obviating the need for banks and paper money to enact transactions that relay value, the resulting digital currencies are, effectively, smart money.

Consider the benefits that could be brought to society through the use of smart money in smart cities.
The smart city central computer could have the power to award crypto tokens in return for actions and behaviours that are deemed of benefit to society - something that is made possible through the decentralised system, with just the consent of the people participating in the system. In a smart city such as this, where a central computer distributes digital tokens in return for actions, whatever value has been assigned in the computer for an action will result in an award of tokens of that value. This amounts to the monetisation of service by virtue of consensus, allowing us to overhaul many injustices suffered in today's society. (A toy simulation of this token-award idea appears at the end of this article.)

For example, in this smart city, there would be no such thing as a gender wage gap resulting from human bias. There would be nothing subjective about the assignment of value to different actions, so no question about the value of crypto tokens awarded for certain actions. Everyone doing the same actions and job would earn the same number of crypto tokens. Thus, any resulting wage differentials could only be the result of self-selection bias.

Going one step further, the very concept of poverty will be challenged in the smart city of tomorrow. Poverty, as we know it, is built into brick and mortar economic institutions that rely on exchanges of physical currencies and the perpetual existence of hierarchies of trust, power, and authority to approve everything. In every hierarchy, someone is at the top and someone is at the bottom. Yet, in a decentralised economy where tokens are awarded for actions and transactions, there would be no reason to imagine that people would be abandoned or set apart because they do not have great jobs, access to abundant resources, or enough money to survive. The decentralised economy could be less authority-based and more inclusive, less hierarchical and more of a sharing economy.

No good deed goes unrewarded

If everyone in the smart city has a certain baseline access to resources, then people's behaviour and actions will determine how much they earn. When actions and positive behaviours are rewarded with crypto tokens, a whole range of new revenue streams will open up in the smart city. As a crude example, a person who took it upon themselves to clear the pavement and driveways on their street when it snows would be awarded a certain number of crypto tokens, because the central computer would register this as a prosocial action. This would entice more people to do good works in their community to earn crypto tokens.

To take the idea further still, in this smart city parents who elect to stay at home taking care of a baby or child would no longer earn nothing for the invisible work done. A parent who engages in ongoing caregiving would be rewarded with tokens for the actions they undertake. No longer would the people doing what many consider to be the hardest, most important job in society work for no pay. Every positive, useful and necessary action will have value in the smart city of tomorrow - and the person doing it will be rewarded fairly.

Bruce Khavar, CEO, A-Nex
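As a thought experiment only, the sketch below simulates the token-award scheme described above in plain Python: a ledger credits citizens with tokens at consensus-assigned values for registered prosocial actions. The action names and token values are invented for illustration; a real smart-city system would record these on a blockchain, not in an in-memory dictionary.

```python
from collections import defaultdict

# Consensus-assigned token values per action -- hypothetical numbers only.
ACTION_VALUES = {
    "clear_snow": 20,
    "childcare_hour": 15,
    "report_pothole": 5,
}

balances = defaultdict(int)

def record_action(citizen: str, action: str) -> int:
    """Credit a citizen's balance with the agreed value of a prosocial action."""
    tokens = ACTION_VALUES[action]  # same action, same reward, for everyone
    balances[citizen] += tokens
    return tokens

record_action("resident_42", "clear_snow")
record_action("resident_42", "childcare_hour")
print(balances["resident_42"])  # 35
```

Note how the lookup table encodes the article's central claim: because the value of each action is fixed by consensus, two people performing the same action necessarily earn the same reward.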
In today's world, it's become common to conduct a significant amount of business online. You schedule Zoom meetings, attend conferences, and meet with clients from your laptop. All you need is a good WiFi connection. But while you can have conversations and share your screen on a video conference, electronic signatures require extra layers of protection. This is because they are your proof of someone's agreement to enter into a contract with you, or of the identity of the person reading a restricted message. If you want them to be as secure as possible, you want to implement the right technologies.

What is a Digital Signature?

Electronic signatures are the online equivalent of traditional, handwritten signatures. They exist to authenticate the identity of the person who's accessing and/or signing a document. They are practical, since they don't require the parties to be present in the same room — a feature that's increasingly necessary in the world of remote work and global business transactions. Once each party has signed a contract, they automatically receive an email with a signed PDF version of the agreement.

That said, digital signatures are used for purposes other than signing contracts (as explained below). Mostly, these uses are designed to protect the integrity of communications. As such, they require certain security protocols, since, by nature of existing online, they are more prone to security breaches than taking a physical document from point A to point B.

Why Use Electronic Signatures?

Digital signatures are used to authenticate documents by establishing many factors:

- Proof of receipt
- Consent of the signer
- Time stamps
- Evidence of whether a message has been modified

As such, implementing electronic signature practices is a crucial component of file security.

Why It's Crucial to Authenticate Electronic Signatures

It's important to note that although digital signatures are electronic signatures, the term electronic signature is not always synonymous with digital signatures. You can electronically sign a document by clicking on a box with a mouse or by tracing a screen with your index finger or mouse the same way you would sign with a pen.

A digital signature, on the other hand, uses what's known as a hash function to protect the integrity of the communications. A hash is a string of numbers and letters generated by an algorithm, and it is unique to each file that's shared. In addition to generating the hash, the signing software protects this string of letters and numbers with what's called public key infrastructure (PKI): the signer encrypts (signs) the hash with their private key, and anyone holding the corresponding public key can verify that signature. So if an unauthorized person modifies the document, its hash will no longer match the signed one – leaving a trail for the sender to realize that the communication has been compromised. A short code sketch of this sign-and-verify round trip follows below.

Another way to protect the integrity of communications is to authenticate e-signatures with biometric verification. This ensures that a document becomes available only once the software it's stored in recognizes the identity of the intended recipient.

How to Create an Electronic Signature

How you create traditional electronic signatures varies by platform. Adobe, Microsoft, macOS, and other providers each have their own processes, and they are fine for simple signatures of agreements without having to meet in person. However, these serve specifically to allow users to manage signature options.
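To ground the hash-and-PKI mechanism described above, here is a minimal digital-signature round trip using the widely used Python `cryptography` package. It is a generic sketch of the technique, not the workflow of any particular e-signature product, and the contract text is of course made up.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

# The signer keeps the private key secret; anyone may hold the public key.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

document = b"I agree to the terms of this contract."

# Sign: the library hashes the document (SHA-256) and signs the digest
# with the private key.
signature = private_key.sign(
    document,
    padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH),
    hashes.SHA256(),
)

# Verify: succeeds only if the document is byte-for-byte unchanged.
try:
    public_key.verify(
        signature,
        document,  # change even one byte here and verification fails
        padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH),
        hashes.SHA256(),
    )
    print("signature valid: document is authentic and unmodified")
except InvalidSignature:
    print("signature invalid: document was tampered with")
```

This is exactly the "trail" the article mentions: a modified document no longer matches the signed digest, so verification raises an error instead of passing silently.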
To provide increased end-to-end security for your documents, it behooves you to use advanced technologies. Smart Eye Technology offers several ways to protect digital and electronic signatures, and thus verify a person's identity, without the need for unreliable passwords.

Continuous Biometrics Authentication

Most security solutions offer one-time login verification for intended users. Smart Eye offers the option of continuous facial recognition for as long as the document is open. If your intended recipient walks away from their device, a pop-up prevents unauthorized viewing and access.

Self-Protecting Data Technology + AES256

A proprietary Unbreakable eXchange Protocol (UXP) that is self-keyed and self-governing provides AES encryption that secures your documents while in transit, while at rest, and while being accessed by individuals.

Active Intelligence comes with a control panel that provides a detailed document log trail. This lets you retain control over everyone who has access to your documents. And if you want to see all document activity in real time, you can do so with advanced capabilities which, in addition to showing you who is accessing a document, also identify flight risks and other internal threats.

Protect Your Electronic Signatures With Smart Eye Technology

At Smart Eye Technology, we provide powerful, comprehensive, and affordable cybersecurity measures across all devices. We also make things simple by allowing you to control all implemented tools from one single platform. Contact us or schedule a demo to see how we can help you create an electronic signature so you can protect your data.
Behind the chubby cheeks and bright eyes of babies as young as 8 months lies the smoothly whirring mind of a social statistician, logging our every move and making odds on what a person is most likely to do next, suggests new research in the journal Infancy. “Even before they can talk, babies are keeping close track of what’s going on in front of them and looking for patterns of activity that may suggest preferences,” said study co-author Lori Markson, associate professor of psychological and brain sciences and director of the Cognition & Development Lab at Washington University in St. Louis. “Make the same choice three or four times in a row, and babies as young as 8 months come to view that consistent behavior as a preference.” The findings demonstrated that infants look for consistent patterns of behavior and make judgments about people’s preferences based on simple probabilities calculated from observed events and actions. Co-led by Yuyan Luo, an associate professor of psychological sciences at the University of Missouri-Columbia, the study may shed light on how infants and young children learn about people’s preferences for a certain kind of food, toy or activity. It might also explain why kids always seem to want the toy that someone else is playing with. “Consistency seems to be an important factor for infants in helping them sort out what’s happening in the world around them,” Markson said. “Our findings suggest that, if a person does something different even a single time, it undoes the notion of someone having a clear preference and changes an infant’s expectations for that individual’s behavior. In other words, if you break the routine, all bets are off in terms of what they expect from you.” The findings confirmed that infants as young as 8 months are already developing the ability to see the world through someone else’s eyes, to sense what another person may or may not know, think or believe about a situation. Because babies can’t tell us what they’re thinking, researchers had previously speculated that the ability to see life from someone else’s perspective did not develop until about 4 years of age. But more recent research over the past decade gets around this spoken-language barrier by relying on a proven premise — that babies spend much more time looking at events they consider to be new and unusual. In this study, Markson and Luo conducted a series of experiments to track how infant “looking times” changed when an actor made an unexpected choice between one of two stuffed-animal toys displayed before the infant on a small puppet stage. They corroborated these findings using a similar experiment that tracked whether infants, when asked to give a toy to the actor, would reach more often for the toy consistently chosen by the actor in previous trials, thus implying that the infant understood the actor’s preference. The experiments were conducted on a sample of 60 healthy, full-term infants with an even split of males and females ranging in age from 7 to 9 months and an average age of 8 1/2 months. Seated on a parent’s lap, the infants watched as a young woman reached out and grabbed one of two stuffed animals on the stage, either a white-and-brown dog or a yellow duck with orange beak and a purple bonnet. During the “familiarization” phase of these experiments, the toy selection process was repeated four times under three separate conditions. In the “consistent” condition, a woman in a blue or black shirt picked up the yellow duck four times in a row. 
In the "inconsistent" condition, the same woman picked up the duck three times and the dog once. And, in the "two actor" condition, the woman in the blue shirt selected the duck three times, while another woman in a white shirt selected the dog once.

After each four-trial familiarization phase, the researchers observed the babies' reactions as the women reappeared on the stage and made a fifth selection, either going back to the previously targeted duck or making a new selection of the dog. Two trained observers watched the babies' reactions through concealed peepholes and independently coded the babies' "looking time" responses based on seconds spent watching each toy-selection event. Video cameras captured both the babies' reactions and the toy-selection process so that response-time coding could be further analyzed and confirmed.

Findings confirmed that the babies spent about 50 percent more time looking at selections that represented a break from consistent patterns made in the familiarization trials.

"Infants who saw someone make the same choice three or four times in a row showed clear signs of being surprised when that person did not follow the same pattern in the future," Markson said. "They obviously paid more attention to actions that did not fit their assumptions about what toys the women appeared to prefer most."

In a second phase of the study, researchers reaffirmed their findings using a variation on the experiment in which the women who had chosen the stuffed animals during the trial phase asked the infant to choose between two toys by saying: "Can you give it to me? Can you give me the toy?" In this variation, the infants also seemed to have made assumptions about the women's toy preferences, reaching for the stuffed animal that had been consistently chosen by the woman during the trial phase.

"Our study is the first one to show how inconsistent choices affect infants' understanding about others' preferences," Markson said. "Based on these findings, we hope to further explore how ratios of consistent/inconsistent choices matter to infants and eventually compare infants' understanding to adults' knowledge about others' choices."

Other co-authors include Laura Hennefield, a postdoctoral research associate at Washington University; and Yi Mou and Kristy vanMarle of the University of Missouri-Columbia.

Source: Chuck Finder, WUSTL
Image Source: Image adapted from the WUSTL news release.
Video Source: Video credited to Washington University in St. Louis.
Original Research: Full open access research for "Infants' Understanding of Preferences When Agents Make Inconsistent Choices" by Yuyan Luo, Laura Hennefield, Yi Mou, Kristy vanMarle, and Lori Markson in Infancy. Published online May 26, 2017. doi:10.1111/infa.12194
We at Jamf are firm believers that devices are no longer a luxury or simply a "cool way to learn". Mobile devices and technology are fundamental to a child's future and success as they grow older. We believe everyone in the world deserves access to these tools when they are learning, and that is likely why your school has made a considerable investment in devices.

But what about when the device comes home for learning and kids don't have the same guidelines they have in the classroom with their teachers? How do parents make sure their child is using the device the way it is supposed to be used? Ensuring their child has a healthy relationship with devices is tough, and we believe many parents would agree, but it doesn't have to be. Talk to your school about Jamf Parent and the "Parent App".

Jamf Parent is a free iOS and watchOS app that empowers parents to manage their children's school-issued devices. Parents can restrict which apps children can access on their devices, receive notifications when a child arrives at school, and schedule homework time or bedtime to allow or restrict certain apps. With our release of watchOS compatibility, our goal is to make Jamf Parent accessible where and when parents want it to be.

All of this has many functional purposes and benefits, and we want to be here to help you both understand them and convey them to your school or fellow parents. Here are a few:

1. Creating a focused learning environment at home

In class, students are in that true classroom setting, with a teacher teaching a lesson and fellow students making it easier for them to understand it is time to learn. For many kids, using a device at home likely doesn't have the same feeling. After all, many kids may feel as though when they have their device at home, it's the time they get to play games, watch videos, or use apps that would otherwise distract them from their schoolwork.

Using Jamf Parent, you can create that focused learning environment by restricting access to apps that aren't needed for learning and scheduling set study times with simple toggles and timers. This helps extend the classroom "feel and structure" they are comfortable with into their life at home.

2. Helps parents, teachers, child, and school achieve educational goals

With just as much reliance on learning at home as in the classroom, and relatively strict educational goals being put in place around children's advancement, helping parents provide the learning environment mentioned above makes life easier on parents and helps kids learn undistracted, which makes teachers' lives easier when it comes to moving through material. All of this helps the student body as a whole achieve the level of learning that schools are hoping for.

3. Helps parents feel their child is protected

For many parents, giving their child access to devices is scary. Yes, it is necessary to their learning, but if unprotected, it can open children up to content parents don't want their children viewing – on purpose or by accident. Jamf Parent allows parents to have some insight into how devices are being used and access to shape that usage positively.

4. Can create structure and a sense of reward

Most kids thrive on structure, and some kids benefit from the feeling of being rewarded for their work. It's good for them to learn that they get benefits from completing their work.
With this in mind, parents are able to create an environment where their child is granted access to apps outside of their schoolwork after achieving certain goals – completing study time, finishing all their homework, or simply earning a break. With simple toggles for turning access to parts of the device on and off, this is easy to set up.

5. Jamf Parent extends beyond the classroom and home

Jamf Parent has specific features for parents surrounding a device's location. This feature allows parents to set up rules for devices based on specific locations and view a device's location should their child lose the device or should they want to make sure their child has arrived someplace safely. Think sports practice, children getting on the bus, or kids going home with friends. Now parents have the ability to quickly check in and feel comfortable with where their kids are and, based on the rules they created, how their kids can use their mobile device while there.

6. Easily usable and accessible

These features are great simply because of how easy it is for parents to get the most out of them. With the app available on a parent's iOS and now watchOS device, it is accessible everywhere parents want it to be. Set entire schedules up from your phone or restrict access to apps as you need to on the fly; it's what fits your child's schedule. That's what matters the most: creating an environment specific to your child's learning.

These are just 6 simple perks of Jamf Parent for parents, kids, and schools, but it certainly doesn't stop there. One of the largest benefits of all of this is helping children create healthy relationships with mobile devices. Usage of a device helps with learning skills for their future, but too much usage can create an unhealthy reliance or addiction to screen time. Use Jamf Parent to help guide your child toward healthy habits and growth.

If you liked this and want more resources around how schools can facilitate remote learning, check out our e-book. If you have any questions or want to tell us how you talked to your school about Jamf and Jamf Parent, we want to hear from you. Let us know by contacting us or engage with us on social media!
Ransomware quite often targets businesses (for example, hospitals) rather than individuals. Corporations have more valuable data and more money for ransom; the demand increases from roughly $500 per computer to $15,000 for an entire enterprise. Below, different variants of ransomware are examined to help users get an idea of what might be coming down the Internet pipeline. So keep an eye out for these characteristics before your network is taken hostage.

Deleting files at regular intervals to increase the urgency to pay ransom faster, Jigsaw ransomware operates like this: for every hour that passes in which victims have not paid the ransom, another encrypted file is deleted from the computer, making it unrecoverable even if the ransom is paid or the files are decrypted via another method. The malware also deletes an extra 1,000 files every time victims restart their computers and log into Windows.

Encrypting entire drives, Petya ransomware encrypts the Master File Table. This table contains all the information about how files and folders are allocated.

Encrypting web server data, RansomWeb and Kimcilware are both families that take an unusual route — instead of going after users' computers, they infect web servers through vulnerabilities and encrypt website databases and hosted files, making the website unusable until the ransom is paid.

DMA Locker, Locky, Cerber and CryptoFortress

Encrypting data on network drives, even on those that are not mapped, DMA Locker, Locky, Cerber and CryptoFortress are all families that attempt to enumerate all open network Server Message Block (SMB) shares and encrypt any that are found. Maktub ransomware compresses files first to speed up the encryption process.

Not safe in the cloud

Deleting or overwriting cloud backups: in the past, backing up your data to cloud storage and file shares was safe. However, newer versions of ransomware have been able to traverse those shared file systems, making them susceptible to attack.

Targeting non-Windows platforms, SimpleLocker encrypts files on Android, while Linux.Encode.1 encrypts files on Linux, and KeRanger on OS X.

Using the computer speaker to speak audio messages to the victim, Cerber ransomware generates a VBScript, entitled "# DECRYPT MY FILES #.vbs," which allows the computer to speak the ransom message to the victim. It can only speak English, but the decryptor website it uses can be customized in 12 languages. It says "Attention! Attention! Attention!" "Your documents, photos, databases and other important files have been encrypted!"

Ransomware as a service is a model offered on underground forums. It provides the malicious code and infrastructure to facilitate the transfer of funds and the encryption key for the victim to be able to access their information. Tox ransomware does this.

A professional outsourced IT team can protect your business from today's ransomware. Contact Gulf South Technology Solutions today to discuss a plan of attack to keep your business safe.
Technology is getting smarter and, thanks to recent big advances in machine learning, it can now automate many of the functions that used to be performed by humans. Self-healing automation is the next step in this revolution. It refers to applications, devices, and systems that can discover system faults or security problems and make the necessary changes to fix them, without human intervention. It's set to be a game-changer for IT departments that spend huge amounts of time resolving issues and dealing with security threats.

What is self-healing automation?

The idea that systems could be designed to diagnose and repair themselves can be traced back to the early 2000s, when IBM coined the term "autonomic computing." It was inspired by the human body's ability to self-heal. Now that technologies such as mobile networks and the internet of things (IoT) look set to rival the human body in complexity, the concept is becoming more relevant than ever before.

A key attribute of self-healing automation is that it is proactive, not reactive: it acts on faults as they emerge rather than after they cause damage. As such, this type of automation will become not just a benefit but a necessity when using business systems that are too big and complex for people to manage alone.

What are the business benefits?

It's been said, accurately, that every company is now a technology company. Businesses of all sizes need their software, computers, and office machines to run efficiently and securely. In today's increasingly connected world, they can't afford outages that will significantly disrupt customer service.

In the office, self-healing automation is already beginning to play an important role in protecting businesses from malicious security breaches and attacks. Laptops, mobile phones, and other networked devices are all potential targets for hackers who want to break into the company's network. The risk even extends to previously overlooked devices such as multifunction printers, which are powerful computers in their own right. As a result, some of the latest printer models come with self-healing features built in.

Self-healing is particularly suited to virtualized IT environments such as the cloud, where applications and resources can be quickly added and removed as needed. As distributed computing services become more popular, self-healing automation will be critical to making sure they remain reliable, stable, and secure.

The biggest benefit of self-healing automation is its ability to "look ahead" and shut down problems or attacks before they cause serious loss or disruption. This type of preventative action, however, may require an advanced system of predictive analytics that is able to recognize which problems are serious and which are simply false alarms. So, before investing, consider whether it might be enough to simply update your existing software, computers, and/or network devices to newer versions. A toy sketch of the basic detect-and-repair loop follows below.

In the coming decades, there can be no doubt that self-healing automation will play a vital role in keeping business services running at optimum efficiency. As the technology continues to develop, it will become a must-have for companies that want to focus on growth and innovation, and not just keeping the lights on.
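Here is a deliberately simple illustration of the detect-and-repair loop at the heart of self-healing automation, assuming a hypothetical health-check URL and restart command; production systems layer predictive analytics on top of far richer telemetry than a single HTTP probe.

```python
import subprocess
import time
import urllib.request

HEALTH_URL = "http://localhost:8080/health"          # hypothetical endpoint
RESTART_CMD = ["systemctl", "restart", "myservice"]  # hypothetical service

def healthy() -> bool:
    """Probe the service; treat any connection error or non-200 as a fault."""
    try:
        with urllib.request.urlopen(HEALTH_URL, timeout=2) as resp:
            return resp.status == 200
    except OSError:
        return False

while True:
    if not healthy():
        # Detect the fault and apply the fix without human intervention.
        print("health check failed -- restarting service")
        subprocess.run(RESTART_CMD, check=False)
    time.sleep(30)  # proactive polling keeps downtime to seconds, not hours
```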
This blog post aims to stress the importance of MLOps in the ML project lifecycle, assert the necessity of MLOps in the context of the entire Software industry, explain its relevance within the context of DevOps, and show how the changing Software development environment and processes help the ascent towards the peak of the Digital Revolution.

MLOps – What's that got to do with Production or Revolution?

To understand this relation, we need to go through a bit of history and some examples of how inventions and innovations revolutionized the technology industry, which has, in turn, revolutionized several other industries. We'll put this in the context of Machine Learning and MLOps, and explain the rationale behind the necessity to put more effort and focus into it. There are so many jargons, acronyms, noise, and clutter around this space that decision makers are finding it hard to come to a conclusion on when and where to invest. The top 3 questions and concerns that we hear from business stakeholders when we pitch the importance of MLOps are:

- MLOps seems to be much more intense than the Development Operations we have in house. Why can't this be integrated into the existing DevOps cycle?
- Can we consider MLOps only after we vet the success of ML projects or ideas?
- The cost of implementing MLOps right away might outweigh the budget for the ML PoC or idea. How can we justify this additional spending?

This article aims to alleviate these concerns and make the answers to these questions self-evident by stressing the importance and necessity of MLOps.

To understand the significance of the term "production" in the title, let's delve a bit into the history of the Industrial Revolution. This will provide the context for the argument. The first industrial revolution started with water and steam, and created a disruption in how mechanical production happened. Steam-powered machines and machine tools revolutionized how tasks were accomplished, and how people got from one place to another. This was the beginning of the factory, and the transition to new manufacturing processes.

The technological revolution saw rapid progress in science, applied science, and technology in mass production. Factories, production lines, and processes vastly improved with the advent of gasoline engines.

The digital revolution saw groundbreaking inventions happening with semiconductors, Integrated Circuits (ICs), miniaturization, microprocessors, affordable computing device form factors, wireless technologies, the internet, etc. This was the beginning of the Information Age, and most importantly this was the time when Software came to be recognized as a product; this outlook changed the entire dynamics of the age.

The exponential growth in the digital age happened due to Software. Once we arrived at a point where we were able to create the right kind of generic computing hardware for the software to shine on, things started moving at an exponential pace. Every industry has to write code and program its systems in some form or the other. Every device or piece of equipment that you see in any domain is heavily software driven. Take for example EDA, CAD/CAM, physical design tools, hardware description languages like VHDL or Verilog, AV workflows, broadcasts, and content delivery, just to name a few. Hardware and chip design, development, and manufacturing workflows are heavily software driven, and this led to much improved platforms to run even better Software, which led to an optimizing cycle.
Emergence of a common theme

If closely analyzed, we can see a pattern emerging across all the industrial revolution phases. Each one starts with mechanization and automation using new discoveries or inventions. But across each phase we see a period of drastic improvement to the factory and production line prevalent during that phase. These factory and production line advancements lead to the pinnacle of each phase, until they are completely surpassed by a totally new invention. For example, it was the James Watt steam engine that elevated the first phase, the internal combustion engine revolutionized the second, and now Software is changing the third.

Since the current digital phase is primarily Software driven, this summit can only be reached through advancements in the Software development and delivery process. That is, the very Software factory that produces Software. This has to span across industries and domains. The way we do Software Development now is vastly different from what was the norm 10 years ago. Alongside that, we are also witnessing the emergence of new Software development paradigms like Deep Learning, helping us reach new areas and fill gaps which we thought would never be possible with Software.

Let's now talk about this new Software paradigm and the improved Software factory and production lines.

Enter the DevOps Era

DevOps is now considered a standard practice, whether teams realize they are following it or not! Development teams across the globe have adopted the practice knowingly or unknowingly. It is slowly turning out to be the de facto way of Software Development and Delivery. There is still a long way to go before DevOps gets fully adopted, and the industry as a whole is starting to benefit from the rapid pace and quality of Software.

DevOps is not a single person's job or a single team's job. It's not a job title, because it is not a job function, role, or technology per se. It is a collaborative methodology for doing Software Development and Delivery. Over the past few years, the amount of quality tooling, processes, and workflows that got introduced to the development and production line has made significant improvements to the way Software is produced and delivered. As noted earlier, mostly everyone in every industry is writing Software one way or the other, and this trend is going to go up exponentially.

Machine Learning, a new Software paradigm

In Artificial Intelligence parlance, there are different approaches like Symbolic, Deep Learning, and hybrid ones like IBM Watson. When we use the terms AI, ML, or Deep Learning in this article, we mean "Supervised Learning," which is one form of Deep Learning. Other AI and ML approaches might need radically different computing platforms, development, delivery, and operational workflows, which are beyond the scope of this article.

Supervised learning is basically learning an X -> Y mapping, a function that maps an input to an output based on example input-output pairs. With the supervised learning approach, there is a paradigm shift happening in how we program computers, and this is changing our relationship to computers from a development and usability perspective. Instead of programming computers, the ML approach is to show them, and let them figure it out. That's a completely different way of developing Software, when whole industries are built around the idea of programming computers. Educational institutions and corporations are only now slowly catching up to this paradigm shift.
So what needs to be shown, and what is there to figure out?

In simple terms, we need to show the data and the mapping (X -> Y), and what is being figured out is an approximate function for the mapping. X denotes the feature vectors from the input data, and Y can be thought of as the labels for the ground truths. The approximation functions are figured out using backpropagation to compute gradients, combined with optimization algorithms like Gradient Descent, Momentum, Adam, and RMSprop. These algorithms continuously try to optimize the weights and bias factors by minimizing the cost or loss function for the entire training set. The goal is to identify weight factors that make the cost function go down the slope and reach a good optimum, ideally the global one, as quickly as possible; in deep networks the loss surface is generally non-convex, so in practice a good local optimum is the realistic target. This should be optimized over the entire training sample.

This is a mouthful, but in very simple terms you need to show the data and mapping or labels to the computer, and the algorithms will learn this mapping as an approximation function, which we can later use for prediction. (A minimal worked example of this loop appears at the end of this section.)

But how does this actually work? Nobody clearly knows, and people have drawn analogies to how the brain learns, which is a bit far-fetched. But the fact is, this "forward-prop" and "back-prop" method used in Supervised Learning has turned out to be a very good way of finding an approximation function, and this function will work quite well, subject to certain conditions. These conditions form the pillars of our discussion going forward.

Recently, researchers and big internet companies have shown us that supervised learning works brilliantly well, especially with some unstructured use cases like image, audio, video, speech, and NLP, where traditionally computer algorithms (expert systems) were not that good. Deep supervised learning is a very empirical and iterative process, and by "empirical process" we mean you just have to try a lot of things and see what works. There is no magic bullet! Remember, we're trying to learn an approximation function. Since this is entirely data driven, it will always be evolving. Data is the fuel here. For compliance reasons, even the explanation for a prediction or result from a neural network model needs to be done in data parlance.

TL;DR: What you need to show the computer is a lot of good data and mapping (labels), and what it is figuring out is an approximation function.

So, isn't this Software?

Yes, it is Software, and it also needs tons of typical software and hardware around it to work. It's a different way of programming, the key factors being:

- Data is the fuel here
- Labelling is the labor here
- Experimentation is the process here

And maybe we can also add this:

- Weaving networks differently is the research here

All of this is very much iterative and empirical in nature, much more so than the typical SDLC (Software Development Lifecycle). The empirical nature of the Model Development Life Cycle is much more of a necessity than in typical SDLC. This cycle needs to happen at a much more agile pace, needs to be monitored and measured continuously, can never stop because of data evolution or seasonality, needs to be sensitive, needs to be responsible, and the list goes on. It's not that typical Software processes don't need any of these; in ML they are a necessity. So we hope you are getting a sense of where we're going with this, and why we're pitching the importance of MLOps to keep all of this running and improving. Please keep reading, as the pitch will be clearer after some more points.
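As a minimal worked example of the forward-prop / back-prop loop described above, the NumPy sketch below fits a single-neuron logistic model to toy data with plain gradient descent. Real networks stack many such units and use optimizers like Adam, but the learn-an-approximation-function idea is the same; the data and hyperparameters here are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy training set: the X -> Y mapping we want the model to approximate.
X = rng.normal(size=(200, 2))              # feature vectors
Y = (X[:, 0] + X[:, 1] > 0).astype(float)  # ground-truth labels

w = np.zeros(2)   # weights
b = 0.0           # bias
lr = 0.5          # learning rate

for step in range(500):
    # Forward prop: predicted probability for every training example.
    z = X @ w + b
    a = 1.0 / (1.0 + np.exp(-z))  # sigmoid activation

    # Cost: cross-entropy loss averaged over the entire training set.
    loss = -np.mean(Y * np.log(a + 1e-9) + (1 - Y) * np.log(1 - a + 1e-9))

    # Back prop: gradients of the loss w.r.t. the weights and bias.
    dz = a - Y
    dw = X.T @ dz / len(Y)
    db = dz.mean()

    # Gradient descent: step downhill on the loss surface.
    w -= lr * dw
    b -= lr * db

print(f"final loss {loss:.4f}, accuracy {((a > 0.5) == Y).mean():.2%}")
```

Nothing in the loop encodes the rule "positive when the features sum above zero"; the weights discover an approximation of it purely from the shown data and labels, which is the paradigm shift the article describes.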
Analogy to make the ML paradigm apparent

When we talk or write in English, do we always think about and apply the rules of English grammar? I am a non-native English speaker, and I never learned English grammar by the rule-based (symbolic) approach! Then how do we speak or write without explicitly studying or knowing any of these rules? It's because the brain has heard, read, and been trained, and has developed some sort of approximation strategy. More experience means more data, and we get better without explicitly knowing all the rules. This is a very crude analogy for supervised learning, and one shouldn't infer anything about human brain function from it. Consider ML as naive neuroscience, just as genetic algorithms are naive biology! 🙂

So what's MLOps then?

If you've come this far, you will have understood that we need tons of good data plus continuous monitoring, continuous training, and continuous experimentation, in short, continuous operations, to make ML work. ML is empirical, iterative, and data centric by nature. There is no fixed forward function that you know upfront how to program! You don't know that function, so you must derive approximation functions using forward- and back-prop techniques, and for that you need to train (fit) the neural network model with data and labels. But the world (data, labels, truths) evolves, so for the function to stay relevant, the entire cycle of operations (data collection, mapping, labelling, feature engineering, training, tuning, and so on) needs to keep churning along. This is a necessity for ML projects! Learning and adapting continuously is the simple mantra behind successful Deep Learning and ML projects. This operational cycle is called "MLOps". Without this optimizing cycle, ML projects and ideas lose their relevance. MLOps is not a single person's job or a single team's job. It shouldn't be used as a job title, because it is not a job function, role, or technology per se. It is a collaborative, iterative, empirical, and data centric methodology for Machine Learning model development and delivery.
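Expressed as code, the operational cycle reads something like the sketch below. Every function here is a hypothetical placeholder standing in for real tooling (a feature store, a training pipeline, a monitoring stack), not any actual API; trivial stubs are included only so the sketch runs:

```python
# Hypothetical placeholders for the stages of the MLOps cycle (not real APIs).
def collect_new_data():        return [[0.1], [0.9]]       # data collection / ingestion
def label(data):               return [0, 1]               # labelling ("the labor")
def engineer_features(data):   return data                 # feature engineering
def train(features, labels):   return {"weights": [1.0]}   # training / tuning
def evaluate(model):           return 0.95                 # offline metrics
def deploy(model):             print("deployed new model")
def monitor(model):            print("watching drift, outliers, live metrics")

# The never-ending MLOps cycle: the world evolves, so the loop must keep churning.
for iteration in range(3):     # in production this effectively runs forever
    data = collect_new_data()
    labels = label(data)
    features = engineer_features(data)
    model = train(features, labels)
    if evaluate(model) >= 0.9:  # assumed promotion threshold
        deploy(model)
    monitor(model)
```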
Stressing the significance of ML projects and MLOps

To understand the significance of ML projects, and therefore of MLOps, we'll take a slight detour. Earlier we talked about two fundamental approaches: programming functions and learning functions. Some questions arise here: Will the two fundamental programming paradigms coexist? Or will learning functions replace expert systems, symbolic representations, and traditional Software development in general? Proponents of ML have argued the need for more people who get computers to do things by showing them; large corporations like Google are now training such people through programs like the Google Brain Residency, and many aspiring engineers want to work on ML. But both approaches are going to coexist in the future. Even within AI, symbolic and non-symbolic representations will coexist. This coexistence is necessary because:

- There are certain areas where the classic approach shines.
- There are other areas where classic approaches fall short, especially unstructured ones like vision, NLP, translation, object detection, and image classification, where Human Level Performance outshines them. ML will fill this gap.
- There are cases where we still don't know, or have not yet developed, the function to program. We need to learn those functions from existing data and labels.
- There might be yet other use cases where ML approaches are used to start with, and the resulting networks, their connections, and weights are then analyzed and studied to deduce a generic function.

In fact, seen from another angle, the feature engineering and feature extraction work that is given so much importance in supervised learning can be thought of as a step toward designing an expert or classic system: a data scientist or domain expert studies and analyzes the data to extract the features they think will have a significant impact on the model's results.

It should be clear by now why MLOps is called a "data centric" methodology. What makes MLOps a necessity is the empirical, iterative, and data centric nature of supervised machine learning, whose relevance must be sustained. This makes MLOps much more intense than the typical DevOps cycle we are used to. Both Software and ML development and release workflows are essential for revolutionizing the grand Software production line. Efficient DevOps and MLOps practices and tooling will be key to climbing the summit of the Digital Revolution. This is true for all industries and all domains.

What if there is no proper MLOps?

By now, you may already understand the issues that arise from not having a production line workflow and tooling for ML projects, and for software projects in general. There are many facets to this, but we'll briefly touch on the most important ones.

Lack of proper experiment tracking will lead to chaos for data scientists and ML engineers, and is a recipe for underperforming models in production. As we've repeated throughout the article, supervised ML is a highly iterative, data intensive, highly empirical process. If there isn't enough tooling and process to track and analyze these experiments during development and operations, it simply isn't going to work.

Model irrelevance is the direct result of not having an end-to-end, streamlined MLOps workflow. For ML projects, the most important mantra to remember is that the real development starts with the first deployment. When the model starts to see real-world data, it's going to be a different story altogether. If you don't have a process to monitor model performance with respect to input and output metrics, measure drift, and detect outliers, or if you have no way to collect real data, ingest it back into the workflow, and retrain the model, the model is not going to stay relevant for long. This will have a direct impact on the business if it relies on the model for core operations. So MLOps shouldn't be an afterthought; it should be set up right alongside the first lines of code you write or the first data you collect.

Cost effectiveness should be a top priority for any ML project. After all, the primary aim of any ML project is to meet or even exceed Human Level Performance (HLP). This is true for use cases that humans were typically good at, and also for tasks where humans typically relied on computers. The main fuel here is data, the mapping, and the features that can be engineered out of the data. The cost impact of these resources and processes is a very important question: for that 0.1% improvement, if it is going to drain the pocket by an additional 10%, does it make any sense? These could be development or operational costs, such as infrastructure scaling, data storage, data transformation, training compute, inference, and specialized hardware cycles.
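Before moving on, here is what the simplest version of the drift monitoring mentioned under "Model irrelevance" might look like for a single feature. The two-sample Kolmogorov-Smirnov test and the 0.05 threshold are common conventions, used here as assumptions rather than universal rules:

```python
from scipy.stats import ks_2samp

def feature_drifted(train_values, live_values, alpha=0.05):
    """Return True if the live distribution differs significantly from training."""
    statistic, p_value = ks_2samp(train_values, live_values)
    return p_value < alpha  # small p-value: distributions likely differ

# Example: the live feature has shifted upward relative to the training data.
train = [0.10, 0.20, 0.15, 0.22, 0.18, 0.12, 0.19, 0.21]
live = [0.55, 0.60, 0.58, 0.62, 0.57, 0.61, 0.59, 0.63]
if feature_drifted(train, live):
    print("Drift detected: trigger investigation / retraining")
```

A real monitoring stack would run such checks per feature and per output metric, continuously, and feed alerts back into the retraining workflow.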
Without MLOps processes, tooling, and workflows, there is no way to measure and quantify these cost metrics and act on them iteratively. Without MLOps, there is no way to identify whether an ML idea is worth pursuing, or to quantify its cost effectiveness. Implementing MLOps will certainly cost an organization in tooling, infrastructure, and so on, but the long-term benefits and savings it brings far outweigh that cost. Over time, organizations should develop, unify, and enforce standard MLOps tools and practices across their ML projects and teams. This is key to avoiding disasters down the line.

Without continuous development and operations, ML projects lose their relevance; this is far more critical for ML projects than for conventional software, due to their nature. MLOps tooling, workflows, processes, pipelines, and so on should be set up right alongside the first experiments on a project idea. Common MLOps guidelines, infrastructure, and tooling environments across multiple projects in an organization are also critical for streamlining and cost reduction. Think of supervised deep learning as one big Continuous Experiment, a lab that runs forever. MLOps is the only way to tame this highly iterative, empirical, data intensive genre of Software development.

Every industrial revolution started with new discoveries and paradigm shifts in production, and then reached its summit through advancements in those production lines. We see the grand DevOps space, with MLOps as an integral part of it, as the cornerstone of production line optimization that will take the current Digital phase to its pinnacle. In the next set of blogs, we'll delve deeper into what needs to be done differently: pipelines, lineage, provenance, experiment tracking, benchmarking, continuous integration / continuous deployment / continuous monitoring / continuous training, responsible data handling, how tools can help streamline the process, how much automation is good, and much more. So stay tuned…
While computers have been around in some form for much longer than many of us would think, they haven't always worked the way we have come to be familiar with: a screen with a cursor, buttons, icons, text boxes and search bars, clickable links, and more. Early computers relied on punch cards that dictated commands through a coded, hole-punched card. Later came DOS command prompts, which allowed users to type in commands. Then came something revolutionary. The graphical user interface, or GUI, took the world by storm in the early 1980s, when computer manufacturers began to transition to more user-friendly graphical systems. Since then, graphical user interfaces have evolved a lot, but the basic technology has remained the same for nearly 40 years. Despite its enduring popularity, it has become clear that GUIs limit user productivity and creativity.

It's a tech-driven world, and we're just living in it

As technology continues to develop and evolve, so do the ways we interact with it. Users develop new habits based on the technology at their fingertips. Once, you had to read the newspaper or turn on the television to access the weather forecast. Then the spread of computers and the internet put the weather right on your home screen when you logged online. A few years later, your cellphone would display its weather widget on the lock screen, using your GPS location to keep it updated. Now, all you have to do is ask, and your voice-activated smart speaker will read you this week's weather forecast.

Even just two decades ago, it would have been practically inconceivable to hold a natural-feeling conversation with your computer. Science fiction has long played with the idea of talking computers and voice response, but it is only recently that we've seen progress in huge leaps and bounds towards that dream. Thanks to voice user interfaces (VUIs) and artificial intelligence development, we're seeing our interactions with technology start to change again.

The shift from GUI to VUI

In recent years, we've seen the popularity of voice user interfaces explode as smartphone developers created Siri, Cortana, and the Google voice assistant. Amazon got into the VUI space with its line of Alexa smart speakers, and others, like Google Home, followed suit. But why are voice-activated devices becoming so much more popular than the traditional visual interfaces many users have known their whole lives? The answer is simple: convenience. When you're learning a new recipe and covered in sticky dough, it's much easier to say "Alexa, what's the next step?" than to unlock your phone and scroll through a recipe. Why open an app and type in "weather forecast tomorrow in Los Angeles" when you could ask your phone, "Hey Siri, what's the weather looking like in LA tomorrow?"

Graphical user interfaces were, at the time, an obvious next step in moving user experiences towards more natural interaction. But we've reached their limits, and now we're stepping towards another form of interaction: conversation. As humans, we primarily communicate through language, and VUIs are ready to listen to us. When matched with the power of emotion AI, voice user interfaces can feel almost as natural as talking to another human being, and that is what sets them so far apart from clicking icons. For so long, we've been tethered to the constraints of GUIs. We can click and code and build for years, but we will only be able to wring out so much improvement.
We've reached the point where graphical interfaces are limiting our productivity and creativity, and it's time to change that. Developing graphical user interfaces is a labor-intensive task, and the end product is still limiting. Users are required to learn the specific steps to complete a task in a GUI space, which means hours are lost to training or acquiring the knowledge needed for a new task or program. We're forced to work around the constraints of the GUI, which slows our productivity as we navigate menu after menu of options. Our creativity is confined to what GUIs are capable of processing, and while that capability is still considerable, working with a GUI limits us.

Voice user interfaces, on the other hand, are far more intuitive. Rather than having to find the visual navigation options that match what they want to do, basically doing all the extra work themselves, users declare their intent, and the system organizes itself to respond by collecting data through an interactive dialog. VUIs are designed differently and require a different style of interaction, but, fortunately, humans tend to be pretty good at conversation. That's why asking Siri for directions or telling Alexa to turn up the lights in the living room feels much easier to us than clicking or typing in a GUI to get the same results.

Technology freeing us from GUI

As discussed above, smart speakers and voice assistants are just two of the technologies that are working around the constraints of GUIs. These devices can be found in many homes, offices, purses, and pockets; practically everyone has at least one device that uses a VUI. Apple is a company that is leading the charge when it comes to redesigning our interactions with technology. They were early adopters of touch technology for mobile devices, doing away with keys and cursors. Next, they introduced their groundbreaking voice assistant, Siri. Now, they're continuing to change the way we interact with our devices.

AirPods, the sometimes lauded, sometimes mocked, fully wireless earbuds introduced by Apple, are one such example of revolutionary user interface design. Voice commands can be used to interact with Siri and control your device, all without ever taking it out of your pocket. Another device pushing the limits of GUIs is the Apple Pencil, a wireless precision stylus designed to turn an iPad tablet into an unlimited-feeling sketch pad for drawing, scribbling, note-taking, designing, and more. This tool frees us from many of the constraints of a GUI, letting us craft content exactly the way we want.

There's no question that technology is rapidly evolving, and our habits with it. While VUI is the latest frontier, we look forward to seeing the way it grows, and what steps developers take as we expand the limits of VUI and even move past it.
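To make the contrast concrete, here is a toy, vendor-neutral sketch of intent-first interaction; the intent keyword and the city list are assumptions for illustration, not any real assistant's API:

```python
KNOWN_CITIES = {"los angeles", "san francisco"}  # assumed vocabulary for the sketch

def extract_city(utterance: str):
    """Hypothetical slot-filling helper: find a known city in the utterance."""
    for city in KNOWN_CITIES:
        if city in utterance.lower():
            return city.title()
    return None

def handle_utterance(utterance: str) -> str:
    # Intent first: the user states a goal; the system gathers missing details.
    if "weather" in utterance.lower():
        city = extract_city(utterance)
        if city is None:
            return "Sure - for which city?"   # collect data through dialog
        return f"Fetching the forecast for {city}..."
    return "Sorry, I didn't catch that."

print(handle_utterance("What's the weather looking like in Los Angeles tomorrow?"))
```

Contrast this with a GUI, where the user would have to locate the right app, menu, and field themselves.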
Enterprises are moving to cloud-based environments at an ever-increasing pace. Cloud infrastructure offers many benefits and gives organizations a choice of set-ups (public cloud, hybrid cloud, etc.). However, security remains a top concern among enterprises moving to cloud-based infrastructure. Cloud computing continues to transform the way organizations use, store, and share data, applications, and workloads. It has also introduced a host of new security threats and challenges. With so much data going into the cloud, and into public cloud services in particular, these resources become natural targets for bad actors.

While cloud technology offers many advantages compared to on-prem models (scalability, control, cost reduction, etc.), it's important to realize that cloud environments are vulnerable to both inside and outside attacks. Most cloud service providers offer airtight cloud storage security; it's how cloud storage services are used that presents a risk to many organizations. Recent research has revealed huge volumes of publicly accessible storage and unencrypted data, and an increasing number of data breaches attributable to compromised credentials. In this article we'll discuss cloud storage security best practices.

Cloud storage security

Cloud security, also referred to as cloud computing security, is designed to protect cloud environments from unauthorized use and access, distributed denial of service (DDoS) attacks, hackers, malware, and other risks. To accomplish this, cloud security uses strategy, policies, processes, best practices, and technology. Cloud data security typically involves a number of tools, technologies, and approaches. A major advantage of the cloud is that many security elements are already built into systems. This typically includes strong encryption at rest and in motion. It may also involve:

- Geo-fencing. The use of IP addresses and other geolocation data to create a geographic boundary and identify suspicious activity.
- Policy-based lifecycle retention. Systems use data classification policies to manage and automate how data is stored, retained, archived and deleted.
- Data-aware filtering. This function allows organizations to watch for specific conditions and events, and to see who has accessed information and when they accessed it. It can be tied to role-based authorizations and privileges.
- Detailed logs and full user/workload audit trail reporting. The ability to peer into logs and audit workloads can provide insight into security concerns and vulnerability risks.
- Backup and recovery functions. These essential capabilities allow an organization to navigate an outage, but also to deal with security risks such as ransomware attacks and maliciously deleted data. Robust cloud-based disaster recovery solutions lead to availability across all conditions.

In March 2018, Gartner predicted that "through 2022, at least 95 percent of cloud security failures will be the customer's fault". Although not specifically focusing on cloud storage security, there is little doubt Gartner had this in mind when suggesting businesses should develop a strategy that "includes guidance on what data can be placed into which clouds under what circumstances". Cloud security is tight, but it's not infallible. Cybercriminals can get into those files, whether by guessing security questions or bypassing passwords. But the bigger risk with cloud storage is privacy. Even if data isn't stolen or published, it can still be viewed.
Governments can legally request information stored in the cloud, and it's up to the cloud services provider to deny access. Tens of thousands of requests for user data are sent to Google, Microsoft, and other businesses each year by government agencies, and a large percentage of the time these companies hand over at least some kind of data, even if it's not the content in full.

Cloud storage security best practices

Before committing to a cloud-based storage architecture, discuss the physical security features your provider has implemented. Ask questions that detail how hardened their access policies are for getting onsite. Keep in mind, securing your data is a partnership between you and the provider. Typical security measures such as firewalls, VLANs, and multi-factor authentication should be implemented.

First, understand the flow of data for each application. Once you understand the correlation of data to application, you can implement several key policies that will safeguard your data. Role-based access is a key step in securing your data and environment: limiting users to only the applications and data essential to their job function in essence limits the reach of a rogue employee. Access management generally requires three capabilities: the ability to identify and authenticate users, the ability to assign users access rights, and the ability to create and enforce access control policies for resources.

It's impossible to enforce cloud storage security best practices manually when you may have millions of assets deployed in the cloud. So an ideal solution is to implement a cloud management platform with automation capabilities that can monitor compliance with governance policies and alert system administrators to policy violations, or take an administrator-defined action to prevent a breach of cloud storage security. To govern your cloud with automation, all you need to do is define the policies users must comply with and the actions you require the cloud management platform to take if a policy violation occurs. The solution then monitors cloud activity around the clock, alerting you to activities that may compromise your business's cloud storage security. Example policies could include:

- If any CloudTrail S3 bucket is publicly accessible, restrict access and send an email notification.
- If any S3 bucket with the tag "PII" is unencrypted, execute a function to encrypt the bucket.
- If an IAM user's cloud access key has not been rotated in 90 days, send an email notification.
- If any privileged IAM user has MFA disabled, execute a function to revoke access.

Remember: most cloud service providers offer airtight cloud storage security; it's how those services are used that presents a risk. If you have any questions about how we can help you optimize your cloud costs, performance, and security, contact us today.
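As a minimal illustration of the kind of automated policy check described above, the sketch below uses boto3 (the AWS SDK for Python) to flag S3 buckets with no default encryption. The remediation step is left as a comment, and in practice this logic would live inside your cloud management platform rather than a standalone script:

```python
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    try:
        s3.get_bucket_encryption(Bucket=name)  # raises if no default encryption is set
    except ClientError as err:
        if err.response["Error"]["Code"] == "ServerSideEncryptionConfigurationNotFoundError":
            print(f"Policy violation: bucket {name} has no default encryption")
            # here an automated action would encrypt the bucket or notify an administrator
```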
Researchers in the UK recently identified 12 small asteroids close enough to Earth to be used in mining operations that could begin as early as 2021. The research was part of a larger effort by both private and public institutions to learn more about the potential for tapping into asteroids that could contain large deposits of valuable resources, including platinum, iron-nickel ore or gold. Even if some of the resources were too expensive to carry back to Earth, certain minerals could be tapped to help refuel spacecraft already in orbit.

"If material or volatiles extracted from an asteroid could be used in space to feed an engine or to save launched mass from Earth — water for life support, for radiation shielding — the costs of space operations may drop considerably," said Daniel Garcia Yarnoz, PhD Researcher at the Advanced Space Concepts Laboratory at the University of Strathclyde and the lead researcher on the paper.

Of course, mining for those precious and industrial metals far away from Earth comes with several challenges. One of the technical difficulties is to find "easily retrievable objects," or EROs, that can be transported to Earth by changing their velocity by less than 500 meters per second. The EROs also must be maneuvered into an accessible orbit, meaning an L1 or L2 Lagrangian point: an area in space where the gravity of the Earth and the sun balance out. The researchers at the University of Strathclyde scoured a list of about 9,000 near-Earth objects and identified 12 that fit the criteria for possible mining.

One of the asteroids, 2006 RH120, could be sent into orbit by changing its velocity by as little as 58 meters per second, according to the scientists, who said it could be done as soon as February of 2021. If it were set into motion then, the researchers calculated it could reach its destination in five years. Since that mission would be on a smaller scale and using today's technology, that is a realistic time frame, said Leslie Gertsch, senior research investigator and deputy director of the Rock Mechanics and Explosives Research Center at the Missouri University of Science and Technology. "Retrieving an asteroid of 2-5 meters diameter is certainly realistic in that time frame, within the constraints laid out in the Yarnoz et al. paper," Gertsch told TechNewsWorld. "Mining enough asteroids of that size to be economically viable is another thing altogether."

NASA Wants In, Too

The research was released around the same time NASA announced plans for a mission designed to help scientists learn more about the resources and experience needed to mine asteroids. The OSIRIS-REx spacecraft is set to launch in September of 2016 and is expected to reach the asteroid Bennu in October of 2018. Once there, it will scout the asteroid, looking for clues about its formation and the path of its orbit, to better determine whether mining it could pose any threat to Earth. OSIRIS-REx is then scheduled to return with samples from Bennu that will provide scientists with more clues about the possibilities for mining asteroids. NASA's effort, together with research currently under way, will aid scientists and potential investors in understanding both the challenges and possibilities associated with asteroid mining, said Gertsch. "The biggest hurdles from the commercial perspective are poorly known costs and poorly known markets; both aspects must be well understood to make a viable case to investors," she told TechNewsWorld.
"Technical questions center on 'How much will it cost?' rather than 'How can it be done?' In the beginning, it will be very expensive and inefficient. Costs will go down and efficiency will go up as experience is gained."

The overall lessons about space exploration that could come from studying asteroids could also be a catalyst for building interest and public funding in asteroid mining, said Garcia Yarnoz. "Some of the main limitations can also be the driving forces or part of the solution," he told TechNewsWorld. "The cost per kilogram in orbit — of the order of US$10,000 — and the limitation in volume on a rocket fairing [are] pushing the industry either towards more efficient launch systems, smart deployable structures, and/or hopefully in the near future, in-situ resource utilization," Yarnoz explained.

"The synergies between other asteroid manipulation endeavors may also aid the growth in the industry," he continued. "The current threat posed by asteroids can spark international efforts to detect, catalog, monitor and characterize the asteroid family of NEOs," Yarnoz observed. "Several deflection techniques, such as laser ablation, have additional applications for opportunistic science or material extraction. Their development could kill two birds with one stone."
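The delta-v figures quoted above can be turned into a rough feel for mission scale with the Tsiolkovsky rocket equation. This back-of-the-envelope sketch is not from the Strathclyde paper; the asteroid mass and thruster specific impulse are assumed values:

```python
# Propellant needed to give a small asteroid a ~58 m/s delta-v (illustrative only).
import math

delta_v = 58.0          # m/s, the figure quoted for 2006 RH120
isp = 3000.0            # s, assumed specific impulse of an efficient ion thruster
g0 = 9.81               # m/s^2, standard gravity
asteroid_mass = 50_000  # kg, assumed mass for a body a few meters across

mass_ratio = math.exp(delta_v / (isp * g0))   # initial / final mass, rocket equation
propellant = asteroid_mass * (mass_ratio - 1)
print(f"~{propellant:.0f} kg of propellant")  # on the order of a hundred kilograms
```

Under these assumptions, nudging such a small body by ~58 m/s takes only on the order of a hundred kilograms of ion-thruster propellant, which is why these "easily retrievable objects" are so interesting.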
If you work in medicine, you know that HIPAA is one of the most important pieces of legislation to follow. Failing to maintain HIPAA security compliance can result in devastating consequences. It can cost practices thousands of dollars in litigation and forever damage their reputation. Part of staying compliant with the law is healthcare cybersecurity: keeping information about patients' health and finances as secure as possible. By doing so, patients can trust their doctors, and practices can focus on caring for people. Keep reading below to learn why cybersecurity is so important for providers, and how important HIPAA is.

Protecting Patient Information Comes in Many Forms

When it comes to protecting patient information, healthcare leaders need to take several steps. In the olden days, they could keep patient records under lock and key in a drawer. More information is now kept on servers and hard drives than in physical records. Since information is digital now, there are more ways for hackers and thieves to access it. Hackers are always looking for new ways to exploit vulnerabilities in popular systems. Sometimes that involves finding bugs in code or tricking people. It is up to healthcare leaders and security experts to predict the ways hackers may try to steal information. They need to constantly develop new ways to make sure thieves aren't able to access private information. It's stressful, but it's important to stay in compliance with HIPAA regulations.

HIPAA Compliance Requires More Than Cybersecurity

Staying compliant with HIPAA is hard. Healthcare practices need to do more than put passwords on their computers. They need to invest in security programs installed across their systems. They also need to make sure staff understands how to protect patient information. This involves testing staff members and making sure they don't fall for phishing attempts or other kinds of hacks. It also means restarting systems so they get the latest updates and protections against vulnerabilities. HIPAA compliance is ongoing and active: you can fall out of compliance in a day if you're not careful. To stay compliant, stay vigilant against threats. If you take security seriously and take steps to protect your patients' information, you will be okay. And if the worst happens, you will be able to prove that you tried your best and did everything you could.

Healthcare Cybersecurity is Fundamental to Any Practice

HIPAA is not just another law that practices are expected to arbitrarily follow. Instead, it is a way to make sure that people can trust the ones who provide their care, which in turn enhances their treatment. If patients do not trust their doctors, they may not give them all the information they need. And without that information, doctors will not be able to give people the care they need. When you stay in compliance with HIPAA, patients know that you are doing everything in your power to help them. They will know they can provide delicate and sensitive information, confident that it will stay private. And so you will be able to fully treat them and make sure they get healthy. Staying in compliance with HIPAA helps you find more success in your treatments, which reflects well on your practice. Just by following HIPAA standards, you can help improve your brand and grow your business.

Cybersecurity and HIPAA Compliance Are the Same

Cybersecurity is a complicated topic, and it can be hard to fully grasp the intricate details that go into it.
Doctors simply aren't trained to spot man-in-the-middle attacks or identify strange network activity. They certainly aren't taught how to handle zero-day exploits either. Luckily, you don't need a degree in cybersecurity to keep information safe. You just need to follow the guidance of people who do study it. Their advice usually boils down to some simple principles, and most of them focus on trust. Never give information to a person or a website you don't fully trust. You should also never click on suspicious links in an email, and make sure to activate two-factor authentication. Also, you should make sure your staff is aware of common cybersecurity practices. You can achieve this through HIPAA compliance training. You can also sign your staff up for online HIPAA training, so they can finish it on their own time and it doesn't interfere with their schedules.

The HITECH Act Made Practices High-Tech

The HITECH Act was passed to achieve goals similar to HIPAA's. The legislation encourages healthcare providers to adopt digital systems for several purposes, for example, staying in constant communication with patients. However, by adopting digital systems, providers can also open themselves up to new HIPAA compliance risks. So the act includes a section that provides grants and loan funding to help providers. It also describes ways providers can improve privacy and improve their compliance with other laws.

Different Practices Require Different Kinds of Security

It doesn't matter what kind of practice you run or what kind of medicine you practice: patient information must be protected. Some kinds of practices may want to focus on protecting information more than others, though. For example, many patients may not worry too much about information from annual checkups. Yet mental health providers speak with patients on an intimate level. These kinds of providers usually uncover information that patients do not reveal during a routine checkup. And so, they may want to protect patient records more than other kinds of practices. By doing so, they establish a deeper level of trust with patients, which is conducive to stronger relationships. Patients trust that their doctors will make sure information is never revealed to the public.

Stay Safe With HIPAA Compliance Training

Healthcare cybersecurity involves many moving parts. Organizations need to train staff on how to keep their systems safe and how to handle patient information. It also requires leaders to stay connected to the tech world, routinely updating their systems so they stay safe from hackers. It can be complicated, and trying to maintain HIPAA compliance can consume all the time leaders have in a day. It's best to reach out to experts for help, so they can focus on helping people instead. And for that, we're here. Just reach out to us, and we will make sure your practice is protected from hackers and your patients' information is safe.
Cybercrime is typically driven by three main factors:

- Criminal profit incentives ($, £, €, bounty, rewards, fame, etc.)
- Malice or political incentives
- Geopolitics or espionage opportunities.

And to achieve these aims, cybercriminals undertake a range of different scams and attacks on UK enterprises. So what are the typical attacks that form the threat landscape for UK businesses? Here I'll assess some of the most common forms of cyberattack that you should be alert to and protect your business from as we enter 2017.

- Ransomware

Ransomware is a relatively new type of malware which prevents or limits users from using their system. Ransomware attacks are primarily carried out for money – it's called ransomware because it effectively holds your computer hostage until you pay the attacker a certain amount of money. You usually have to make the payment through specific online payment platforms and within a limited time period. Once you make the payment, you are again free to use your own system or to get your data back. SMEs (as well as big corporates) are more and more often being specifically targeted by ransomware such as CryptoLocker, CoinVault or CTB-Locker.

There are several ways it can infect your system. Most commonly it is downloaded by users, usually through visiting a compromised website. Ransomware can also be downloaded in conjunction with another file – either dropped into your system by other malware or sent as an attachment in a spam email, for example. The impact of these attacks can be dramatic and crippling, as this malware will encrypt all your data, making everything completely unusable unless you have the key. Paying the ransom to the hacker is supposed to be the only way to solve the problem, and is often seen as a lesser price to pay than the cost of recovering your systems by other means. However, as with ransom demands in the movies, there is no guarantee.
- Denial-of-service attacks

Denial-of-service attacks give criminals another way to target individual organisations. By overloading critical systems, such as websites or email, with Internet traffic as a way of blocking access, denial-of-service attacks can wreak financial havoc and disrupt normal operations. Distributed denial-of-service (DDoS) attacks are not a new development, but sadly they are growing in intensity and frequency. We had a live example quite recently with the 1Tbps+ DDoS attack faced by DNS provider Dyn, which was likely the largest ever seen. In this example, attackers used the Mirai IoT botnet, composed of compromised CCTV cameras, among other devices. Dyn's official report on the incident said it had seen traffic from "tens of millions of IP addresses."

In 2017, we will see an increase in the use of DDoS attacks as a smokescreen to distract IT teams while other incursions, such as ransomware, infiltrate networks to steal or encrypt sensitive data. My prediction is that ransom demands associated with DDoS attacks will increase exponentially in 2017, fuelled by the increased automation of DDoS attacks and the ability to buy them off the shelf. The 'Lizard Squad' are one example of a group of hackers who sell DDoS attacks-as-a-service for as little as $6 a month. To protect themselves, companies should deploy a combination of on-premises and cloud-based solutions to handle attacks of varying types and sizes – effectively a multi-layered network security approach.

- Healthcare Threats

The healthcare industry is going through a major evolution as patient medical records go online and medical professionals realise the benefits of advancements in smart medical devices. Patient medical records, now held online, are a prime target for hackers due to the breadth of sensitive information they contain. According to a poll by Health IT News and HIMSS, 75% of hospitals surveyed have been hit by a ransomware attack over the past year. With hospitals and medical facilities still adapting to the recent digitalisation of patient medical records, hackers are capitalising on and exploiting the many vulnerabilities in these organisations' security layers. Breaches within the healthcare industry will likely continue in 2017 until the industry is able to get a better grasp on the mass of digital patient data now under its control.

- Mobile Malware

One of the key contributors to the threat from mobile malware is the proliferation of applications that conduct real business and access sensitive, confidential information. Typical users may have banking, credit card, hotel, airline and corporate applications installed on their mobile devices. This access is secured, at minimum, with username and password controls. Cybercriminals are practical actors; they follow the money. They are turning their focus and attention to the mobile platform because of the growth in mobile devices, coupled with the opportunity to harvest a wealth of information from each device. Unlike work desktops and laptops, which typically contain only job-related information, mobile devices often combine work and personal information and applications.

- Advanced Persistent Threats – highly targeted attacks

Targeted attacks have evolved from early novice intrusion attempts to become an essential tool in the cyber-espionage field. Industrial control systems (ICS) are prime targets for attackers whose motives for executing these attacks are typically a matter of national security. In view of the growing sophistication of these attacks, good IT security is essential and broad cybersecurity practices should be the norm. Well-funded state operations are not the only threat. Patriotic hackers (the self-titled 'hacktivists'), criminal extortionists, data thieves and other attackers may also use similar techniques – but with fewer resources and less sophistication.

In 2017, I believe email-based attacks will continue much as before, and web-based attacks will grow increasingly sophisticated. Espionage-based attacks will make greater use of exploit kits, which bundle together multiple exploits rather than relying on just one attack. Exploit kits have been used in e-crime for many years, but cyber-espionage attackers have now adopted them too.

Protecting your business

There are a number of steps you should take to help ensure your organisation can remain secure against these types of attacks.
Here are the top 10 practical information security measures that should be on your security agenda:

- Regularly review the personal data you hold and encrypt, encrypt, encrypt
- Build a managed security ecosystem around you
- Create access management policies
- Adopt a patch management and anti-malware approach
- Back up and minimise your data
- Review logs regularly
- Stay informed of the latest vulnerabilities
- Train your staff
- Understand your cloud service provider's security model
- Choose service providers that are ISO 27001 or Cyber Essentials accredited
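As one concrete illustration, part of the "review logs regularly" measure can be automated. The sketch below flags IP addresses with repeated failed SSH logins; the log path and the alert threshold are assumptions you would adapt to your own environment:

```python
import re
from collections import Counter

FAILED = re.compile(r"Failed password .* from (\d+\.\d+\.\d+\.\d+)")
counts = Counter()

with open("/var/log/auth.log") as log:       # assumed log location (Debian/Ubuntu)
    for line in log:
        match = FAILED.search(line)
        if match:
            counts[match.group(1)] += 1      # tally failures per source IP

for ip, n in counts.most_common():
    if n >= 10:                              # assumed alert threshold
        print(f"Possible brute-force attempt: {ip} ({n} failures)")
```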
More than 400,000 computers worldwide have been infected with WannaCry ransomware since the beginning of the devastating attack on May 12th, 2017. WannaCry has compromised standalone and networked Windows computers, at home and in the enterprise. The initial attack was made possible by EternalBlue, the exploit leaked by the Shadow Brokers, an anonymous group who claim they will expose more zero-day vulnerabilities soon. Around the same time experts were trying to handle the WannaCry attack, they detected another piece of malware in the same family as WannaCry: Adylkuzz.

What is Adylkuzz?

Silently installing itself in the background of your computer, Adylkuzz is a virus that runs software which mines Monero, a cryptocurrency similar to Bitcoin. It uses the EternalBlue exploit and the DoublePulsar backdoor to attack systems, and then blocks Server Message Block (SMB) networking to prevent other malware from attacking via that same vulnerability. Surprisingly, WannaCry could have had an even larger impact had Adylkuzz not already prevented other malware from exploiting the SMB vulnerability.

Discovery of Adylkuzz

While exploring the impact of WannaCry, a few security researchers exposed their lab machines to the EternalBlue vulnerability to identify exactly how WannaCry infects systems; instead, they discovered new malware called Adylkuzz, which was even more prevalent than WannaCry. The researchers repeated the operation a few times, exposing other machines with the same vulnerability to the web. Their machines ended up enrolled in an Adylkuzz mining botnet.

How does Adylkuzz infiltrate your network?

The Adylkuzz attack is launched from several virtual private servers that scan for a point of entry on TCP port 445. After successful exploitation using EternalBlue, machines are infected with DoublePulsar. DoublePulsar opens a backdoor for the download and installation of Adylkuzz from another host. After installation, Adylkuzz blocks SMB communication to avoid further infection. Adylkuzz then identifies the victim's public IP address and downloads the mining instructions to the computer. At any given instant, there are multiple Adylkuzz command-and-control servers hosting the cryptominer binaries and mining instructions.

Where do cryptocurrencies fit into the mix?

Over the last few years, cryptocurrencies have been gaining traction, and the cryptocurrency market has grown considerably. Most cryptocurrencies allow funds to be transferred directly to an address (a wallet), and many people use this feature for mundane purposes, such as making international transactions. But cryptocurrency also has nefarious uses, for example among malware operators and black-market traders. Mining is the only way to generate cryptocurrencies like Monero, but it's a slow process that requires a considerable amount of processing power. Cryptocurrency-mining malware like Adylkuzz runs hidden mining processes on infected machines to generate cryptocurrency without draining any of the hacker's own resources. Adylkuzz significantly slows down infected computers while downloading and executing mining instructions or performing mining operations. Monero, the cryptocurrency mined by Adylkuzz, gained popularity after AlphaBay, a major darknet market, began accepting it for payment. Even the hackers who developed WannaCry utilized cryptocurrency, requiring infected users to pay the ransom with Bitcoin.
Stay secure by patching your systems

Adylkuzz's attack began in parallel with WannaCry. Unfortunately, unlike WannaCry, Adylkuzz infections are difficult for end users to identify, and the hackers recently changed the address to which mined Monero is delivered. With attacks like WannaCry and Adylkuzz exploiting networks around the world, practicing the right security measures is essential for staying ahead.

One of the most effective ways to prevent the exploitation of known software vulnerabilities is patching. Security professionals are working hard on a daily basis to stop breaches, and IT companies are rushing to patch computers. Now is the time for leaders to start brainstorming ways to keep their organizations secure in the future. Experts suggest organizations should keep their systems up to date, allowing them to avoid known vulnerabilities as they crop up. Those who haven't implemented the SMB patch that Microsoft released earlier this year will find their PCs and servers remain vulnerable to Adylkuzz and other malware that implements this type of attack. Ransomware and viral cryptocurrency miners are disruptive and costly, and now that two major threats have employed them in their attack tools and used the same vulnerability, we expect others will follow soon.
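A quick way to gauge your exposure (illustrative only; the host list is a placeholder you would replace with your own inventory) is to test whether machines on your network still accept connections on TCP port 445, the port Adylkuzz scans for:

```python
import socket

HOSTS = ["192.168.1.10", "192.168.1.11"]   # assumed addresses; use your own inventory

for host in HOSTS:
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
        sock.settimeout(1.0)
        is_open = sock.connect_ex((host, 445)) == 0   # 0 means the port accepted
        status = "OPEN - patch or firewall this host" if is_open else "closed"
        print(f"{host}: port 445 {status}")
```

An open port 445 does not by itself mean a host is vulnerable, but unpatched machines exposing SMB are exactly what this family of attacks hunts for.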
While most consumers know about the risks associated with using the Internet and engaging in e-commerce, the majority still relies on inadequate measures to protect their privacy and identity, according to a new survey by Anonymizer, Inc. An overwhelming majority of respondents believed a firewall and anti-virus software offered adequate protection for their identity online. However, while those tools are important safety measures, they address a very different threat, Anonymizer's chief scientist Lance Cottrell wrote on the company's Privacy Blog.

Online identity theft is much more common than people realize, and consumers are overwhelmed with conflicting information about what they need to do to protect themselves as they surf the web, said Bill Unrue, president of Anonymizer. "Consumers need to realize that the steps they take to protect their computer system are not the same measures they need to safeguard their privacy and identity when they're online," he said. "Firewalls and anti-virus software simply aren't enough."

The survey also revealed consumers are increasingly aware their mobile devices are vulnerable to malicious cyber activity: only 28 percent believed their identity was secure on a mobile device. And 85 percent of respondents were aware they were being profiled by advertisers as they surfed the Internet, while 85 percent knew cyber crooks could be stalking them without their knowledge.
No Silver Bullet

While a well-designed system using two-factor authentication can be very secure, simple two-factor authentication is not secure against phishing, pharming, and man-in-the-middle attacks. That has led some people to question its usefulness. However, here again the term two-factor authentication conceals as much as it reveals.

- The most rudimentary form of a man-in-the-middle attack is to simply listen in on a transaction and harvest information. Although there are many ways to safeguard against this type of attack, such as using Secure Sockets Layer (SSL) to protect the transaction against eavesdroppers, it's still one of the most popular and potent forms of attack because most network traffic is not encrypted.
- A more sophisticated example of a man-in-the-middle attack is pharming: the process of redirecting the unsuspecting user to a phony web site. This strategy works by corrupting a DNS server somewhere on the Internet and substituting phony IP addresses for real ones. Because almost everyone uses the URL (such as http://www.mybank.com) rather than the IP address, the corrupted server redirects the unsuspecting customer to a criminal's web site, where information can be harvested, transactions spoofed, and all kinds of other nasty things can happen.
- Phishing, of course, is the process of tricking the victim into revealing identifying information, such as PINs or credit card numbers.

Some forms of two-factor authentication have features to protect against such attacks, including encrypting transactions and ID information, changing keys constantly, and handling authentication information only in a highly protected space inside the system. A number of vendors have systems that make such attacks much more difficult. TranSend, for example, has a product that uses a challenge-response system, with the party on each end of the transaction generating an encrypted key that the other party can decrypt and recognize. The responding party, such as a bank or merchant, uses a system built around the IBM 5000 series cryptography board to automatically generate keys in a highly controlled space. The initiating party, such as a customer, uses a protected hardware device to generate his or her key. Because the keys can change minute by minute or even faster, it's extremely difficult to mount an effective man-in-the-middle, phishing, or pharming attack against such a system. Similar systems are available from many vendors.

E*Trade employs a similar system from RSA Security; E*Trade customers can be issued devices (the size of a credit card) that produce a six-digit identification code with the press of a button. The customer attaches the code to his or her password. Since the codes change every minute, the chances of a successful man-in-the-middle attack are greatly reduced. "The great strength of SecurID is that the token code is changing every minute," says Karl Wirth, Director of Product Management for Authentication Solutions at RSA Security Inc. "It's only valid for a very short period of time. Assuming someone got your PIN and your token code, that would only be valid for a minute." One further variation, Wirth points out, is to hash the token code with the PIN to produce what's entered into the banking site. Even if someone is intercepting the communication, they don't get your PIN. However, even within this approach there are variations.
"The big discussion in the industry is whether the generation of the second factor is done in protected or unprotected space," says TranSend's Scott. "If you protect the space where the second factor is created and make it harder to duplicate and copy, and harder to discover, you're increasing its reliability." The IBM board is considered quite secure because it generates keys in its own space, which cannot be accessed by the host system's operating system. Using a hardware security module such as a protected smart card or USB fob and adding safeguards such as intrusion prevention and detection greatly increases security.