What You Will Learn

By the end of this chapter, you should know and be able to explain the following:

- The essentials of wireless LANs, including their benefits and risks
- The major threats to a wireless network
- The breadth and scope of possible attacks and exploits that are available to attackers

Being able to answer these key questions will allow you to understand the overall characteristics and importance of network security. By the time you finish this book, you will have a solid appreciation for network security, its issues, how it works, and why it is important.

"In the end, we will remember not the words of our enemies, but the silence of our friends." Martin Luther King Jr. (1929-1968)

When was the last time you went on vacation to get away from it all? Perhaps to some remote beach, or maybe a getaway to the country? Imagine that you walk out the patio door of your hotel room (an ocean view, of course) and admire the beauty of the sun setting on the ocean. The air is cool, so you decide to sit on the porch in your favorite lounge chair; the seagulls are playing, the waves are breaking in a rhythmic beat, and beep-beep-beep! Your pager begins to go off. Who could possibly be paging you while you are trying to relax and unplug? What emergency could be so grave that it would require you to be interrupted on this fantasy vacation? According to the message on the display, there seems to be a problem with the company firewall/VPN/Exchange server/<insert emergency here>. It looks pretty serious, so you conclude that you need to log in to your office network and take a look. It is a good thing that you chose a hotel with high-speed Internet access, and that you brought your wireless access point. The access point is plugged into the room's high-speed LAN port, so you can connect wirelessly and still enjoy the beautiful view. You cannot really avoid turning on the laptop that you were not planning to touch while you were on vacation; you are needed for an emergency.
So, here you are on the patio booting up your laptop. You see the "blinky-blinky" of the wireless NIC's status lights. All systems are go! You fire up Telnet and proceed to log in to the router/firewall and start snooping around to see what the problem could be. This should not take too long, you say to yourself. There is still plenty of time to enjoy the rest of the evening and perhaps have a nice dinner. An hour goes by and you have solved the problem. You are quite taken with yourself for being ingenious enough to diagnose and resolve the situation within a few tick-tocks. Screeeech! Stop the movie for a second. Unknowingly, the "vacationing uber tech" just caused his company to lose millions of dollars. How, you might ask, did this guy in the movie cause millions of dollars to be lost just by logging in to his company's router/firewall to fix a problem? It was not the act of telnetting to the router/firewall that caused the problem; it was the fact that he used a wireless connection. You see, the company that uber tech worked for (yes, past tense, because he no longer works for them as a result) is a multinational corporation that was about to announce the creation of a new widget capable of converting discarded pizza boxes into SDRAM memory chips. A competitor of this revolutionary company not only wanted to stop this announcement, but also wanted a copy of the plans for the widget so it could bring it to market first. It seems that a hacker employed by the competitor was paid to follow vacationing uber tech and, at a convenient moment, break into his hotel room and download the contents of his laptop to a portable storage device, in hopes of finding some proprietary information about the widget. Upon seeing uber tech boot up his laptop, complete with wireless NIC, the hacker realized that he had struck gold and decided to do some long-distance sniffing and hacking, courtesy of uber tech's unsecured wireless connection.
Long-distance sniffing and hacking: sounds like a script from "Mission Impossible," doesn't it? Too far-fetched to really happen? The truth is that this type of scenario occurs on a daily basis. Bad guys with wireless-enabled laptops steal information right out of the air with little effort. They use tools that are readily available on the Internet and can cause many problems for companies that do not take the time to understand the threats an unsecured wireless connection poses to their corporate network. This chapter covers several topics related to wireless networking security and helps you identify, understand, and prevent the types of intrusions to which wireless connections are vulnerable from the outside. This chapter focuses on the commercial wireless products that are available, not the home versions from Cisco subsidiaries such as Linksys. It is important to understand the differences; an article describing the Cisco Linksys acquisition carries a clear, related message: "Take, for example, Cisco's Aironet wireless products. The Aironet products are the result of Cisco's significant investment in industry-leading WLAN and networking technology. Cisco Aironet solutions offer premium value in security, range, management, performance, features, and total cost of ownership as part of a complete, complex network. Linksys' products, on the other hand, are developed using off-the-shelf silicon and software and focus on ease-of-use, price, and features that are important to consumers." As you can see from this example, the products are geared towards different markets with different needs.
AWS has just announced Amazon's SageMaker Ground Truth to help companies build machine learning data sets. This is a powerful new service for people who have access to a lot of data that has not been annotated consistently. In the past, people would have to label a massive body of images or frames in videos to train a model for computer vision. Ground Truth uses machine learning, in addition to humans, to automatically label a training data set. This is an example of a trend that has emerged in the past year: machine learning for machine learning. Machine learning data catalogs (MLDCs), probabilistic or fuzzy matching, automated annotation of training data, and synthetic data creation all use machine learning to generate or prepare data for subsequent machine learning downstream, often solving data scarcity or dispersion problems. This is all well and good until we consider that machine learning is itself based on inductive reasoning, and therefore on probability. Let's consider how this can play out in the real world: a healthcare provider wants to use computer vision to diagnose a rare condition. Because the data is sparse, an automated annotator is used to create more training data (more labeled images). The developer sets a propensity threshold of 90 percent, meaning that only records with a 90 percent chance of accurate classification are used as training data. Once the model has been trained and deployed, it is used for patients whose data is linked from multiple databases using fuzzy text matching. Entities from different data sets are matched with a 90 percent chance of being the same. Finally, the model flags images with a 90 percent or higher probability of a diagnosis of the disease. The problem is that data scientists and machine-learning experts traditionally focus only on this final propensity as a representation of the overall prediction accuracy. This worked well in a world in which data preparation was deductive and deterministic.
But if you introduce probabilities on top of probabilities, the final propensity score is no longer accurate. In the case above, there is an argument that the probability of an accurate diagnosis decreases from 90 percent to roughly 73 percent, because three independent 90 percent stages compound (0.9 × 0.9 × 0.9 ≈ 0.73). As the emphasis on explainability in AI increases, a new framework for analytical governance needs to be created that incorporates all the probabilities involved in the machine-learning process, from data creation to data preparation to model training and inference. Without it, erroneously inflated propensity values will misdiagnose patients, mistreat clients, and mislead companies and governments as they make critical decisions. Next week, my colleague Kjell Carlsson will be holding a deep-dive session entitled "Drive Business Value Today: A Practical Approach To AI" at Forrester's Data Strategy & Insights Forum in Orlando. Please join us next Tuesday and Wednesday, 4 and 5 December, to discuss this topic and learn best practices for turning data into actions that drive measurable business outcomes.
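The compounding effect described above is easy to verify. A minimal sketch, assuming the three stages (automated labeling, entity matching, diagnosis) are independent and each operates at a 90 percent confidence threshold:

```python
def compound_confidence(stage_probabilities):
    """Multiply per-stage probabilities to estimate end-to-end
    confidence. Assumes the stages' errors are independent."""
    result = 1.0
    for p in stage_probabilities:
        result *= p
    return result

# Labeling, record matching, and the final prediction, each at 90%:
overall = compound_confidence([0.9, 0.9, 0.9])
print(f"{overall:.0%}")  # 73% — not the 90% the final model reports
```

The 73 percent figure in the text is exactly this product; each additional probabilistic preparation step pushes the true end-to-end confidence further below the reported propensity.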
Firewall is a term widely used in the information security market, and certainly the most remembered asset within a security architecture. While the core concept has changed little over time, its scope has undergone major modifications. Even with so many new features in the perimeter security market, firewalls remain ever-present in the corporate world. Throughout this post, we will cover a bit of the history of the firewall, from the original need through its evolution over time, up to today's highly modern and complex solutions, ready for current security challenges. If you want to deepen your knowledge about firewalls, continue reading with the article Firewall: Concept and Terminology. What is a firewall? A firewall is, at its core, a concept applied in software, or a combination of software and hardware, that offers security features for the interconnection of networks, regulating all traffic passing through it according to previously established policies. In practice, a firewall is strategically positioned in the infrastructure so that traffic is funneled through it; from this position it can allow communication to continue, or block it if it presents any non-compliance or threat to the network. Firewalls are heavily used as a defense strategy in companies of the most varied types and segments, and are generally placed in a topology between public networks (the internet) and private networks (internal network segments). Getting to know a little history is a way to understand how challenges have been posed over time, and how the market and businesses have adapted, turning the firewall into an excellent business model for an increasingly interconnected world. Timeline: Firewall in the 80's Firewalls are not a new concept; they became especially popular with the spread of the TCP/IP protocol stack, due to the protocol's own nature.
Since the IP protocol makes networks with different purposes or domains (companies, universities, etc.) intercommunicate, leaving them without any control presents a potential risk of unauthorized access, data compromise, and other threats. Defending the perimeter, then, is nothing more than creating a barrier separating the public part of the interconnection offered by the internet, operated by large telecommunications companies and local providers, from the private network segments. In computer networks, information travels in packets from one point to another. Each packet is a unit that carries identification (header) and data (content), and is routed independently through the internet. The first firewall proposal, or packet filter, came in 1989 from Jeff Mogul of Digital Equipment Corp. (DEC), marking the first generation. Timeline: Firewall in the 90's At AT&T Bell Labs, Steve Bellovin and Bill Cheswick developed in 1991 the first concept of what would later be consolidated as stateful packet filtering, or simply the stateful firewall. This stage marked the second generation of firewalls. Shortly afterwards, the third generation of firewalls appeared with the commercialization of the DEC SEAL, which featured modern application proxy capabilities. The combination of packet filtering and proxying in a single solution led to the name hybrid firewall becoming more widely used in the market and academia. In 1994, Check Point launched Firewall-1, which was extremely important for the development and maturation of the security market, pioneering the GUI (Graphical User Interface) concept as well as other technologies directly related to security. In the second half of the 1990s, several parallel projects appeared, such as Squid (1996) and Snort (1998), whose main purpose was not commercialization but the development and maturation of solutions and concepts over time.
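The first-generation packet filter described above judges each packet in isolation against an ordered rule list, using only header fields. A minimal sketch of that idea (the rule set and field names are hypothetical, chosen for illustration):

```python
import ipaddress

# Ordered rules: (source network, destination port or None for any, action).
# First match wins; anything unmatched is denied, i.e. "default deny".
RULES = [
    ("10.0.0.0/8", 22, "ALLOW"),   # internal hosts may use SSH
    ("0.0.0.0/0", 80, "ALLOW"),    # anyone may reach the web server
    ("0.0.0.0/0", None, "DENY"),   # explicit default-deny rule
]

def filter_packet(src_ip: str, dst_port: int) -> str:
    """Stateless decision: each packet is evaluated with no memory
    of earlier packets, which is what distinguishes this first
    generation from the later stateful firewalls."""
    addr = ipaddress.ip_address(src_ip)
    for net, port, action in RULES:
        if addr in ipaddress.ip_network(net) and port in (None, dst_port):
            return action
    return "DENY"

print(filter_packet("10.1.2.3", 22))     # ALLOW (internal SSH)
print(filter_packet("203.0.113.9", 22))  # DENY (external SSH)
```

The stateful firewalls of the second generation improved on exactly this model by also tracking connection state, so that, for example, inbound packets are only accepted when they belong to a conversation an internal host initiated.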
These projects are still widely used to this day by commercial and free security solutions. At the same time, other companies emerged, and other security features were added to firewall solutions, making them increasingly hybrid. Features such as VPN, URL filtering, QoS, and the integration or incorporation of antivirus, WAF, and other solutions have allowed greater robustness in building secure environments for companies. Timeline: Firewall between 2000 and 2015 With the incorporation of complementary security features into firewalls, the term UTM (Unified Threat Management) appeared for the first time in 2004, coined by IDC. The term is essentially a better name for the evolution of firewalls over the years. With the popularization of the internet, many services and applications began to centralize their operation on the web. This move greatly increased the need to protect systems based on the HTTP protocol. In 2006, Web Application Firewalls (WAF) appeared both as standalone solutions and as a feature incorporated into UTMs. Although UTMs were prominent for bringing together various security functions in a single solution, they had a downside in performance, given the number of features competing for the same resources. In 2008, Palo Alto Networks brought to market the concept of the next-generation firewall (NGFW), addressing the performance problem presented by UTMs and adding an important feature: visibility and application-based controls. Then, in 2009, Gartner went on to formally define the concept of the next-generation firewall. Many vendors underwent technical and commercial reformulations to keep up with the trends of the coming years. Many other established product categories were also upgraded to the "next generation" label, most of them only commercially, as was the case with NGIPS.
The technologies behind firewall solutions have changed a great deal in the last few years, driven by the convergence of information and knowledge into the electronic world, and the internet was a great impetus for that to happen. In the coming years we will see major changes with the IoT (Internet of Things) and many other new challenges from mobile devices, which already have a significant presence in the corporate world. History is still being written. Did you already know the evolution of firewalls over time? Tell us about your experience reading this article and help us contribute to its construction. Improve your knowledge by reading the article Firewall: Learn the main differences between UTM and NGFW.
What Is SD WAN? A Software-Defined Wide Area Network (SD WAN) is a way of creating or configuring your WAN beyond the confines of specific hardware and network carriers. What Is SD WAN In Simple Terms? SD WAN (Software-Defined Wide Area Network) is a more efficient type of WAN used to create and manage long-distance networks. The key difference between SD WAN and a regular WAN is that it's "software-defined", which makes it far faster to configure policies across your entire network. It also improves your network's performance by making it easier to utilise multiple connections, instead of spending a fortune on private MPLS connections. This allows you to achieve higher performance from your network at a lower cost. What Is The Difference Between WAN and SD WAN? A WAN would typically take the form of an MPLS network, using the same provider for all connections. The WAN is normally created to overlay services within the same organisation/network. With an SD WAN setup, the network isn't reliant on a single carrier or type of connection, whereas a WAN built using MPLS typically is. How Does SD WAN Work? You can use multiple (and sometimes different) lower-cost internet-facing connections to create a flexible and more adaptable network architecture. Creating an SD WAN means you can utilise different types of internet connection, and even different providers, using what is available in any given location rather than being stuck with one form of connection or another. The SD WAN infrastructure (both the branch office router and the head office or data centre router) is capable of being 'software defined', meaning that it can be configured in many different ways, depending on the situation at any given site. Why Use An SD WAN? It's very flexible, more cost-effective than alternative WAN solutions, and the hardware can be reused multiple times in different locations should the need arise. What Are The Benefits Of SD WAN?
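"Software-defined" in the description above boils down to this: which link carries which traffic is a policy decision made in software over whatever connections a site happens to have. A toy sketch of per-application link selection (all link names, latency figures and policies are hypothetical):

```python
# Links a hypothetical branch site has available, with measured state.
LINKS = {
    "fibre": {"up": True, "latency_ms": 12, "cost_rank": 2},
    "fttc":  {"up": True, "latency_ms": 28, "cost_rank": 1},
    "4g":    {"up": True, "latency_ms": 55, "cost_rank": 0},
}

# Policy: how each application class scores a candidate link (lower is better).
POLICY = {
    "voice":  lambda link: link["latency_ms"],  # latency-sensitive traffic
    "backup": lambda link: link["cost_rank"],   # bulk traffic rides the cheapest link
}

def select_link(app_class: str) -> str:
    """Pick the best currently-up link for a traffic class.
    Re-running this as link state changes is the failover behaviour."""
    candidates = {name: l for name, l in LINKS.items() if l["up"]}
    score = POLICY[app_class]
    return min(candidates, key=lambda name: score(candidates[name]))

print(select_link("voice"))   # fibre
print(select_link("backup"))  # 4g
```

Changing a policy here is a one-line edit applied in software, which is the contrast with a traditional MPLS WAN, where steering traffic differently typically means reconfiguring carrier-managed circuits.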
You can use multiple connectivity types (all at the same time) such as Ethernet circuits, FTTP, FTTC, Fixed Wireless, and 4G/5G mobile SIMs to create branch office connectivity, with the flexibility of routing different networks and services to any given location within your SD WAN solution without hardware changes. What Are Some Of The Weaknesses Of SD WAN? In some ways its strength can be its weakness: if you are able to use any internet connection to create the SD WAN, you then have multiple points of contact for support issues on the different connections, rather than just one as with a more conventional WAN setup. This can be mitigated by using a single supplier for the whole solution who then manages all the providers and services, almost like a pseudo-carrier. SD WAN Vs MPLS MPLS and SD WAN provide a similar solution; however, when comparing the two, SD WAN is usually the clear winner. It costs less, it's more secure and it gives businesses far better network performance, hence why it has gained so much popularity in recent years. Can SD WAN Replace MPLS? Yes, SD WAN can be used as a direct replacement for MPLS as it provides a similar solution. It's also cheaper, safer and provides higher performance. Does SD WAN Improve Security? Many SD WAN solutions have enhanced built-in security, precisely because they use internet-facing connections. Because of this perceived security risk, most if not all solutions will have secure encrypted tunnels between the branch office and the head office or data centre. How Will My Applications Behave Using SD WAN? Your applications and services will behave just as they would on an MPLS or alternative WAN solution. How well they work will always depend on two things: - The quality and reliability of the internet connection at the head office / data centre. - The quality and reliability of the internet connection at your branch office. Which SD WAN Solution Is Best?
There are many SD WAN solutions available for businesses in the UK, and it's important to take your time finding the right one for you. For example, many businesses in the construction, hospitality, logistics and retail industries rely on our MultiConnect+ product, which guarantees no downtime owing to a failing connection, whereas other industries utilise our other SD WAN solutions. For help choosing the best SD WAN solution for your business, call us now on 03331 500 140 or click on the button below to have a specialist call you back shortly.
Different vulnerabilities affect devices in different ways. They often allow cybercriminals to infiltrate vulnerable systems to steal sensitive information or compromise devices. Recently, cybersecurity researcher Mathy Vanhoef uncovered a set of critical vulnerabilities, tracked as FragAttacks, that impact systems connected to Wi-Fi, exposing millions of Wi-Fi users to potential remote attacks. What is a FragAttack? Vanhoef explained that these vulnerabilities are a combination of fragmentation and aggregation attacks (FragAttacks). FragAttacks can be leveraged by any remote hacker within range of a victim's Wi-Fi network. It was found that most Wi-Fi devices are affected by several of the vulnerabilities, with every Wi-Fi device vulnerable to at least one flaw. The researcher discovered multiple vulnerabilities, including three design flaws, four implementation vulnerabilities, and five other critical flaws caused by widespread programming mistakes in Wi-Fi products. Vanhoef tested more than 75 Wi-Fi devices, including computers from Dell and Apple; mobile products from Huawei, Google, Samsung, and Apple; IoT devices from Xiaomi and Canon; and routers from Asus, Linksys, and D-Link. These bugs affect all Wi-Fi security protocols, from Wired Equivalent Privacy (WEP) through the latest Wi-Fi Protected Access 3 (WPA3). The detected vulnerabilities, with CVSS scores between 4.8 and 6.5, include: - CVE-2020-24588: Aggregation attack (accepting non-SPP A-MSDU frames). - CVE-2020-24587: Mixed key attack (reassembling fragments encrypted under different keys). - CVE-2020-24586: Fragment cache attack (not clearing fragments from memory when (re)connecting to a network). - CVE-2020-26145: Accepting plaintext broadcast fragments as full frames (in an encrypted network).
- CVE-2020-26144: Accepting plaintext A-MSDU frames that start with an RFC1042 header with EtherType EAPOL (in an encrypted network). - CVE-2020-26140: Accepting plaintext data frames in a protected network. - CVE-2020-26143: Accepting fragmented plaintext data frames in a protected network. - CVE-2020-26139: Forwarding EAPOL frames even though the sender is not yet authenticated (should only affect APs). - CVE-2020-26146: Reassembling encrypted fragments with non-consecutive packet numbers. - CVE-2020-26147: Reassembling mixed encrypted/plaintext fragments. - CVE-2020-26142: Processing fragmented frames as full frames. - CVE-2020-26141: Not verifying the TKIP MIC of fragmented frames. How Attackers Can Exploit these Flaws The researcher also provided a demo video showing how an adversary can abuse the vulnerabilities to intercept sensitive information, exploit insecure IoT devices remotely, and launch advanced cyberattacks. Vanhoef stated that many of the companies released mitigation measures to fix these vulnerabilities. Hence, it is highly recommended to update all your connected devices to thwart potential risks. “The biggest risk in practice is likely the ability to abuse the discovered flaws to attack devices in someone’s home network. For instance, many smart homes and internet-of-things devices are rarely updated, and Wi-Fi security is the last line of defense that prevents someone from attacking these devices. Unfortunately, due to the discovered vulnerabilities, this last line of defense can now be bypassed,” Vanhoef added. Related Story: How to Secure Your Home Wi-Fi Network
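The twelve CVEs above split into the three categories the researcher describes: design flaws in the 802.11 standard itself, implementation flaws that accept plaintext frames, and other widespread programming mistakes. The sketch below records that split as data so the counts can be checked; note that assigning individual CVEs to the latter two groups is an inference from the ordering of the article's list and its stated counts, not something the article spells out per CVE.

```python
from collections import Counter

# Category per CVE, following the order of the article's list.
FRAGATTACK_CVES = {
    "CVE-2020-24588": "design flaw",
    "CVE-2020-24587": "design flaw",
    "CVE-2020-24586": "design flaw",
    "CVE-2020-26145": "implementation flaw",
    "CVE-2020-26144": "implementation flaw",
    "CVE-2020-26140": "implementation flaw",
    "CVE-2020-26143": "implementation flaw",
    "CVE-2020-26139": "programming mistake",
    "CVE-2020-26146": "programming mistake",
    "CVE-2020-26147": "programming mistake",
    "CVE-2020-26142": "programming mistake",
    "CVE-2020-26141": "programming mistake",
}

counts = Counter(FRAGATTACK_CVES.values())
for category, n in counts.items():
    print(f"{category}: {n}")
# design flaw: 3, implementation flaw: 4, programming mistake: 5
```

Keeping the advisory as structured data like this is also how a patch-tracking script would typically cross-reference vendor bulletins against the affected CVE list.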
Opinion piece: How you can bolster cybersecurity in the IoT era We are entering into the era of the IoT. In fact, Gartner recently predicted that IoT technology will be present in 95% of electronic products for new product designs by 2020. The rapid gravitation towards the IoT has come at a cost, however, and the IoT has already earned itself a reputation for poor security. In the ferociously competitive technology sector, the urgency to be first to market with affordable IoT devices has resulted in many organisations overlooking even some of the most basic security principles in favour of fast development cycles. On top of this, the combination of overly rigorous cost controls and the drive towards user-friendliness leaves even less room for robust security measures. Many IoT devices also use extremely cheap processing units, equivalent to something you might buy back in the 1970s but much smaller. These kinds of devices are often either memory constrained or ‘input' constrained, allowing for simple functionality, but leaving little to no room for future updates or patches. With the lifespan of these devices often expected to exceed ten years, this creates a very serious problem. As security evolves and new threats become apparent, IoT devices can become a major security risk if they cannot be patched. Fortunately, this growing threat isn't going unnoticed. Technology groups such as I am the Cavalry are now driving and advising governments on what needs to be done to build more secure IoT solutions. The IoT Security Foundation is also driving to build standards and enlist companies to work together in order to improve the overall security of devices. Furthermore, the GSM Association (GSMA) has produced a set of guidelines around security best practices for IoT devices. For manufacturers, developers and users of IoT devices, associations such as the GSMA have highlighted several key areas where security must be improved. 
The following eight areas are prime examples of these: 1) Device authentication and identity: Proper and secure authentication with individual device identification allows a secure connection to be built between the devices themselves and the backend control systems. If every device has its own unique identity, organisations can quickly confirm that the device communicating is indeed the one it claims to be. This requires individual device identification based on solutions such as Public Key Infrastructure (PKI). 2) Physical security: Physical security is paramount. Integrating tamper-proofing measures into device components should be at the forefront of all developers' minds as it ensures they cannot be decoded. Additionally, ensuring device data related to authentication, identification codes and account information is erased if a device becomes compromised will help to prevent private data from being used maliciously. 3) Encryption: When utilising IoT solutions, organisations must ensure traffic flowing between devices and backend servers is properly encrypted. Ensuring that commands are encrypted and looking at command integrity via signing or a strong encoding is vital. IoT devices should also encrypt any sensitive user data collected as well for further data security. 4) Firmware updates: In the rush to get new IoT products to market, manufacturers sometimes build devices with no firmware update capability at all. Creating a consistent process that enables flexible firmware deployment over time allows for the creation of new products whilst ensuring important security fixes are distributed universally across existing product lines. 5) Secure coding: IoT developers must implement secure coding practices and apply them to the device as part of the software build process. Focusing on QA and vulnerability identification/remediation as part of the development lifecycle will streamline security efforts while helping to mitigate risk. 
6) Close backdoors: Building devices with a backdoor inside, whether for surveillance or law enforcement purposes, has become commonplace. However, this practice compromises the integrity and security of the end user. Manufacturers must ensure that no malicious code or backdoor is introduced and the device's UDID is not copied, monitored or captured. Doing so will guarantee that when the device registers online, the process is not captured or vulnerable to interception, surveillance or unlawful monitoring. 7) Network segmentation: If a network is partitioned into secure segments, then – should an IoT device become compromised – its segment can be isolated from the rest of the network. If a path to the Internet is required, then that could be granted, and if the device is somehow breached, the devices in that segment are the only ones impacted. The zone can be quarantined and remediation steps can be taken without incurring risk to other systems. 8) Understand the business need: Security teams can be so focused on reducing risk that they sometimes do not align well with business needs. A good security team will assess business requests around IoT and think ‘what is the challenge or problem that the business is trying to solve?'. Without this mindset, security teams are liable to propose a way to secure the IoT devices that may actually reduce the business benefit. Security teams that work alongside the business, rather than against it will build more effective, more secure IoT infrastructure. In 2018 we will, unfortunately, continue to see IoT devices being compromised by malicious parties. This is the harsh reality of having so many devices already out in the wild that are virtually impossible to update. We are also likely to see more DDoS attacks stemming from compromised devices and even the appearance of IoT focused ransomware. However, all is not lost. 
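Point 3 above mentions protecting "command integrity via signing". A minimal sketch of that idea using an HMAC over each command, where the device and the backend share a per-device secret (the key, command format and function names here are illustrative, and real deployments would also need replay protection and secure key provisioning):

```python
import hmac
import hashlib

# Hypothetical per-device secret, provisioned at manufacture.
DEVICE_KEY = b"per-device-secret-provisioned-at-manufacture"

def sign_command(command: bytes, key: bytes = DEVICE_KEY) -> bytes:
    """Backend side: attach an HMAC-SHA256 tag to every command."""
    return hmac.new(key, command, hashlib.sha256).digest()

def verify_command(command: bytes, tag: bytes, key: bytes = DEVICE_KEY) -> bool:
    """Device side: recompute the tag and compare in constant time,
    so an attacker cannot learn the tag via timing differences."""
    expected = hmac.new(key, command, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

cmd = b"valve=open"
tag = sign_command(cmd)
print(verify_command(cmd, tag))            # True: genuine command accepted
print(verify_command(b"valve=close", tag)) # False: tampered command rejected
```

Because each device holds its own key, this also gives the individual device identity point 1 calls for: a command signed for one device will not verify on another.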
Initiatives to improve IoT security standards are already gathering momentum, and it should not be long before manufacturers start to address the key areas above, as well as other aspects of IoT device security. Over time, the IoT will become more robust, particularly as existing vulnerable devices reach their end of life and are replaced by more secure equivalents.
Over the coming decade the electricity sector must face increasing demand, increasing regulation, and fundamental changes in the way that electricity grids work on a day-to-day basis. New fluctuating energy sources must be integrated into existing grids, and many of today's customers may take on a second role as suppliers, writes Jim Morrish, a founding partner at Transforma Insights. The deployment of new technology is critical to enabling the changes that must happen, and a wide variety of technology-based solutions are being deployed in the electricity industry right now. These range from smart metering applications through to remote asset monitoring and control solutions and solutions to better enable a remote workforce. According to research undertaken by Inmarsat, investment in IoT solutions overall is expected to be significant for electricity utilities, and with cost reduction and increased efficiency come related benefits in sustainability. The electricity industry worldwide is facing a period of unprecedented change. In recent decades distribution grids and other infrastructure have become increasingly smart, but today's environment is characterised by a number of disruptive changes. These range from increased demand overall and the rapid adoption of e-vehicles, through to an expected increased incidence of harsh weather events associated with global warming. The fundamental dynamics of electricity distribution are changing too. Historically, electricity supply has been dimensioned to match demand, but with a shift in the mix of energy generation towards renewable sources (particularly solar and wind), the associated unpredictability of electricity supply is ushering in a need to manage demand to match supply. This is a complete reversal of traditional grid dynamics. Associated with this development is the advent of microgrids that optimise the generation and consumption of electricity within a campus or similar context.
Such microgrids will only draw power from an external grid when they are experiencing a net deficit within the local network. These grids may also seek to supply power back to an external grid in circumstances where local generation exceeds local demand. This reverse-supply dynamic can extend down into consumer markets, with households with photovoltaics installed seeking to supply excess power back to the grid.
How technology can help
Looking at some specific applications in the electricity industry can help us to develop a picture of what the future might look like. As William Gibson noted, "the future is already here – it's just not very evenly distributed". The following is a far from complete list of some of the applications that will change electricity industry operations in coming years:
- Smart metering to provide accurate consumption information, and potentially to manage local storage and consumption – in the case of demand response – and enable reverse-supply.
- Monitoring distribution substations, transformers, and feeders to ensure that any faults can be quickly identified and resolved, or proactively managed.
- Remote monitoring and control of network assets, ranging from reclosers – which automatically interrupt power in the case of a supply problem – to voltage regulators.
- Drone inspection for transmission and distribution assets, potentially incorporating artificial intelligence to identify any abnormalities.
- Remote monitoring for a range of renewable energy generation assets, including wind turbines and photovoltaics – either at grid level, or at campus or even consumer level.
- Deploying connected inductive sensors throughout a grid to monitor the efficiency of grid operations, and potentially combining this information with smart metering information and using artificial intelligence to identify and combat energy fraud.
- Fleet management solutions for maintenance field forces, including job allocation systems, safety monitoring solutions, and solutions that enable remote workers to access second- and third-line support – possibly using augmented reality and shared video images.
This is a diverse range of applications, but any list can only ever be indicative of the innumerable niche and optimised applications that will be deployed in the electricity sector in coming years. A survey of electricity utilities undertaken by Inmarsat found that in five years' time the average respondent expected to have achieved a 30% saving in their organisation's costs by deploying IoT solutions. The same respondents are likely to enjoy additional related benefits, including in terms of overall efficiency and carbon reduction. Any IoT solutions that reduce the need for field engineers to visit remote sites are likely to be particularly impactful, and may have associated health and safety benefits. The same survey suggested that electricity utilities would spend 9.9% of their IT budgets on IoT projects in the next three years, a figure matched only by spend on cloud computing. Other big spend areas included related topics such as next-generation security, big data analytics, machine learning, and robotics.
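The microgrid draw/supply decision described above can be sketched in a few lines of code. This is a hypothetical illustration (the function and all the figures are invented for the example), not a model of any real grid:

```python
def grid_exchange_kw(local_generation_kw: float, local_demand_kw: float) -> float:
    """Power a microgrid exchanges with the external grid in one interval.

    Positive result: the microgrid has a net deficit and draws from the grid.
    Negative result: local generation exceeds demand, so the surplus can be
    supplied back to the grid (the reverse-supply dynamic).
    """
    return local_demand_kw - local_generation_kw

# One day of hourly readings for an imaginary campus microgrid (kW).
generation = [0, 0, 5, 20, 45, 60, 55, 30, 10, 0]
demand = [12, 10, 15, 25, 30, 35, 40, 38, 20, 15]

exchange = [grid_exchange_kw(g, d) for g, d in zip(generation, demand)]
imported = sum(e for e in exchange if e > 0)   # total drawn from the grid
exported = -sum(e for e in exchange if e < 0)  # total supplied back
```

With these sample figures the campus imports during the morning and evening and exports its midday solar surplus, which is exactly the demand-matching problem the article describes.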
AI has been trending for the last few years, but now the question has become, "How can DevOps take advantage of AI?" It is remarkable how much DevOps has gained from the application of AI. Artificial intelligence (AI) and machine learning (ML) can free DevOps teams from simplistic tasks. One part of DevOps is automating routine and repeatable processes, and AI and ML can conduct these tasks more efficiently, allowing teams and businesses to perform better. Algorithms can take over many of these activities and procedures, allowing DevOps professionals to do their jobs efficiently.
How can AI help DevOps services and solutions?
DevOps is a set of methods that promote improved collaboration and automation between development and operations teams. It is a set of processes that assists a team in developing, testing, and deploying new software more quickly and with fewer bugs. Artificial intelligence refers to a computer program's or machine's ability to reason and learn. It is sometimes referred to as a branch of research that focuses on making computers "smart". DevOps teams may use AI in various ways, including continuous planning, integration, testing, deployment, and constant monitoring. It can also improve the efficiency of all of these procedures. Artificial intelligence allows DevOps teams to focus on creativity and innovation by removing inefficiencies. It also aids teams in managing data speed, volume, and variability.
How is Artificial Intelligence Driving DevOps Evolution?
Businesses are under a lot of pressure to satisfy their consumers' ever-changing demands, and many are turning to DevOps to help them do so. However, many businesses find it challenging to implement AI and machine learning because of their intricacy. A creative mentality may be necessary to perceive any benefit from AI and DevOps.
Because of the complexity of distributed applications, tracking and organizing in a DevOps environment takes time and effort, which has traditionally made it difficult for teams to manage and handle customer complaints. Before the advent of AI and ML, DevOps teams could spend hundreds of hours and a significant amount of resources trying to find a single data point within an exabyte of data. To address these issues, the future of DevOps will be AI-driven, assisting in managing massive amounts of data and computation in day-to-day operations. In DevOps, AI can become the critical tool for assessing, computing, and making decisions.
What is the AI's Influence on DevOps?
AI can revolutionize how DevOps teams create, produce, deploy, and structure applications to increase performance and conduct DevOps business processes. There are three significant ways in which AI might affect DevOps:
Enhanced Data Accessibility: For DevOps teams, restricted access to data is a big point of stress, which AI may address by releasing data from siloed storage for big data projects. AI can gather data from various sources and prepare it for accurate and thorough analysis.
Effective Resource Use: AI provides much-needed expertise in automating routine and repeated processes, reducing the complexity of resource management to some degree.
Greater Implementation Efficacy: AI aids in the development of self-governing systems, allowing teams to move away from a rules-based human management structure and reducing the burden of complex evaluation on human agents.
What are the benefits of DevOps?
According to cloud DevOps consulting services, DevOps has the following key benefits:
- Quick delivery and response to consumer feedback
- A fast-moving process, with best practices ensuring reliability
- Rapid adoption and deployment, resulting in time and cost savings
- Scalability and flexibility
- Incident response and management, providing security and protection against risks and vulnerabilities
- Support for third-party collaboration
- The ability to use tools from the open-source community
How Can Enterprises Apply AI for Optimizing DevOps?
AI and machine learning can help organizations dramatically improve their DevOps environment. For example, AI may assist in managing complicated data pipelines and creating models that feed data into the app development process. AI and machine learning will overtake IoT in digital transformation by 2022. Implementing AI and ML for DevOps, on the other hand, poses a variety of obstacles for businesses of all kinds. A tailored DevOps stack is necessary to profit from AI and ML technologies. By streamlining DevOps operations and making IT operations more responsive, AI and ML may provide a meaningful ROI for a corporation. They can boost the team's efficiency and productivity while also helping to bridge the gap between humans and big data. A corporation that wishes to automate DevOps must choose between purchasing or developing a bespoke artificial intelligence layer. The first step, though, is to build a solid DevOps infrastructure. After laying the foundation, artificial intelligence can boost efficiency. By removing inefficiencies across the operational life cycle and enabling teams to manage the amount, pace, and variability of data, AI may help DevOps teams focus on creativity and innovation. It can lead to automatic enhancement and an increase in the efficiency of the DevOps team.
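As a concrete (if deliberately simple) illustration of AI-assisted monitoring, the sketch below flags outliers in a stream of response times using a z-score test, a stand-in for the far richer anomaly-detection models an AIOps pipeline would apply to telemetry. The latency figures are invented for the example:

```python
from statistics import mean, stdev

def flag_anomalies(samples, threshold=3.0):
    """Return the samples lying more than `threshold` standard
    deviations from the mean of the batch."""
    mu, sigma = mean(samples), stdev(samples)
    if sigma == 0:
        return []
    return [x for x in samples if abs(x - mu) / sigma > threshold]

# Hypothetical response times in milliseconds; the 900 ms spike is the
# kind of irregularity a team would otherwise dig through logs to find.
latencies = [101, 98, 103, 99, 102, 100, 97, 900, 101, 99, 100, 98]
anomalies = flag_anomalies(latencies)
```

Spotting one spike in twelve samples is trivial; the point is that the same statistical idea scales to the exabyte-sized data the article mentions, where no human could scan the raw stream.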
Researchers at Cyble discovered over 8,000 exposed VNC (virtual network computing) endpoints that allow access to networks without authentication. VNC is a graphical desktop-sharing system that allows control of another machine remotely. It mirrors graphical screen changes as well as keyboard and mouse inputs from one machine to another. Many of the exposed VNCs found belonged to industrial control systems that should never be exposed. "The exposed VNCs found during the time of analysis belong to various organizations that come under Critical Infrastructures such as water treatment plants, manufacturing plants, research facilities, etc. During the course of the investigation, researchers were able to narrow down multiple Human Machine Interface (HMI) systems, Supervisory Control And Data Acquisition Systems (SCADA), Workstations, etc., connected via VNC and exposed over the internet.… "A successful cyberattack by any ransomware, data extortion, Advanced Persistent Threat (APT) groups, or other sophisticated cybercriminals is usually preceded by an initial compromise into the victim's enterprise network. An organization leaving exposed VNCs over the internet broadens the scope for attackers and drastically increases the likelihood of cyber incidents. "Our investigation found that selling, buying, and distributing exposed assets connected via VNCs are frequently on cybercrime forums and markets."
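One way to audit your own hosts for this kind of exposure is to connect to the default VNC port and look for the RFB protocol banner (e.g. "RFB 003.008" followed by a newline) that a VNC server sends at the start of every connection. The sketch below is a hypothetical illustration and should only ever be pointed at systems you are authorized to test:

```python
import socket

def looks_like_vnc(banner: bytes) -> bool:
    # A VNC server greets each client with an RFB protocol version
    # string, e.g. the 12 bytes b"RFB 003.008" plus a trailing newline.
    return banner.startswith(b"RFB ")

def vnc_exposed(host: str, port: int = 5900, timeout: float = 3.0) -> bool:
    """Return True if `host` answers on the default VNC port with an
    RFB banner, i.e. the endpoint is reachable without authentication
    even starting."""
    try:
        with socket.create_connection((host, port), timeout=timeout) as conn:
            return looks_like_vnc(conn.recv(12))
    except OSError:
        return False
```

A positive result only means the handshake is reachable; whether authentication is then required depends on the server's configuration, which is exactly what the Cyble researchers found missing on thousands of endpoints.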
Types of Network Security Protections
Network security protection includes a variety of tools, configurations, and policies implemented on your network to prevent any intrusion into your security. The fundamentals of network security include detection, protection, and response. Resources are set up to help you analyze traffic on your network and detect any potential threats before they infect your system. Configurations are set in place to protect your network from intruders and provide you with the tools to properly respond to and resolve any problems that are identified.
Firewalls filter the traffic on your network. They work to prevent and block unauthorized internet traffic and manage authorized access within your network.
Network segmentation divides a network into multiple sections, and each section then acts as its own individual network. The administrator is able to control access to each smaller network while improving performance, localizing issues, and boosting security.
Access control gives you the ability to grant or deny access to individual users based on their responsibilities within your network. This defines a person's or group's access to a specific application and system on the network and prevents any unauthorized use.
Remote Access VPN
A remote access virtual private network (VPN) provides integrity and privacy of information by utilizing endpoint compliance scanning, multi-factor authentication (MFA), and transmitted data encryption. The remote access VPN is typically provided for telecommuters, extranet consumers, or mobile users.
Zero-Trust Network Access (ZTNA)
Zero-trust network access grants specific access to an individual user based on the exact role they play within the network. Each individual is only granted access to the processes or applications they need to complete their job successfully.
Email security is set up to prevent users from unknowingly providing sensitive information or allowing access to the network via a malware-infected email. This security feature will warn about or block emails containing potentially dangerous threats.
Data Loss Prevention (DLP)
DLP is a network security technology that helps prevent sensitive information from accidentally being leaked outside of the network by users. It works to prevent the misuse or compromise of data to protect the network from exposure to outside entities.
Benefits of Network Security
Network security is essential for safeguarding client data and information. It maintains the security of shared data, guarantees dependable network performance, and protects against online threats. An effective network security solution keeps overhead costs to a minimum and safeguards businesses from losses brought on by a data breach or other security incident. Ensuring legitimate access to systems, applications, and data through network security keeps business operations smooth and ensures that services and goods are provided to clients in a timely manner.
Network Security Protection Trends
Several network security developments have been moving the industry forward. The most current advancements in network security management include:
- Zero-trust security: This is a security model that encourages businesses to take a "never trust, always verify" approach, meaning they should not trust any user, entity, device, or application on the network by default.
- A focus on education: Training staff on cybersecurity best practices, including how to spot vulnerabilities in network security, is one of the best ways to prevent data breaches.
- Focusing on incident detection and response (IDR): IDR is a proactive security strategy in which businesses are always on the alert for irregular behavior.
It is critical to have an incident mitigation plan in place, so that once an event is discovered, you can execute the proper response promptly and efficiently.
Top 5 Network Security Tools & Techniques
The top five network security tools include:
- Wireshark: Wireshark provides an overview of your live network. It is also a very popular packet sniffer, meaning it examines the contents of data packets to identify threats.
- Metasploit: This Rapid7 network security program allows users to scan over 1,500 processes. It also helps businesses perform various safety evaluations and enhance overall network security.
- Nessus: Nessus locates and helps fix vulnerabilities found in computers, operating systems, and applications, including those that are missing patches.
- Aircrack: Aircrack, a suite of tools for auditing WEP and WPA encryption, offers powerful solutions for assessing wireless security.
- Snort: Snort is an open-source IDS that works with all hardware and operating systems. It examines protocols, searches data, and identifies attacks.
Network Security Policy
A network security policy outlines an organization's network security environment. It also specifies how the security policies are applied throughout the network. In addition, a network security policy establishes rules for network access.
How Network Security Policy Management Improves Business Security
The network's integrity and safety are governed by security policies. They include guidelines for connecting to the internet, using the network, adding or changing devices or services, and more. These rules only work when they are put into practice. By ensuring that policies are streamlined, uniform, and enforced, network security policy management helps enterprises maintain compliance and security.
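The access-control and zero-trust ideas above boil down to a deny-by-default rule: a request succeeds only if the user's role explicitly grants the requested application. A minimal sketch, with roles and applications invented for the example:

```python
# Hypothetical mapping of roles to the applications they may reach.
PERMISSIONS = {
    "developer": {"git", "ci-pipeline"},
    "dba": {"database", "backup-console"},
    "auditor": {"audit-logs"},
}

def is_allowed(role: str, application: str) -> bool:
    """Deny by default ("never trust, always verify"): access is granted
    only when the role explicitly lists the requested application."""
    return application in PERMISSIONS.get(role, set())
```

Note the design choice: an unknown role falls through to an empty permission set rather than raising an error, so any unrecognized identity is simply denied.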
Nowadays, ten years after the introduction of Bitcoin, blockchain technologies have been successfully deployed in various financial applications, including several cryptocurrencies. Moreover, the blockchain is considered an enabling technology in many different sectors such as the energy, healthcare and government sectors. This is the reason why there is a growing interest in blockchain programs, such as smart contracts. Smart contracts represent versatile and popular applications that are executed over blockchain infrastructures. Some of the prominent blockchain infrastructures (such as Ethereum) include smart contract frameworks based on Turing-complete languages. As such, smart contracts can be used to implement sophisticated functionalities over blockchain infrastructures, including functionalities that enforce complex terms in financial and insurance contexts and also in other contexts where valuable assets are traded. Therefore, understanding smart contracts is a key prerequisite for developing non-trivial applications over state-of-the-art blockchains. The notion of a smart contract is not new: it was introduced more than twenty years ago, as a key building block of digital markets where valuable assets are traded. At that time, smart contracts were characterized as computerized transaction protocols that executed the terms of a contract by means of satisfying common contractual conditions such as payment terms. Similarly, in the context of a blockchain, a smart contract is considered a secure and continually running computer program which represents an agreement. In practice, a smart contract can represent a Service Level Agreement (SLA) between two or more different parties. Smart contract based SLAs can be implemented for various industries, yet the majority of blockchain-based smart contracts focus on financial services use cases.
In order to ease the process of encoding legal agreements and SLAs in smart contracts, the notion of smart contract templates has been introduced. Smart contract templates are standards-based formats that provide a framework for representing legal agreements. They are usually developed based on domain-specific languages (DSLs), which provide constructs for legally-enforceable programs that correspond to legal documents. To this end, DSLs provide a host of different functionalities, including legal prose and cryptographic functions. DSLs are in most cases dedicated to describing applications for a particular area or sector. Hence, they tend to provide quite limited expressiveness that is tailored to the domain at hand. As such, they are optimized for the domain that they target, yet are in most cases inappropriate for expressing and building general-purpose programs. For instance, there are DSLs that support the definition of insurance contracts (i.e. for InsuranceTech applications), such as the Actulus Modeling Language (AML), which is focused on the description of life insurance and pension products. Another example is the DIESEL language, which can represent energy derivatives as a means of facilitating Monte Carlo pricing and analytics. Likewise, Risla is a DSL that is designed to describe interest rate products. Smart contracts need not be executed over a blockchain: they can be used to represent and enforce agreements over data residing in conventional databases as well. Nevertheless, integrating smart contracts with a blockchain allows them to benefit from the blockchain's distributed consensus mechanisms. Blockchains provide safety and reliability for the enforcement of smart contracts. Ethereum is the most prominent example of a blockchain that provides native support for the development and deployment of smart contracts.
Smart contracts on the Ethereum blockchain are part of larger applications called Decentralized Autonomous Organizations (DAOs). They enable richer application logic than the conventional double-spending and transaction-locking functionalities of the Bitcoin blockchain. In practice, Ethereum smart contracts are executed on the computers of the blockchain network, which are expected to execute the contracts' code and to arrive at the same result as part of the blockchain consensus mechanism. Hence, distributed consensus ensures that the smart contract logic has been executed as it should be. Miners are motivated to run the smart contract in exchange for "gas" fees (paid in Ether, Ethereum's currency), which represent the reward given for code execution. Hence, smart contract function calls declare the amount of gas that is associated with their execution. The gas price varies depending on the complexity of the smart contract's application logic. However, gas is not only an incentive mechanism for miners. It is also a motivation for developers to write efficient code that requires less gas for its execution. Contrary to conventional programs, smart contracts are not able to access services and APIs outside the nodes they are running on, since doing so could make ultimate consensus impossible. To alleviate this limitation, the Ethereum ecosystem provides a solution called an "oracle", which allows external services and APIs to push data to the blockchain. Based on an oracle, all blockchain nodes are able to access exactly the same data within the blockchain network. Nevertheless, there are some criticisms of oracles, mainly related to the fact that they break the completely decentralized nature of a blockchain, as external data is centrally pushed onto the chain.
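To make the idea of "code that enforces an agreement" concrete, here is a toy escrow contract in plain Python. Real Ethereum contracts are written in languages such as Solidity and executed by every node as part of consensus; this sketch, with all names and rules invented for illustration, only mimics the enforcement logic locally:

```python
class EscrowContract:
    """Funds deposited by a buyer are released to the seller only once
    the buyer confirms delivery: the terms are enforced by code rather
    than by a trusted intermediary."""

    def __init__(self, buyer: str, seller: str, amount: int):
        self.buyer, self.seller, self.amount = buyer, seller, amount
        self.funded = False
        self.balances = {buyer: amount, seller: 0}

    def deposit(self, sender: str) -> None:
        # Only the buyer may fund the escrow, and only once.
        if sender != self.buyer or self.funded:
            raise PermissionError("only the buyer may fund the escrow, once")
        self.balances[self.buyer] -= self.amount
        self.funded = True

    def confirm_delivery(self, sender: str) -> None:
        # Payment is released only after funding and buyer confirmation.
        if sender != self.buyer or not self.funded:
            raise PermissionError("buyer must confirm after funding")
        self.balances[self.seller] += self.amount

contract = EscrowContract("alice", "bob", amount=10)
contract.deposit("alice")
contract.confirm_delivery("alice")
```

On a blockchain, the equivalent of those `raise` statements would abort the transaction on every node, which is how distributed consensus guarantees the agreement's terms.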
It is envisaged that Ethereum smart contracts could serve as a basis for a wide range of applications that could disintermediate all the trusted third parties that nowadays run popular marketplaces in areas such as e-commerce, crowd-funding and transport services. Nevertheless, relevant implementations are still in their infancy. Even though the notion of smart contracts is not new, its integration with the blockchain has brought it into the foreground and made it an enabler for a whole new range of applications. If you are just starting with blockchain infrastructures and applications, it is likely that you will get involved in understanding and developing smart contracts. We hope that this introduction helps you understand the basic concepts of smart contracts, templates, and DSLs, as well as their integration with blockchain-based consensus mechanisms.
Let’s face it: Some people are born with natural musical abilities, and some aren’t. But you probably shouldn’t tell your tone-deaf spouse that the cats yowling on the back fence sound better than their version of the Star-Spangled Banner. Suggest a tone-deafness test from The Music Lab instead. This online quiz accurately tests musical skills in about 10 minutes. It’s part of an experiment being conducted by researchers at Harvard studying how the mind works. Specifically, they’re investigating how people make sense of what they hear. So far, more than 780,000 people have played. Here’s how it works: Grab a pair of headphones, then answer questions about the sounds you hear. First you’ll decide which sounds are loudest to calibrate your speakers, then the tough part begins — deciding whether you’re hearing a pitch that’s higher or lower than the sound before it. It starts relatively easy but gets tricky. Don’t worry, we missed a few too. Those 1/32nd steps are tough to discern. You can then answer a few anonymous questions to help out Harvard’s research team. Answers will be stored securely, and you’re really not giving away much in the way of personal info. Online quizzes always come with a little risk, but this is a lot safer than those random sites floating around the web or even Facebook. Once you finish that game, there are others to play, too. A world music quiz, synthesizer game, musical style test and a handful of others. Have fun!
LDAP stands for Lightweight Directory Access Protocol, and it allows applications to make rapid queries of user information. In this article we'll give you the lowdown on what the protocol is for, how it works, and basically everything you need to know to become an LDAP genius!
Explain LDAP to me – what exactly is it?
LDAP is pronounced "el dap", and it's a vendor-neutral protocol that's used to manage users, attributes, and authentication. It is a lightweight alternative to the access protocol of the X.500 directory standard. It was created back in 1993, although back in the day it was known as LDBP (much harder to say – we agree), with the B standing for browsing. LDAP is one of the core protocols used for managing users and their access rights. Today, it's one of many protocols used for directory services, to access data such as email addresses, credentials, and other static data. After deciding on a method of directory storage, LDAP can also add, delete or change records, and even search the records so that users can be authenticated and authorized to access specific resources. LDAP has three core functions. First, it can update directory information with adds, deletes or modifications. Secondly, it can query – which means searching and also comparing information within the directory. Finally, it can authenticate, either authorizing an action or abandoning the function so that the server cannot complete the requested task. A typical LDAP query has four parts: connection, request, response and completion. Employees will probably connect using LDAP regularly, likely every single day. This could be anything from verifying a password to connecting to a printer or another device.
The basics of LDAP
If you're just starting out with LDAP, you'll need a boost to your vocabulary. There are a whole slew of new terms to understand in the world of LDAP protocols! Here are some of the first ones you might come across: (You can access a much more comprehensive list right here.)
# Information Tree: An information tree, or directory information tree, is how LDAP structures its data, and it is used to represent all the directory service entries. You might see this written as DIT.
# Distinguished Name: Often abbreviated to DN, this is the unique identifier for every LDAP entry. It is also how you differentiate between entries on the DIT.
# Relative Distinguished Name: This term describes how DNs are related to one another in terms of their location on the DIT. See? You're getting the lingo already!
# Modifications: Whenever LDAP users make a request to change the data, this is a modification. For example, they might add, replace or delete data.
# Object identifier: Also known as an OID, this is a string of numbers, separated by periods, that acts as a unique identifier for an element in the LDAP protocol. One use is for request and response controls.
# Schema: This is the name for the coding of your LDAP, and it specifies all the information that a directory server might include. Think about attributes, rules, object classes, and more.
# LDAP URIs: These are mostly used for referrals, or to specify the properties of establishing connections. A URI (uniform resource identifier) brings together a number of disparate pieces of information.
What's the difference between LDAP and Active Directory?
A lot of people use the terms LDAP and AD interchangeably, but that's a recipe for disaster! In fact, while Microsoft might have built a lot of Active Directory's basics on LDAP, and AD uses LDAP, they are not the same. AD usually uses Kerberos for authentication, a totally different protocol altogether. AD also needs domain controllers, and it is not vendor-neutral: it works best with Windows devices and operating systems, as it's a Microsoft tool. While LDAP and AD do work well together, AD is used for organizing Windows IT assets, while LDAP can be used with other programs, for example Linux-based systems.
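The DN/RDN relationship is easiest to see in code. Here is a deliberately simplified parser (real DNs can contain escaped commas and multi-valued RDNs, which this sketch ignores):

```python
def parse_dn(dn: str):
    """Split a distinguished name into its relative distinguished names.

    Each RDN is an attribute=value pair; reading left to right moves
    from the entry itself up towards the root of the directory
    information tree.
    """
    return [tuple(rdn.strip().split("=", 1)) for rdn in dn.split(",")]

rdns = parse_dn("uid=jdoe,ou=People,dc=example,dc=com")
# [('uid', 'jdoe'), ('ou', 'People'), ('dc', 'example'), ('dc', 'com')]
```

So the entry `uid=jdoe` sits under the organizational unit `People`, which sits under the `example.com` domain components: exactly the tree structure the DIT term describes.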
LDAP on the cloud
LDAP was built for on-premises systems, but today the majority of enterprise and business workloads are on the cloud. Enter the idea of directory-as-a-service, a newer technology where cloud-friendly LDAP is built for the modern era. With this model, the servers for cloud LDAP already exist on the cloud, so organizations don't need to set up and manage the core directory itself, or integrate their systems and processes. Instead, they can just point their LDAP-connected endpoints at it and they are good to go.
What are the pros of LDAP?
If you're wondering whether LDAP is the right protocol for you, here are some great reasons to say yes! First of all, it's open source. That means it doesn't cost you anything, and you can get a lot of support from the IT community when setting it up and managing it in your own corporate or client environment. However, unlike a lot of open-source tools, it's also standardized – LDAPv3 is specified in RFC 2251. That means the industry will continue to support this protocol. You can use LDAP for a lot of different use cases, and it's compatible with a broad number of operating systems and assets. That makes it a super flexible choice. Lastly, it's very secure, and communications can be encrypted over SSL or TLS.
Now let's talk about the cons – are there any?
Like any IT decision, there are always going to be some downsides, or negative considerations. First of all, there are definitely newer protocols which might be a better choice, especially if you're working on the cloud. Secondly, this isn't the kind of protocol that you can get started with as a newbie. LDAP setup and maintenance generally needs someone with a bit of networking expertise. The larger your organization, the more problems you're going to have setting up your directory to be an accurate representation of your business environment.
A fibre optic cable is a network cable that contains strands of glass fibres inside an insulated casing, designed for high-performance data networking and telecommunications. Fibre optic cables carry communication signals as pulses of light, offering far higher speeds than copper cabling, which carries electrical signals. They are becoming the most significant communication media in the data centre. So how much do you know about them? This post serves as a guide for beginners.

The three basic elements of a fibre optic cable are the core, the cladding and the coating. The core is the light transmission area of the fibre, either glass or plastic; the larger the core, the more light is transmitted into the fibre. The function of the cladding is to provide a lower refractive index at the core interface, causing reflection within the core so that light waves are guided along the fibre. Coatings are usually multiple layers of plastic applied to preserve fibre strength, absorb shock and provide extra protection.

Generally, there are two basic types of fibre optic cable: single mode fibre (SMF) and multimode fibre (MMF). Furthermore, multimode fibre cores may be either step index or graded index.

Single mode optical fibre is a single strand of glass fibre with a core diameter of 8.3 to 10 microns that has one mode of transmission. The index of refraction between the core and the cladding changes less than it does for multimode fibres. Light thus travels parallel to the axis, creating little pulse dispersion. It is often used for long-distance signal transmission.

Step index multimode fibre has a large core, up to 100 microns in diameter. As a result, some of the light rays that make up a digital pulse may travel a direct route, whereas others zigzag as they bounce off the cladding. These alternative pathways cause the different groupings of light rays to arrive separately at the receiving point. Consequently, this type of fibre is best suited to transmission over short distances.
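The core/cladding refractive-index step described above is often summarized by the fibre's numerical aperture, NA = √(n₁² − n₂²), which sets the maximum angle at which light launched into the core is still guided. A small Python sketch (the index values are illustrative, not taken from this article):

```python
import math

def numerical_aperture(n_core: float, n_cladding: float) -> float:
    """NA = sqrt(n1^2 - n2^2) for a step-index fibre."""
    return math.sqrt(n_core ** 2 - n_cladding ** 2)

# Illustrative refractive indices for a silica step-index fibre.
n1, n2 = 1.48, 1.46

na = numerical_aperture(n1, n2)
# NA = sin(theta_max): maximum acceptance half-angle for light launched from air.
theta_max = math.degrees(math.asin(na))
print(f"NA = {na:.3f}, acceptance half-angle = {theta_max:.1f} degrees")
```

The small index difference between core and cladding is why the acceptance cone is narrow, and why careful alignment matters when coupling light into a fibre.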
Graded index fibres are commercially available with core diameters of 50, 62.5 and 100 microns. A graded index fibre contains a core in which the refractive index diminishes gradually from the centre axis out toward the cladding. The higher refractive index at the centre makes the light rays moving down the axis advance more slowly than those near the cladding.

Single mode fibres usually have a 9 micron core and a 125 micron cladding (9/125µm). Multimode fibres originally came in several sizes, optimised for various networks and sources, but the data industry standardised on 62.5 micron core fibre in the mid-80s (62.5/125 fibre has a 62.5 micron core and a 125 micron cladding; it is now called OM1). More recently, as gigabit and 10 gigabit networks have become widely used, an old fibre design has been upgraded: 50/125 fibre was used from the late 70s with lasers for telecom applications. 50/125 fibre (OM2) offers higher bandwidth with the laser sources used in gigabit LANs and can allow gigabit links to run over longer distances. Laser-optimised 50/125 fibre (OM3 or OM4) is today considered by most to be the best choice for multimode applications.

The two basic cable designs are loose-tube cable, used in the majority of outside-plant installations, and tight-buffered cable, primarily used inside buildings. The modular design of loose-tube cables typically holds up to 12 fibres per buffer tube, with a maximum per-cable fibre count of more than 200 fibres. Loose-tube cables can be all dielectric or optionally armoured. The modular buffer-tube design permits easy drop-off of groups of fibres at intermediate points, without interfering with other protected buffer tubes being routed to other locations. Tight-buffered cables can be divided into single-fibre tight-buffered cables and multi-fibre tight-buffered cables.
Single-fibre tight-buffered cables are used as pigtails, patch cords and jumpers to terminate loose-tube cables directly into opto-electronic transmitters, receivers and other active and passive components. Multi-fibre tight-buffered cables are also available and are used primarily for alternative routing, and for handling flexibility and ease within buildings.

While there are many different types of fibre connector, they share similar design characteristics. Simplex vs. duplex: simplex means one connector per end, while duplex means two connectors per end. Beyond that, various connector styles exist, each with its own characteristics.

Ultimately, what we've discussed is only the tip of the iceberg. If you are eager to know more about fibre optic cable – basics, applications or purchasing – please visit www.fs.com for more information.
What is World Password Day?

World Password Day is a reminder for users to update weak or old passwords to ensure personal and corporate information security. As cyber threats continue to evolve and bad actors develop new attack techniques, a good cybersecurity posture requires more than just a strong password to avoid compromise. As many employees continue to work remotely from anywhere, or in a hybrid model, it is critical that they have strong passwords for all platforms, as they no longer have the same level of on-site IT and security support to help.

World Password Day: How Do Cybercriminals Commonly Compromise Passwords?

One of the most important parts of avoiding compromise is understanding how cybercriminals may attempt to gain access to your critical data. Attack techniques continue to evolve and become more sophisticated, giving cybercriminals a vast toolkit to use to exploit users. Here are some techniques to look out for:

- Social engineering attacks: Attacks such as phishing through emails and texts, where users are tricked into providing their credentials, clicking on malicious links or attachments, or visiting malicious websites.
- Dictionary attacks: Attackers use a list of common words, called "the dictionary," to try to gain access to passwords, in anticipation that people have used common words or short passwords. The technique also includes adding numbers before or after the common words, to account for people thinking that doing so makes a password harder to guess.
- Brute force attacks: Adversaries systematically generate candidate passwords and character combinations, checking each guess against the account or against an available cryptographic hash of the password.
- Password spraying: A form of brute force attack that targets multiple accounts. In a traditional brute force attack, adversaries try many guesses against a single account, which often leads to account lockout.
With password spraying, the adversary tries only a few of the most common passwords against multiple user accounts, hoping to identify the one person who is using a default or easy-to-guess password, and thus avoiding the account lockout scenario.

- Key logging attacks: By installing key logging software on the victim's machine, usually through some form of email phishing attack, the adversary captures the victim's keystrokes, including the usernames and passwords for their various accounts.
- Traffic interception: Criminals use software like packet sniffers to monitor and capture network traffic that contains password information. If the traffic is unencrypted or uses weak encryption algorithms, capturing the passwords is even easier.
- Man-in-the-middle: The adversary inserts themselves between the user and the intended website or application, usually by impersonating that website or application, and then captures the username and password that the user enters into the fake site. Often, email phishing attacks lead unsuspecting victims to these fake sites.

World Password Day: How Can Users Prevent Passwords From Being Compromised?

Users can adopt a number of tactics to ensure bad actors cannot compromise their personal information through the techniques above. These include strong passwords, multi-factor authentication, and single sign-on capabilities. In addition, a strong cybersecurity education is critical to protect yourself, your family, and your employer from compromise.

Creating a Strong Password

It is important to develop passwords that are impossible to forget and difficult to guess, even for a person who knows intimate details of your life, like the name of the street you grew up on or the name of your first dog.
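One reliable way to get a password that is hard to guess is to let a machine pick it. Here is a minimal sketch using Python's `secrets` module, which is designed for cryptographic use; the length and character set are illustrative choices, not a recommendation from this article.

```python
import secrets
import string

def generate_password(length: int = 16) -> str:
    """Generate a random password from letters, digits and punctuation,
    using the cryptographically secure `secrets` module."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

pwd = generate_password()
print(pwd)  # a 16-character random string -- different on every run
```

Because such passwords are unmemorable by design, they pair naturally with a password manager.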
Though it may seem compelling to add numbers and special characters to common words as a way to build a strong password, cybercriminals can leverage a number of attack techniques to crack this. Avoid using the following in any password:

- Phone numbers
- Company information
- Names, including movies and sports teams
- Simple obfuscation of a common word ("P@$$w0rd")

Instead, World Password Day reminds us of the best practices for securing your information:

- Use unlikely or seemingly random combinations of uppercase and lowercase letters, numbers and symbols, and make sure your passwords are at least 10 characters long.
- Do not share passwords with anyone.
- Do not use the same password for multiple accounts; reuse increases the amount of information a cybercriminal can access if they manage to compromise one of your passwords.
- Change your password every three months to decrease the likelihood that your account will be compromised.
- Use a password manager to generate unique, long, complex, easily changed passwords for all online accounts, and to store them securely in an encrypted local or cloud-based vault. You will only need to memorize the password to the password manager itself.

Additional Authentication and Protection Measures for World Password Day

A single line of defense is no longer effective at keeping advanced cyberattacks at bay. To truly ensure a strong security posture, multiple tactics are required. Consider the following:

- Multi-factor authentication (MFA): Multi-factor authentication confirms the identity of users by adding an extra step to the authentication process, whether through physical or mobile application-based tokens. This ensures that even if a password is compromised, bad actors cannot access the information.
- Single sign-on (SSO): Single sign-on allows users to leverage a single, secure username and password across several applications within an organization.
- Cybersecurity training and education: As cyber threats evolve and bad actors develop new techniques to target individuals, users must remain cyber aware and stay up to date on the state of the threat landscape. Free training courses like Fortinet's Network Security Expert (NSE) 1 and NSE 2 can help educate individuals of any age about how to stay safe. With Fortinet's Training Advancement Agenda (TAA) and NSE Training Institute programs, Fortinet continues to work toward closing the skills gap through training, certifications, career opportunities and key partnerships.

As individuals spend more time online for work, e-learning, and communicating with family and friends, and cybercriminals ramp up attacks targeting these users, it is important to perform a security posture check across all accounts, updating weak and outdated passwords as needed. Learn more about Fortinet's free cybersecurity training initiative and the Fortinet Training Institute, including the NSE Certification program, the Academic Partner program, and the Education Outreach program, which includes a focus on Veterans.
What are the main benefits and risks of cyber security? Why is cyber security important for every business?

Cyber security comprises the measures taken by a business or organisation to protect all of its data from damage, theft and being held to ransom. This includes sensitive and confidential data, personal information of employees, information systems, PHI (Protected Health Information), PII (Personally Identifiable Information) and even intellectual property.

As demand for online services increases, businesses and organisations will need to improve data protection. Passwords, firewalls, antivirus software and other basic cyber security solutions are not always enough to keep cyber-criminals out. The problem is that a cyber-attack is not necessarily a one-off event: of organisations that reported a cyber attack in 2021, 31% of businesses and 26% of charities reported that they had been attacked once a week or more. These figures demonstrate why every business owner should be considering a more comprehensive, 'belt and braces' cyber security prevention plan.

Analysts predict an ever-increasing level of cyber threats in 2022 – in just the first quarter of this year, there has been a 17% jump in suspicious activity. Indeed, Microsoft reported in November 2021 that it had stopped the largest DDoS cyber-attack in history. It is time to take a serious look at ways to adopt cyber security measures and prevent the world's cyber-criminals from attacking your company.

What are the benefits of cyber security?

No business owner wants the problem of a network attack or stolen data. Whilst implementing a cyber security strategy may not stop the most determined cyber-criminals, it will significantly reduce the chances of being hacked. Investing in cyber security to protect your business's digital assets – financial data, emails, confidential information and passwords – will give you peace of mind that your data is protected.
The main benefit of cyber security is increased protection for your digital assets. By ensuring the entire IT infrastructure (including software, networks, hardware and mobile devices) is protected, you can maintain a strong security posture and reduce the possibility of hackers seriously breaching your systems. Cyber attacks and data breaches also incur a financial cost: any attack on IT infrastructure that compromises operations can result in downtime and a loss of business sales, so protecting your business will ultimately save money.

Lowers the risk of legal issues

Your business is responsible for any customer or partner data stored in your network. A cyber attack may leave you liable if the compromised data includes that of your customers. Implementing cyber security strategies and procedures helps prevent this from happening, providing increased protection for data entering and leaving your network.

Boosts customer confidence

Customers, suppliers and other stakeholders will have more confidence and trust in your business, enhancing your reputation.

Increases employee awareness

The greater the awareness employees have of the potential cyber threats to the business, the more likely it is that cyber-criminals will be prevented from gaining unauthorised access to the business's network and IT systems. Implementing a cyber security strategy isn't just about investing in the right software solutions or IT systems; it's also about establishing a culture of cyber security awareness within the business. In fact, 83% of UK businesses reported that 'phishing' was their most common cyber threat. Making sure every member of staff, remote or office-based, receives the right cyber security training and knows what to look for will reduce the threat of this type of cyber-attack.

What are the major problems facing cyber security?
One of the biggest problems in cyber security is the ever-changing threat landscape and the expanding activity of cyber-criminals. Work-from-home mandates over the past couple of years have opened up new opportunities for hackers and created a bigger cyber threat for businesses of all sizes.

Lack of training

These issues have catapulted insider threats up the scale of 'biggest cyber risks' for businesses. Whilst IT teams and company boards have a much greater understanding of their threat landscape and what they can do to minimise cyber risk, at the individual level there is a serious lack of cyber awareness. Establishing ongoing training, integrating antivirus software across all systems and making the management of passwords and identity access a central function will go a long way toward reversing the trend.

Another form of cyber threat that has emerged from the pandemic era is social engineering. This is where cyber-criminals trick or manipulate people into unknowingly downloading a virus or malware, giving the cyber-criminal sensitive information, or transferring money. These techniques include baiting, scareware, pretexting and spear phishing.

Our increasing dependence on mobile and IoT (Internet of Things) devices has led to a rise in DoS (Denial of Service) and DDoS (Distributed Denial of Service) attacks, such as the one experienced by Microsoft. While internal IT systems are regularly updated, IoT and mobile devices with access to internal systems and databases may not have strict update procedures. This opens up opportunities for hackers to gain unauthorised access to networks, leading to data breaches.

The latest cyber security trends

Covid-19, and the changes in work environments the pandemic forced, have highlighted the importance of investing in robust cyber security measures.
The current trends businesses need to be aware of are:

- Supply chain attacks: Attacks along the supply chain and third-party breaches are becoming more common. Hackers are able to exploit vulnerabilities in partner and supplier company networks, from which they can launch further attacks.
- The move to working from home: Remote business environments mean more employees connect to company networks from outside the office. The speed at which this change took place meant many businesses did not also invest in security for these connections, leading to a rise in cyber attacks.
- The rise in the use of mobile and smart devices: Inadequate security on these devices gives hackers easy access to the enormous amounts of data we store on phones, tablets and laptops.
- Passwords: Organisations are beginning to switch to biometric authentication, or to an OTP (one-time password) or hardware token, together with multi-factor authentication.

The benefits and risks of cyber security

In an increasingly connected world, robust cyber security measures are essential; even large multinational businesses are not immune from cyber threats. However, a lack of awareness throughout organisations still makes some networks easy targets for attackers. These risks to cyber security must be addressed so that businesses and employees are protected online.
With an unprecedented number of home workers, hospital staff staying in temporary accommodation near their sites, and people turning to the internet to stay entertained and connected, a reliable connection has become the fundamental backbone of almost every aspect of life.

The demand for seamless connectivity

We are relying on our digital and online presence now more than ever. This applies to loved ones connecting during this challenging time just as much as to employees working from home to serve their customers. Internet traffic in the UK has, unsurprisingly, surged by 78.6 percent since the end of February. Without a sturdy infrastructure in place, businesses risk falling behind, and people will be unable to connect, stay entertained and stay safe by working from home.

Broadband is critical to the way we live, and fiber-grade infrastructure is the natural next step to delivering reliable and streamlined connectivity in all corners of the world. Much like water, gas and electricity, broadband needs robust plumbing in order to operate effectively and meet the demand of an internet-dependent culture. Fiber should be at the heart of the nation; implementing this infrastructure provides the right foundations to future-proof businesses and support every individual with the same level of connectivity.

Should broadband be a fundamental utility?

Research has shown that the majority of households now regard ultra-fast broadband as a basic utility, even though nearly a quarter of both UK businesses and households still experience low broadband speeds. As it stands, 77 percent consider broadband a utility equivalent to electricity and gas. There are numerous initiatives being implemented to help support fast and efficient broadband for both local areas and businesses. One example is the recent Gigabit Broadband Voucher Scheme, run by the UK Government's Department for Digital, Culture, Media and Sport.
This multi-million-pound scheme, recently ended, offered grants to individual SMEs and communities to support them with the costs of installing Fiber to the Premises (FTTP). The current crisis has brought into sharp focus the extent to which we rely on broadband for almost everything we do. In recent times, broadband technology has become the backbone of everyday life as well as of the economy. Therefore, its availability should be treated as equal to that of other core utilities such as water, gas and electricity. A digital infrastructure not only allows us to connect to the internet but is now the lifeblood of numerous essential services. Now, more than ever, we all need government support to make this happen more quickly.

The benefits of reliable broadband

As many companies make rapid shifts to enable remote working amongst employees, fast, reliable broadband seems like a basic necessity to allow us to do our jobs. However, we must not forget how reliant we are on connectivity for other aspects of our lives as well. We now rely on the internet to access new services that we have never been able to experience before, as well as to provide services that we previously obtained offline. Examples include seamless streaming of TV shows online, and using video calls to feel closer to family and friends during lockdown.

There are also indirect benefits, such as an increase in productivity, GDP growth and greater labor force participation. Reliable broadband has the ability to create jobs, while also enabling improvements in education and health care. With access to full fiber connectivity, schools are able to hold virtual classes and health care practices are able to hold virtual appointments with patients online; all of this is happening in the current climate. Additionally, as the number of Internet of Things (IoT) devices increases, broadband speed and capacity will play a significant part in supporting this development.
Reliable, high-speed broadband represents an opportunity for buildings and cities to become smart and truly make a difference in the future. It is essential that fiber cabling is taken into consideration when designing future buildings and cities, as it enables developers and tenants to upgrade their fiber for generations to come, ensuring they are able to develop as technology advances.

Navigating the new normal

With life, buildings and services becoming smarter in almost every corner of the world, the need for seamless broadband connectivity grows ever stronger. During this extraordinary time, it has become more apparent than ever that broadband should be treated on a par with the other utilities we access at work or home each day. The lockdown has simply accelerated a change that has been on its way for a number of years. Now it is lifting, things won't return to the way they were before the crisis hit; it will be no surprise if more people permanently switch to this new way of life post-crisis. Social distancing measures will continue for some time after lockdown, and therefore the need for efficient broadband to work or stay connected to loved ones will continue.

Broadband alone is not enough; we need to future-proof our society amidst a digital-first world, and full fiber infrastructure is the way to do this. Through access to a fiber cabling infrastructure, buildings are able to evolve and adapt as technology does the same. A joint effort is required from the government, service providers and end users to realize its full potential and allow people to truly thrive. We are already seeing broadband redefine the perception of essential utilities, and we can certainly look forward to greater efforts to make seamless fiber broadband accessible for all. This will have an immediate as well as a long-lasting positive effect on the economy.
Meri Braziel, Chief Commercial Officer, Glide
What if I said you could connect all your cloud network services and share applications and data between them from the comfort of your living room recliner? Then, what if I told you this can be done instantly, without needing to deploy new hardware? This is exactly what a virtual cloud router allows you to do.

So, what is virtual routing? A virtual router (also generically called a vRouter) is a virtual network function (VNF) device that can be part of a legacy physical network or of an agile network functions virtualization (NFV) infrastructure. Software-based VNF devices deliver traditional hardware network appliance capabilities, such as SD-WAN gateways, cloud routers, virtual private networks (VPNs) and firewalls, on standard commercial-off-the-shelf (COTS) hardware. Virtual cloud routing allows you to seamlessly connect your on-premises, private cloud and public cloud environments using virtual networking capabilities, and to create hybrid cloud architectures that lower hardware costs and enable clouds to work harmoniously together when sharing applications and data.

Cloud Architecture 101

When we talk about cloud computing, there are three basic types – private, public and hybrid.

- A private cloud architecture delivers dedicated computing to one organization within your company.
- A public cloud architecture provides public cloud services that are shared across the enterprise and with other organizations (partners, customers) in addition to your own. These services are typically accessed via the public internet. However, private interconnection delivers direct and secure connectivity that bypasses the public internet, improving performance and reducing risk.
Virtual private connections can often save over 60 percent versus traditional connections over the public internet, while providing higher throughput.

- A hybrid cloud architecture can include:
  - At least one public and one private cloud.
  - Two or more private clouds or public clouds connected to on-premises physical or virtual infrastructure.
  - A physical or virtual bare metal as a service (BMaaS) platform connected to one or more clouds (public or private), where businesses can consume compute and/or storage services as needed.

Next, let's understand a few fundamentals about routers.

What is a router? A router is a networking apparatus that receives and forwards data packets (the basic communication units on a network). The router's job is to direct traffic over the internet; data such as a web page or email travels in the form of data packets.

What is a routing table? Routing tables contain a list of destinations and information about the network's topology. In IP (Internet Protocol) networks, virtual routing and forwarding (VRF) is a technology that allows multiple routing tables to coexist in a router and work simultaneously.

What are the three basic modes of a virtual router? Virtual routers operate in three basic modes: backup state, master state and initialize state. A physical router is a generalized router used for communication between the client and other networks on the internet; it transports IP packets based on the addresses present in the routing table. A virtual router is usually static, without any interactions with the other networks.

What types of routing protocols does virtual routing support? These protocols can be enabled simultaneously in one virtual router:

- Static routing – routes are configured manually.
- Dynamic routing – routes are learned in real time as the logical network changes.
- Multicast routing – used for one-to-many TCP/IP communication.
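To see what "a list of destinations" means in practice, here is a minimal longest-prefix-match lookup using Python's standard ipaddress module; the prefixes and next-hop names are made up for illustration.

```python
import ipaddress

# Hypothetical routing table: prefix -> next hop.
ROUTES = {
    "0.0.0.0/0":   "default-gateway",
    "10.0.0.0/8":  "private-gateway",
    "10.1.2.0/24": "branch-gateway",
}

def lookup(dst: str) -> str:
    """Return the next hop for the most specific (longest) matching prefix."""
    addr = ipaddress.ip_address(dst)
    best = max(
        (ipaddress.ip_network(p) for p in ROUTES
         if addr in ipaddress.ip_network(p)),
        key=lambda net: net.prefixlen,
    )
    return ROUTES[str(best)]

print(lookup("10.1.2.7"))   # branch-gateway (the /24 wins over the /8)
print(lookup("10.9.9.9"))   # private-gateway
print(lookup("8.8.8.8"))    # default-gateway
```

VRF, mentioned above, amounts to a router keeping several independent tables like this one and choosing which table to consult per interface or tenant.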
Virtual cloud routing enables network modernization that accelerates the fast and secure deployment of hybrid multicloud architectures. There has been a significant commitment to a hybrid multicloud strategy during the global COVID-19 pandemic, as companies see it as a necessary choice for supporting remote workers and future growth, given ongoing systems shortages and facility lockdowns.

Hybrid multicloud environments balance corporate control of IT resources distributed across clouds and deliver them closer to users at the edge. And when you're moving crucial applications and data to the cloud, the most effective solution is one that enhances or replaces legacy network architectures and scales along with your business.

How deploying virtual cloud routing optimizes your network, in minutes

Network Edge services from Equinix deliver a network automation marketplace for VNF devices from leading vendors, which can be deployed in global metro locations on Platform Equinix®. Network Edge delivers agile and scalable virtual network services on a modular digital infrastructure platform developed to interconnect cloud, SaaS and edge services. It enables the deployment of virtual network routers and other VNF devices within an Equinix IBX data center in minutes, removing the requirement for costly dedicated hardware and making cloud-to-cloud routing fast and easy. Network Edge also provides direct and secure access to networks, clouds and IT infrastructure via Equinix Fabric™, a software-defined interconnection service that allows organizations to privately connect their IT infrastructure to service providers around the world on Platform Equinix.
It also enables hybrid multicloud architectures that leverage multiple cloud types and providers, eliminating single-vendor lock-in.
…focuses on ethics specifically related to data science and will provide you with the framework to analyze these concerns. This framework is based on ethics, which are shared values that help differentiate right from wrong. Ethics are not law, but they are usually the basis for laws. Through this course you will learn who owns data, how we value different aspects of privacy, how we get informed consent, and what it means to be fair. The course runs for 4 weeks and covers material beginning from the very basics, including a discussion of what we are investigating when we engage in ethical studies. Building on this foundation, the course goes on to cover data ownership, privacy, algorithmic fairness and other topics which will be reviewed in more detail in coming weeks. Each week’s material is split into modules and each module ends with a short quiz. A peer-graded assignment is conducted at the end of the course. The introduction of the course states the following: The question that we should be thinking about, is should we do everything that’s possible [with data]? As this is a shorter MOOC, I will review it in its totality once it has ended, instead of every two weeks as I am doing for The Analytics Edge. I am looking forward to the material being engaging and novel, as it concerns a lot of the softer issues around the data revolution taking place at the moment.
There are no limits to outdoor fun, and the warm summer weather gives kids a chance to enjoy the fresh air, outdoor activities, and friendly competition with their siblings and friends. But the outdoors can have some risks for kids. With summer approaching and more parents working from home, childcare and school resources may be limited. You need to find ways to keep your kids active and safe. As a parent, it is a challenge to keep your kids safe outdoors when you can’t protect them from every danger on your own. However, you can follow these safety tips for kids and let them enjoy themselves in the outdoors. How Kids Can Benefit from Playing Outdoors Kids need to spend more time outdoors, but that time is increasingly spent in front of screens. In the United States, the average child spends just four to eight minutes of playtime outside, compared to seven hours of screen time. Getting outside is crucial for a child’s social and mental development. One study found that kids who spend more time outdoors were less likely to develop attention and behavioral issues. There is no doubt that outdoor play is important for kids. The key is to balance keeping kids safe and letting them take advantage of the benefits of being outdoors. It is more important than ever to give kids a chance to enjoy being outside in a safe environment. 1. Set basic safety rules for outdoor play Create safety guidelines with your kids before the summer arrives. Allowing them to participate makes them more likely to listen and follow the rules. Write them down on a piece of paper and place them someplace where they can be seen. Before inviting guests over, let them know the rules of your home’s outdoor play area. 2. Never let kids play rough with each other When playing on slides, swings, and playset walls, kids need to be aware of their surroundings and watch out for each other.
Remind them not to chase, push, and wrestle with each other when playing around the outdoor playset. “Keep Your Hands to Yourself” is an effective rule to establish before outdoor play. 3. Put away any equipment from the play area After they have finished playing with them, make sure your kids store their bikes, scooters, sports gear, and other outdoor play equipment in the garage or another place in the house to prevent any trips and falls. Make it easier for your kids to pack up independently by having a designated storage space, such as a hanger or container, for each item. 4. Keep your kids in your line of sight when they are outdoors Tell your kids not to venture into the shed and any other area where you can’t watch them. Explain how seeing them can help in case they get injured, and you can reach them right away. 5. Make use of fencing Older kids can visit their friends’ houses nearby and play sports outside. However, younger kids need secure places to play. Install a fence around your yard so they do not wander on their own and can play safely. If you have a pool, it is dangerous to leave your kids alone near it, so make sure you fence the pool as well. According to the Building Officials and Code Administrators International, Inc. (BOCA), the minimum height for a pool fence is 48” with the latch at least 54” from the ground, and the fence must be at least 40” away from any object that kids could climb. 6. Remove any potential hazards Anything dangerous should be put away, such as gardening equipment, ladders, and chemicals. Watch out for natural hazards like poison ivy, low-hanging branches and holes, as well as animal droppings. If you discover any insect nests like fire ant mounds and wasps’ nests, remove them as soon as possible. 7. Place signs on your property With these safety tips for kids in mind, consider signage on your property that warns drivers and others that kids are at play for extra security. 8.
Install an outdoor surveillance system Modern technology makes it possible for parents to watch their kids as they play outside. If you are working from home or need to finish some indoor chores but still want to let your kids play outdoors, you can monitor them with outdoor cameras that transmit a live feed to your phone or computer. 9. Check in regularly with reminders Set reminders on your phone and smart devices like Google Home Mini to check in on your kids. Your kids can also get their own devices with their own alarms to remind them to check in with you. 10. Use tracking devices Although you may have provided them with safety tips for kids, it’s best to also consider buying a tracking device to track their movements when you can’t be with them. This way, you feel more secure about where they go, and these devices can alert you if your kids are in danger. 11. Make sure your kids are prepared for the outdoors Dress your kids in weather-appropriate clothing and proper footwear. Playing outdoors requires safety equipment like helmets, wrist guards, and knee and elbow pads. Ensure they are in working order and your kids use them properly. Have them apply SPF30 sunscreen and insect repellent if they will be in a bug-infested area. 12. Keep your kids hydrated with enough fluids Give your kids refillable water bottles when they play outside, especially when it’s hot. Remind them to drink water every 20 to 30 minutes, so they won’t get dehydrated. Let your kids be aware of mild to moderate dehydration symptoms that include a dry mouth, nausea, headache, and no tears while crying. 13. Speak clearly to your kids about safety Since their developing brains do not process risks and threats the way adult brains do, talking about safety with your kids can seem challenging. Simply issuing blunt warnings about staying safe is unlikely to work on its own. You also don’t want to demoralize them to the point of being scared to go outside.
Discuss other safety measures, such as wearing a helmet and stopping their bike correctly, so the conversation flows naturally. When you clearly explain your safety concerns to your child in short but understandable phrases, they will be more comfortable asking questions and understanding your responses. 14. Create an injury prevention plan with your kids Even with the right equipment and safety measures, sometimes accidents happen. Talk with your kids about what to do if they get injured and how to get medical attention. Maintain a well-stocked first aid kit alongside a list of essential contacts, including your healthcare provider. 15. Teach your kids how to be safe around the roads If your kids play on the roads outside of your home, teach them about road safety. They need to be taught to look both ways before crossing the road. If they are younger, tell them not to cross the road unless they are with an adult. Keeping kids safe on the road can reduce the chance of a car accident. 16. Talk to your kids about stranger danger Every parent fears for their kids’ safety if a stranger approaches them and they become victims of a heinous crime. Teach your kids never to talk to strangers they don’t know and to get help if they are in an uncomfortable situation. Keep in mind your kids may not always remember the importance of stranger danger. One way to keep them safe is to use an outdoor home surveillance system or have them play outside with a buddy. Get Your Kids Involved in Their Outdoor Safety Playing outside is crucial for your kids’ development and wellbeing, but that doesn’t mean there aren’t risks outside. You can build your kids’ confidence and independence when you keep the line of communication open and give them the tools and tips to be responsible for their own safety as well. Being outdoors can be fun for everyone, especially when you take precautions to ensure your kids’ safety and help them make lasting memories.
Most attempts to stop smoking are unsuccessful in the long term, even with smoking cessation methods such as nicotine replacement therapy. Penn State researchers are looking at how reward processing and working memory may determine why smokers choose to smoke again after trying to quit. According to Charles Geier, associate professor of human development and family studies and the Dr. Francis Keesler Graham Early Career Professor in Developmental Neuroscience, reward processes and working memory jointly contribute to value-based decision-making. “While previous studies have shown altered reward and working memory function are independently associated with nicotine exposure, little is known about the effects of nicotine or nicotine withdrawal on the joint function of these systems.” Geier, who is also a Social Science Research Institute co-funded faculty member, is interested in how reward processing interacts with cognitive control systems in the brain, such as working memory and inhibitory control. “We are particularly interested in smoking because nicotine has widespread effects on the brain, including effects on cognition and, importantly, one’s sensitivity to both drug and non-drug rewards,” he said. “Knowing more about how these processes interact is important to better understand the decisions people make after exposure to nicotine, such as choosing whether or not to continue smoking after a quit attempt, and thus can help inform smoking cessation strategies.” In the study, the working memory of 18 daily smokers was tested on two separate occasions. In one session, participants were tested after smoking normally, and in another, participants were tested after at least 12 hours of smoking abstinence. In both sessions, participants completed a working memory task on a computer in which they were asked to focus on a fixation cross, but be aware of a flashing dot in their peripheral vision.
After a short time, the fixation cross disappeared and the participants had to move their eyes to the remembered location of the dot. Geier and colleagues used eye tracking to assess precisely where the participants were looking and when they shifted their gaze. In groups who had smoked regularly before testing and were being monetarily compensated for quickness and accuracy, researchers noticed an improvement in working memory; this group more accurately remembered where the flashing dot appeared. Meanwhile, participants who had abstained from smoking showed no similar increases in accuracy when being monetarily compensated. “Our results indicate that during a state of nicotine deprivation, participants failed to receive the same reward-related ‘boost’ to their working memory,” said Geier. “We hope these results shed light on how rewards affect cognitive systems such as working memory, which is critical for our understanding of motivated decision making. These data also extend our fundamental understanding of smoking’s effects on core affective and cognitive processes. A next step is to test participants on similar tasks within the functional MRI scanner to investigate the nature of motivated cognitive control at the neural circuit level.” Other researchers on the project are Nicole Roberts, doctoral student in human development and family studies at Penn State, and David Lydon-Staley, a former graduate student in the Geier lab and current postdoctoral researcher at the University of Pennsylvania. Funding: The research was supported by Penn State’s Social Science Research Institute, the Hershey Cancer Institute, and the Clinical Translational Science Institute, along with additional support from the USDA National Institute for Food and Agriculture and the National Institute on Drug Abuse. Source: Penn State
Researchers have developed a new version of a hardware-based attack that can compromise an Android phone through the browser using a technique that can flip bits in memory by causing small electrical charge leaks in a chip. The attack is an innovative twist on a known method, but it likely isn’t an imminent threat for most Android owners. “These attacks bypass state-of-the-art mitigations and advance existing CPU-based attacks: we show the first end-to-end microarchitectural compromise of a browser running on a mobile phone in under two minutes by orchestrating our GPU primitives.” The researchers said that most typical users likely won’t see this kind of attack targeting them anytime soon. There is a long list of other attack vectors that are simpler to execute and take far less effort. “For general users I believe for now this is not a real threat. The likelihood of an attacker exploiting such an advanced exploitation vector is relatively low as of now. It all boils down to simple cost function for the attacker. There's no point to waste time in developing such a complex exploit when you can use lower hanging fruits,” said Pietro Frigo, one of the authors of the paper. “However, things are changing rapidly. Until last year everyone believed that a remote Rowhammer attack would have taken hours. Now we've proven that it is possible to do it in few minutes (best case scenario under 1 min) on mobile platforms where it was considered completely unfeasible. So this should be seen as the proof of concept that it actually is.” “As of now there's no software-based mitigation that completely stops the attack." Rowhammer attacks are highly technical and reliant on the ability to access certain areas of memory over and over again. By doing so, an attacker, under certain circumstances, can cause small electrical charges to leak from the memory locations around a target location, which can in turn cause that bit to change its state.
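The charge-leak mechanism just described can be pictured with a toy model: each activation of an aggressor row drains a little charge from a neighboring victim cell, and if the cell drops below its readout threshold before the periodic DRAM refresh tops it up, the stored bit flips. This is a deliberately simplified, illustrative simulation — the constants are arbitrary and real disturbance rates depend on the specific DRAM chip; it is in no way the GLitch attack itself.

```python
# Toy simulation of the Rowhammer disturbance effect (illustrative only).
# All numbers below are invented for the example, not measured values.
FULL_CHARGE = 1.0
LEAK_PER_ACTIVATION = 0.00002   # charge a neighbor loses per row activation
FLIP_THRESHOLD = 0.5            # below this, the cell reads out the wrong bit
REFRESH_INTERVAL = 64_000       # activations between refreshes (arbitrary)

def hammer(activations: int) -> bool:
    """Return True if the victim cell flips before a refresh rescues it."""
    victim_charge = FULL_CHARGE
    for i in range(1, activations + 1):
        victim_charge -= LEAK_PER_ACTIVATION   # aggressor row activated
        if victim_charge < FLIP_THRESHOLD:
            return True                        # bit flip: stored 1 reads as 0
        if i % REFRESH_INTERVAL == 0:
            victim_charge = FULL_CHARGE        # periodic refresh restores charge
    return False

# With these toy numbers, roughly 0.5 / 0.00002 = 25,000 activations drain
# the cell, well inside one 64,000-activation refresh window.
print(hammer(30_000))   # enough hammering to flip the bit before refresh
print(hammer(10_000))   # too little leakage; the bit survives
```

This also shows why hammering speed matters: the attacker must squeeze enough activations between two refreshes, which is why the cited speedup from hours to minutes is significant.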
The attack that the team at Vrije University developed allows them to use Rowhammer to exploit a remote user who visits a malicious web site in Firefox on an Android phone. The GLitch attack, as the researchers call it, uses the WebGL library to help build what they call “timing primitives” to get past the security defenses on the phone’s chips, and then determine which specific memory locations they want to target. In terms of defenses, Frigo said fixing the Rowhammer problem in general and the attack his team developed requires hardware-based mitigations. Both Mozilla, which makes Firefox, and Google, which maintains the Android code base, have made some changes to mitigate the new exploit, but they don’t completely prevent the GLitch attack. “As of now there's no software-based mitigation that completely stops the attack. Both Firefox and Chrome deployed mitigations against the timing side-channel attack first step of our attack. These consisted in disabling a specific timer extension (EXT_DISJOINT_TIMER_QUERY) and partially fixing the WebGL specification to make it harder to build high precision timers,” Frigo said. “However, for now no step has been taken to make it impossible to trigger bit flips from the GPU. We're still communicating with Google about possible solutions. Bottom line, Rowhammer should be fixed in hardware. And while there are some proposed mitigations in hardware (TRR specifically for rowhammer and ECC more in general for memory errors) their effectiveness still need to be proved.”
What is Edge Computing? You might have heard of this terminology before but let’s revise it once again. Edge computing is all around us – from the vehicles that you drive to the mobile phone that you use. Edge computing is exactly what it sounds like – computing that takes place at the “edge” of corporate networks. Here, the “edge” can be understood as the place where end devices access the rest of the network, i.e. mobile phones, laptops, industrial robots, sensors, etc. In short, this concept works on bringing computing services closer to the device or data source where it’s most needed. Why is Edge Computing needed when Cloud Computing is available? This is a question often asked by professionals from the IT sector. Word has gotten out about how edge computing works for organizations that need to avoid latency and reduce operational costs. Cloud computing is also trending, but there are a few things that make edge computing stand right beside it. In cloud computing, files are stored in remote data centers and can be accessed anytime from any device. Edge computing is similar to cloud computing in its key purpose. The only difference is that cloud computing uses remote data centers for storage while edge computing makes partial use of local drives. Let’s look at situations when edge computing wins over cloud computing for businesses. Situation 1: The network doesn’t have enough bandwidth to send files to the cloud. Situation 2: When business owners are hesitant to keep sensitive data on remote storage. Situation 3: If the network isn’t reliable enough to access files in online mode. Edge computing can help businesses in self-driving vehicles, healthcare, manufacturing, finance, etc., in dealing with these situations. Also, edge computing is favored over cloud computing for businesses operating in remote locations where there is limited or no connectivity.
Such businesses appreciate having local storage and so edge computing becomes an ideal choice for them. Edge computing is a fast-growing market, with forecast global revenue set to reach nine billion U.S. dollars by 2024. Edge Computing Vs. Cloud Computing - Which One’s Better? This has been a debate among IT professionals for quite a while now. Let me tell you that both edge computing and cloud computing have their pros and cons. So it’s all about which business each one’s pros will benefit. Let’s go through them one by one.
The factor of latency: This particular advantage of edge computing is a big one. It provides a faster and smoother user experience, while cloud computing solutions go through delays between a client request and a cloud service provider’s response.
Security of data: This is also a crucial one. Edge computing allows businesses to have control over their data by storing the key information locally, while cloud computing stores data in a third-party remote data center.
The factor of scalability: Edge computing allows storing an increasing amount of data both in remote centers and on the edges of networks, while cloud storage solutions lack this flexibility.
The factor of versatility: Edge computing finds a balance between traditional centralized cloud data storage and local storage, while cloud computing solutions also provide a great deal of flexibility but only when you have an internet connection.
Power supply: Edge computing falls short here: if a device is cut off from its electricity source, it won’t be able to process data in the local network. This challenge can be addressed with alternative energy production devices, while cloud computing solutions don’t face this issue.
The factor of physical space: Edge computing requires businesses to have dedicated physical space for the local servers to accommodate data, while cloud managed services run completely on remote servers.
Maintenance: A business using edge computing must monitor and repair local servers whenever needed, which is an added cost of maintenance, while cloud computing solutions are hosted on the internet through remote servers and don’t require physical maintenance on the business’s side.
This list of factors can help you determine what works best for your business. For example, if data security is important for you, go for edge computing, but if you lack the physical space to accommodate local servers, cloud computing might be the right choice for now. Examples of Edge Computing Autonomous Vehicles: Self-driving vehicles use a lot of what edge computing offers. This can eliminate the need for drivers as the vehicles will move safely without any human input. It offers more than just running the vehicle and plays a crucial part even now in safety features like anti-lock brakes, traction control, collision avoidance systems, and cruise control. It’s all handled by edge computing in real time. Moreover, it has prevented 276,000 accidents on the road in the US alone and will prevent 2.5 million accidents by the year 2030. In-hospital Patient Monitoring: The majority of healthcare centers are dependent on IoT devices, and they require a network that is real-time and has zero latency. This is where edge computing comes into the picture. Edge computing can help with patient monitoring devices, video capture technology, and smart health wearables for monitoring heart rate, blood sugar level, etc. This brings a significant change in the healthcare industry and can save more lives. Smart Homes: We all know how smart homes operate on IoT devices where data is usually sent to a remote server. Well, this introduces uncertainty regarding privacy, security, and latency. Edge computing brings the processing and storage closer to the smart homes, which wipes out that uncertainty.
Streaming Services: Video streaming and game streaming are among the most popular and highest bandwidth-consuming media on the Internet. They require high processing power and high-speed connectivity, but are limited by delay intolerance and excessive bandwidth usage. Edge computing can introduce new technologies, such as cloudlets, micro data centers, fog, and mobile edge computing, that bring computational and storage resources closer to the network source, resulting in minimized latency. Looking to the Future Though edge computing isn’t the only solution for the future, it’s about creating the best of both worlds. When a business starts to leverage both edge and cloud computing according to its needs, that is when the future looks bright. It is also not recommended to delegate all your data to edge computing, as much of it can be deleted due to limited capacity. And even if it doesn’t get deleted, managing a network of data can be a nightmare for organizations. Moreover, it’s expensive and requires advanced infrastructure. By utilizing the data-gathering potential of edge computing with the processing power of cloud computing, organizations can maximize their benefits from both approaches. There are also cloud service providers that can help you with it. For example, if you are a resident of Los Angeles, you can get your set of managed cloud services in Los Angeles from a provider. Or if your business is around Redding, you can consult an IT services provider in Redding to know more about both approaches. The Final Word The debate about which is better, edge computing or cloud computing, will go on. It’s difficult to choose one over the other, but to come to a fair conclusion, Apex is here to help you know more about both approaches.
The Ecessa appliance requires use of an IP address from the IP subnet of each WAN connection. To accommodate a variety of network environments, the Ecessa appliance can function in three different modes: NAT, Routed, and Translucent. The purpose of this article is to provide a brief description of each of these operating modes. For additional information, please refer to the Basic Setup section in the online Ecessa manual. NAT mode – similar to a traditional firewall, NAT mode configures the WAN interface(s) with the appropriate WAN subnet(s) while the LAN interface and all internal network devices connected to the Ecessa LAN are configured with a private network. Depending on the existing network configuration, this mode may require additional configuration changes during installation as existing network device settings are modified to reflect the new private network. Situations where NAT mode may be recommended include, but are not limited to: WANs using point-to-point subnet masks (255.255.255.252 - CIDR /30); or instances when all available WAN addresses are assigned to hosts or services with none available for the Ecessa appliance’s use. Routed mode – a semi-transparent mode, Routed mode allows the internal network devices to continue to use addresses from the WAN subnet; however, it may require additional configuration changes during the installation as the existing network devices are modified to use a different default gateway address. This option does have some caveats: - The existing WAN subnet mask is at least 29 bits (255.255.255.248 – CIDR /29) - The existing WAN has four contiguous addresses that fall within a /30 subnet - The gateway address on the firewall or the actual gateway (ISP) device address can be changed Situations where Routed mode may be recommended include, but are not limited to: WANs using point-to-point subnet masks with a separate routable subnet.
The point-to-point WAN is sometimes referred to as a “hand-off” for the routable “LAN” subnet. Translucent mode – a transparent mode, Translucent mode allows the Ecessa appliance to use only a single IP address from the routed WAN subnet. This address is configured on both the WAN and LAN interfaces while the existing network devices behind the Ecessa appliance continue to use the same IP configuration and default gateway. Devices that are configured with an IP address within the same range “pass through” the PowerLink without requiring NAT while the Ecessa appliance provides load-balancing and WAN failover transparently. Translucent mode is available in firmware versions 8.0 and later. Translucent is the preferred operating mode, as it uses only a single IP address from the WAN subnet and minimizes firewall and gateway changes. It is especially useful for installing an Ecessa appliance in an environment where the firewall (and possibly other devices) is already configured to use an IP address within the WAN range. NOTE: Although multiple WAN lines can be configured to use Routed or Translucent mode, it is typically recommended that additional (secondary) WAN lines be configured for NAT mode.
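The subnet-size constraints behind these mode recommendations are easy to check with Python's standard `ipaddress` module: a point-to-point /30 has only two usable host addresses (one per end of the link, hence nothing spare for an extra appliance), while a /29 leaves room for additional devices. The example addresses below are from the TEST-NET-3 documentation range, not real WAN assignments.

```python
import ipaddress

def usable_hosts(cidr: str) -> int:
    """Count usable host addresses in a subnet (excludes network and
    broadcast addresses, which .hosts() omits for us)."""
    return len(list(ipaddress.ip_network(cidr).hosts()))

# A /30 "point-to-point" WAN: both usable addresses are already taken
# by the two ends of the link, so NAT mode is the likely choice.
print(usable_hosts("203.0.113.0/30"))   # 2 usable addresses

# A /29 or larger leaves spare addresses for Routed mode's requirements.
print(usable_hosts("203.0.113.0/29"))   # 6 usable addresses
```

This is why the article singles out 255.255.255.252 (/30) WANs as NAT-mode candidates: there is simply no unassigned address left for the appliance itself.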
How to create a simple network topology using Cisco’s VIRL For those of you who are interested in investigating Cisco’s VIRL product, I have created a blog to show how to create a very simple network topology. You may already use other lab-it-up solutions such as GNS3, Packet Tracer, Boson’s NetSim, or something else. I promised that I would blog about VIRL once I had a little time to explore it myself, so let’s begin. For instructor-led training, check out our complete Cisco CCNA Certification schedule. First of all I will assume that you have already purchased your copy and installed it, because I am skipping right to the part where you actually launch the product and use it. If not, that’s okay – you may want to review this blog first and then make a purchase decision. I am running VIRL using VMWare Workstation (NOT free). The Linux box included comes with VMMaestro on its desktop. VMMaestro is the interface for interacting with the lab environment. Unfortunately, perhaps due to my machine’s high resolution and as-yet unresolved issues with Windows 8.1, it is difficult to view much of the landscape inside VMMaestro. Fear not, an alternate solution is to install VMMaestro on your host box and run it from there – as long as you point it to the IP address of the Linux box inside VMWare Workstation. From there I will launch VMMaestro on my laptop. The main screen is shown in figure 1. As is readily apparent, the icons and words are tiny; many attempts at adjusting resolution, making items larger or smaller, and beating the screen and keyboard were all fruitless. I will just have to live with it until the screen resolution issues are corrected. We now need to create a new topology project. There is an icon you can click, but if you don’t know where it is, it is hard to find without a magnifying glass. For the daring, it is shown in figure 2: It is that little tiny manila-colored folder icon at the far left.
If you like menus, click File, then New, then Topology Project. Regardless of which way you get there, you will now be presented with the ‘New Topology Project’ window, shown in figure 3: I will name my sample topology something easy to track – NewTopologyProject. Type that in the ‘Project name’ box and click Finish. You will see the result in the upper left of your screen – I will zoom in a little so it can be seen. Note figure 4: Now to actually create a topology. Once again, there is a quick way – use the icon. Here is a screenshot of which icon and its location: This gets me the Create a new .virl file window shown in figure 6: In the File name box, since topology.virl is already highlighted, I can create and choose my own file name. For this example I will name the file NewTopologyFile.virl. Note that the file name MUST have the .virl extension and you must type it as it does not add it for you. The Finish button will be grayed out until you have added the .virl to the file name. The left side of the screen looks like what is shown in figure 8 (if not, verify that in the upper right, you have selected the Design tab): Now when you click inside the topology area (called a ‘canvas’ in VIRL documentation), you will see the ‘Properties’ window. This is shown in figure 9: We want to click on the Topology tab, also shown in figure 9. Within this area you will see a box labeled ‘Validation Rules.’ Select VIRL from the drop-down, as shown in figure 10. In the Management Network, select Private simulation network, shown in figure 11. Now select the AutoNetKit tab just under the Topology tab. In this pane, we will enable CDP in this example by selecting ‘true’ from the drop-down, as shown in figure 12. In the IP Address Family box, select ‘dual_stack’ as shown in figure 13. 
While you are here, take note of some of the other aspects of this area of VIRL – such as the fact that much of the scenario is pre-configured for you (IP address space, OSPF info, link info, etc.), which will be useful when it comes time to automatically generate your device configurations. Yes, I did say automatically!

Now we will place a couple of IOSv nodes into our topology. At the upper left, in the Palette pane, click IOSv node to highlight it. Unlike GNS3, you don’t drag it into your topology; merely selecting it is sufficient. Then mouse over into the topology pane and click to place a node. Clicking again places another node. Let’s place two of them, as shown in figure 14. When you have dropped both of them, press Esc to stop dropping nodes.

Now to cable up our gear. On the left side, above your node choices, click on Connect. Click on one node, drag a connection to the other node, and click once more. Figure 15 shows the end result – the interface labels are placed automatically. Once finished connecting, press Esc.

Let’s stop here and let this sink in. You may want to practice just this part of VIRL until it gets to be natural, because if creating topologies to lab up sample network scenarios is painful, you will be less likely to do it. In my next blog, I will pick up here and show how to configure our nodes, view their consoles, and verify that the scenario is functioning as we desire. As always, if you have any suggestions or comments, please share … Until next time.
Quantum Computing: Limits, Options And Applications (Forbes.com)

Semiconductor companies have used various techniques to keep Moore’s Law alive in recent years. Some in the computer industry are moving away from central processing units (CPUs) to more powerful and purpose-built graphical processing units (GPUs). All of this raises a few questions: How long can we continue to improve? Do we have to do something fundamentally different? How do so-called quantum computers figure into this process?

Kazuhiro Gomi, a Forbes Technology Council member, writes, “We are not looking at an either/or scenario.” Quantum will not displace classical computing. Rather, it should be utilized as an accelerator that takes up specific kinds of tasks or applications. Classical systems will be used for a long time to run task scheduling and the human-machine interface. Both have strengths, weaknesses and best use cases.

One approach involves the use of quantum gates, basic quantum circuits that operate on a number of qubits. While sensitive to noise and gate error, quantum gate-based systems are suited to discovering hidden patterns. Yet they are unlikely to be commercially available for many decades (which gives cryptographers time to develop post-quantum encryption algorithms). Another quantum gate-based approach is emerging with different characteristics: using the quantum approximate optimization algorithm (QAOA), this type of system is relatively robust against noise and gate error and is suited for combinatorial optimization.

Then there is the so-called Ising model, named after the physicist Ernst Ising, who was a pioneer in explaining phase transitions between magnetic states. This model has several variants, one of which is similar to QAOA-based simulation in its merits, limits and roadmap; but instead of using gate-based quantum processors, it builds a network of artificial “spins” using coherent Ising machines.
It is important to note that there is more than one quantum computing model, each with pros and cons, timetables for realization and optimal applications.
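To make the combinatorial-optimization angle concrete: an Ising problem asks for the spin assignment s_i ∈ {-1, +1} that minimizes the energy H(s) = -Σ J_ij·s_i·s_j. The brute-force sketch below (plain Python; the coupling matrix J is an invented toy example, not from the article) shows the kind of objective that QAOA systems and coherent Ising machines are built to approximate at scales where exhaustive search is impossible:

```python
from itertools import product

def ising_energy(spins, J):
    # H(s) = -sum over pairs i<j of J[i][j] * s_i * s_j (no external field)
    n = len(spins)
    return -sum(J[i][j] * spins[i] * spins[j]
                for i in range(n) for j in range(i + 1, n))

def ground_state(J):
    # Exhaustive search over all 2^n spin assignments; real Ising
    # machines approximate this minimum for problems far too large
    # for brute force.
    n = len(J)
    return min(product([-1, 1], repeat=n), key=lambda s: ising_energy(s, J))

# Ferromagnetic 3-spin chain: positive couplings favor aligned neighbors.
J = [[0, 1, 0],
     [1, 0, 1],
     [0, 1, 0]]
best = ground_state(J)
print(best, ising_energy(best, J))  # aligned spins minimize the energy
```

For n spins the search space has 2^n configurations, which is exactly why specialized quantum or quantum-inspired hardware is attractive for large instances of this problem.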
Augmented reality is becoming the new competition ground for major tech companies. All five big players, Google, Facebook, Apple, Amazon and Microsoft, are making big investments in AR projects. Some of these projects include smartphones with AR-enabling hardware, AR headsets and AR application development platforms. But is all this investment aimed at enabling users to play the next version of Pokemon Go, put cat mustaches on their selfies or preview how furniture will look in their living room? I believe the promise of AR is much more than trivial or entertaining use cases—in other words, “nice to haves.”

The fact that makes augmented reality a bigger deal than its better-known cousin virtual reality is that instead of immersing you in a world that is completely detached from the one you’re in, it “augments” the immediate space around you with bits of information or graphical data. This can be a huge deal for professional workers, as I learned during a recent journey I embarked on to see how the hands-on workforce is using augmented reality gear to improve its speed and efficiency.

Access to information

The evolution of computing devices and internet connectivity has transformed the way we access information. Geographical barriers have been erased thanks to broadband connections and cloud platforms. A computing device and an internet connection are all you need to access an endless sea of information and applications. With a smartphone or a laptop, and a mobile data plan or even a free Wi-Fi connection at a local library, you can read your emails, access cloud applications and perform business tasks. However, when it comes to on-site work, access to that information is still restricted to a computer display. In most cases, this can be a barrier.
For instance, in an assembly line, workers still have to use printed manuals or information terminals (such as laptops), which they have to go to in between tasks in order to obtain information about the next step of their work. At the very best, workers have a smartphone or tablet they carry around with them, which they use to access the information and applications relevant to their task. However, even that requires them to abandon the task and interact with a computing device.

Augmented reality provides a frictionless way to obtain information and interact with applications that was previously unavailable to users. Smart glasses (such as the Google Glass) can project relevant information into the field of view of wearers, keeping their hands free to perform tasks and obviating the need to interrupt their work. This can be a worker at a warehouse packing an order list, or at a factory assembling a complicated device such as the wire harness of an airplane, or a maintenance officer checking on a vehicle out in the field. The devices can process voice commands or scan QR codes to launch applications or query for information, so interacting with them does not necessarily require the use of hands. The same technology helps workers capture information (pictures, video) during work and send it to the back end of the company without stopping their work.

This use of augmented reality, also called “assisted reality,” directly translates to an improvement in speed and a reduction in errors. Ironically, the Google Glass, which was shunned in the public consumer space, is becoming widely popular among the hands-on workforce due to the specific problems it solves.

Another field where augmented reality can be of use is getting help from experts. In most fields, expert assistants with long-time experience are a coveted asset. However, getting their help to on-site workers and operators has its own challenges.
One of the people I interviewed said their company had to fly in experts from Germany to maintain and reset their devices. Another said their assembly lines were in clean rooms, which required experts to suit up and go through a lengthy preparation process whenever they wanted to assist workers. AR glasses enable workers to stream a video feed of their line of sight directly to a remote expert while doing their work, getting a second pair of eyes on the job. Where companies are using this method, experts can guide workers through the steps of the task or provide them with visual aids, videos and more on their AR display.

Dealing with the effects of automation

While interpretations about the severity vary, most experts agree that artificial intelligence is disrupting employment. More and more jobs are being automated by AI algorithms, and the tasks that remain within the domain of human cognition require higher-level skills. This is a trend that will accelerate as AI developments pick up pace, making the landscape more fluid. Human workers will have to learn new skills and adapt to new tasks at a faster rate than before if they want to stay in the competition.

Augmented reality is one of the technologies that will help humans maintain their edge. Immediate access to information, made possible by assisted reality, can dramatically cut down the time required to master new tasks. A bigger promise lies in mixed reality, the more advanced form of augmented reality that embeds graphical objects into 3D space, as opposed to overlaying them on top of real-world imagery the way classic AR does. MR provides a more immersive way to learn tasks, reducing education time and improving the quality of learning. Some companies already use interactive MR models to help workers learn the steps of assembling parts in factories.
With MR, users will be able to directly view and show what a design or plan will look like in its final location. This advanced form of visualization may one day obviate the need for some skills, such as reading complicated maps. As we move forward, AR will evolve and become more integrated with other technologies. There are already cases where the combination of AR and IoT sensors is helping workers read measurements as they perform tasks, such as the amount of torque being applied to a bolt. In the not-too-distant future, we can imagine headsets that leverage computer vision algorithms to analyze the wearer’s field of view and autonomously understand and assist in tasks. While VR might remain a niche field, there’s a good chance that AR will become an integral part of everything we do.
The ocean of data keeps rising, and concepts such as big data are spawned in that ocean. That has led to statements to the effect that data is the new center of gravity of IT. Is technology really enabling an information-centric world, and if so, how does it tie to the more familiar and mundane world of operational IT? The rise of a concept called data-driven intelligence serves as the model for a new perspective, and we can contrast it with the more familiar application-driven intelligence model.

One of the earliest names for information technology was “data processing,” which encompassed the need for both data and processing power. However, the glamour of IT for many years was in application development. Applications (i.e., a processing- or computing-centric focus) ruled the roost. Throughout its life cycle from birth (creation) to death (deletion), most data remained within the control of applications, which tended to make digital what had been manual business processes. Applications that analyzed data after it had been created have also long existed (such as business intelligence and seismic processing), but these were a small fraction of practical IT uses.

Over time, that has changed into a snowball rolling down a snow-packed mountain. The rise of data warehousing, where data that was originally created for one purpose (such as fulfilling a sales order) was repurposed (creating a longer useful life for the data) to meet other business needs, is one illustration. The rise of the Web, where data exists that can be repurposed for numerous uses other than its creators originally intended, has helped to drive these new developments.

Differentiating Between Application-Driven and Data-Driven Software Intelligence

In his book “Reinventing Discovery” (well recommended, by the way), the author Michael Nielsen discusses data-driven intelligence and contrasts it with artificial intelligence and human intelligence.
He defines data-driven intelligence as the ability of computers to extract meaning from data, and differentiates it from artificial intelligence, which he says takes tasks that humans are good at and aims to mimic or better human performance (such as chess playing), and from human intelligence (such as our ability to process visual information). According to Nielsen, data-driven intelligence is complementary to human intelligence, solving different kinds of problems (big data, anyone?). In many instances, the combination of intelligences is useful.

That is a very useful and valuable perspective, but for our purposes, let’s examine what it means from an IT perspective (our focus will be a business rather than a scientific perspective). The following table puts application-driven vs. data-driven intelligence in perspective. Note that it excludes many other important types of software intelligence, such as operating systems and middleware, and instead focuses on what business uses to derive value and benefits.
Table: Comparing Application-Driven to Data-Driven Software Intelligence

| | Application-Driven Intelligence | Data-Driven Intelligence |
| --- | --- | --- |
| Primary Goal | Substitute application intelligence for human intelligence in managing a process | Extract meaning and knowledge from data |
| Description | Data is created and managed to fit the needs of the application; typically, the creation of data is part of a process using the application | The application is created and managed to fit the needs of the data, which may be (and likely are) created independent of the application |
| Illustrations | Supply Chain Management (SCM); Customer Relationship Management (CRM); Content Management; Online transaction processing systems in general | Big data; Data warehousing; Web search engines; Sensor-based analysis; IBM’s Watson or Apple’s Siri |

Source: the Mesabi Group, November 2012

Setting a Few Things Straight

Some things to note:

- Application-driven intelligence tends to create, read, update and delete data to fulfill an initial purpose, such as a workflow process to manage order processing, shipping and the collection of payments. In contrast, data-driven intelligence often takes already human- or machine-generated data and uses it for a secondary or additional purpose, such as performing e-discovery on e-mail files. Sensory information (such as meter readings) or machine/computer-generated information (such as logs or other information for the software-defined data center or software-defined storage) is created first and then analyzed by a downstream process (which may run in real time) as appropriate.
- There is nothing new under the sun. Data-driven intelligence (such as statistical analysis using techniques like regression analysis, linear programming, and simulation modeling) has been around for a long time, though more recently new concepts (such as data warehousing, online analytical processing, and data mining) have emerged.
The problem has been that terms such as advanced analytics, business intelligence, and big data have tended to be looked at as valuable by businesses, but as isolated IT islands. The totality of these developments does not get credit for exponentially expanding the role of IT’s data-centric focus.

- Yes, there are hybrids. Data-driven intelligence can be inserted into an operational system (such as retail sales transactions) to check whether a credit card is fraudulent, or at points within a supply chain process.
- Data-driven intelligence is an additive view that broadens our understanding; it does not replace application-driven intelligence. Let software intelligences continue to multiply and add to our understanding and the value that we derive from IT.

What Can a Data-Driven Intelligence World View Do For You?

There are some key benefits from thinking with a data-driven intelligence mindset:

- Being able to distinguish between an application-driven intelligence solution and a data-driven intelligence application is important because the development methodology is different. Although both can use agile development methods, there are a number of key differences. For example, Ken Collier in his book Agile Analytics discusses the difference between agile development for data warehousing and business intelligence and that for traditional applications. All in all, you have to know what is different (skill sets, methodology, resources, time frame) when building or using an application-driven intelligence solution vs. a data-driven intelligence application.
- You don’t have to worry about trying to fit a project into a particular definition. Big data is a hot topic, but what is it exactly? Doug Laney (now of Gartner) introduced the popular concept of volume, variety and velocity. This is a very powerful and useful idea, but it does not precisely define what is big and what is little. Moreover, size alone does not determine value.
Using a data-driven intelligence approach causes you to think about a project’s overall value and the software technology that needs to be applied. If the project seems to fill the bill of big data, then call it that. If it does not but still delivers value, go ahead. Be benefit-driven, not label-driven.

- Recognize that collectively data-driven intelligence is the engine that makes more data-centric IT possible. This collective perspective encompasses all the pieces and gives a better sense of the total value that results when viewing the world through a data-driven intelligence lens.

This is a short introduction to a broad subject and will require further discussion, both from a general perspective and using specific product illustrations. Application-driven intelligence tends to focus on operational business processes. Data-driven intelligence tends to finally fulfill the needs of management information systems (an old term that fell into disuse because early OLTP and other systems were really not management information systems) by aiding decision-making processes (operational, tactical or strategic), including both knowledge or information acquisition and direct decision making. All in all, leveraging a data-driven intelligence lens in our world view expands our perception of where the role of data-centricity is going in IT.
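As a minimal illustration of the "extract meaning from data" goal, consider repurposing order records, originally created by an application-driven (order-processing) system, for a simple trend analysis. The sketch below uses plain Python and invented numbers; it stands in for the regression-analysis techniques mentioned above:

```python
def linear_fit(xs, ys):
    # Ordinary least squares for y = a + b*x, with no external libraries.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b

# Hypothetical monthly order volumes, captured by an operational system
# for one purpose (fulfillment) and repurposed here for another (forecasting).
months = [1, 2, 3, 4, 5, 6]
orders = [110, 125, 138, 155, 168, 180]

a, b = linear_fit(months, orders)
forecast_month_7 = a + b * 7
print(round(forecast_month_7, 1))
```

The point is not the arithmetic but the direction of fit: here the analysis is shaped around data that already exists, rather than the data being created to serve the application.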
DNA is composed of four different “nucleotides,” which combine in different ways to provide genetic instructions for different outcomes. I like to think of it like binary machine code, where combinations of 0s and 1s define a program for a computer to execute. This is probably a common analogy, since scientists have been encoding digital data into organic DNA for a while now.

This is exactly what Tadayoshi Kohno at the University of Washington was thinking about when he and his team devised an experiment to encode a malicious virus in DNA — a virus that doesn’t compromise humans, but computers. While much of scientists’ work with DNA happens with organic materials, some of it requires computers to decode the DNA information into a digital format, and this is where the research team focused their attack.

The team admits that they created the “best possible environment” in which to test their theory. They changed the source code of the fqzcomp DNA compressor to include a fixed data buffer which would be vulnerable to a buffer overflow attack. The next step was to encode the buffer overflow data into synthetic DNA. Encoding digital information into DNA that uses only four nucleotides, with physical restrictions on the combinations, is challenging and took many iterations, but the team was eventually able to come up with a viable formula, and it was sent to Integrated DNA Technologies for synthesis.

When the vial of DNA was received from the synthesis service, the team had a computer program vulnerable to the exploit encoded on that DNA, and the test was ready to go. They sequenced the DNA samples using the known-vulnerable fqzcomp compressor, and 37% of the time the attack was successful — the buffer overflow compromised the computer system and could have granted unauthorized access to the perpetrators.
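A common scheme for encoding digital data in DNA maps each 2-bit value to one of the four nucleotides, so one byte becomes four bases. The sketch below illustrates the idea in plain Python; the particular base ordering is an assumption for illustration, and the synthesis constraints (GC content, repeat avoidance) the UW team wrestled with are not captured here:

```python
BASES = "ACGT"  # illustrative 2-bit mapping: A=00, C=01, G=10, T=11

def bytes_to_dna(data: bytes) -> str:
    # Emit one base per 2-bit pair, most significant pair first.
    out = []
    for byte in data:
        for shift in (6, 4, 2, 0):
            out.append(BASES[(byte >> shift) & 0b11])
    return "".join(out)

def dna_to_bytes(seq: str) -> bytes:
    # Reassemble four bases into one byte.
    vals = [BASES.index(b) for b in seq]
    return bytes(
        (vals[i] << 6) | (vals[i + 1] << 4) | (vals[i + 2] << 2) | vals[i + 3]
        for i in range(0, len(vals), 4)
    )

payload = b"\x90\x90"  # hypothetical two-byte exploit fragment
strand = bytes_to_dna(payload)
print(strand)  # prints "GCAAGCAA"
```

Note the quote below: because a strand can be sequenced in either direction, a real payload must survive being read backward, which is why the researchers suggest crafting future versions as palindromes.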
“[The] attack was fully translated only about 37 percent of the time since the sequencer’s parallel processing often cut it short or—another hazard of writing code in a physical object—the program decoded it backward. (A strand of DNA can be sequenced in either direction, but a code is meant to be read in only one. The researchers suggest in their paper that future, improved versions of the attack might be crafted as a palindrome.)” reads the Wired magazine article.

Is this a viable attack? It depends on many factors. The bad guys would have to compromise software used in the DNA sequencing and analysis stages, as these researchers did, or find existing vulnerabilities in the software currently being used (not hard to imagine when you realize how many vulnerabilities exist in all software). The bad guys would also have to arrange for the target to receive a sample of the specially crafted malicious DNA, or find a vulnerability that could be exploited by known samples that did not require modification. There are a variety of ways the DNA processes could be compromised, but for now, they are all complex with a low probability of success. It will take a lot of (financial) motivation or time for malicious researchers to make these attacks viable. But we know it is possible, so we can start to think about the implications now.

About the author: Steve Biswanger has over 20 years experience in Information Security consulting, and is a frequent speaker on risk, ICS and IoT topics. He is currently Director of Information Security for Encana, a North American oil & gas company, and sits on the Board of Directors for the (ISC)2 Alberta Chapter.

Pierluigi Paganini is a member of the ENISA (European Union Agency for Network and Information Security) Threat Landscape Stakeholder Group and the Cyber G7 Group; he is also a Security Evangelist, Security Analyst and Freelance Writer.
Editor-in-Chief at "Cyber Defense Magazine", Pierluigi is a cyber security expert with over 20 years experience in the field, he is Certified Ethical Hacker at EC Council in London. The passion for writing and a strong belief that security is founded on sharing and awareness led Pierluigi to find the security blog "Security Affairs" recently named a Top National Security Resource for US. Pierluigi is a member of the "The Hacker News" team and he is a writer for some major publications in the field such as Cyber War Zone, ICTTF, Infosec Island, Infosec Institute, The Hacker News Magazine and for many other Security magazines. Author of the Books "The Deep Dark Web" and “Digital Virtual Currency and Bitcoin”.
I like to think of the data steward as the unsung hero of data. The truth is that without them, data scientists wouldn't be able to understand and trust the data that they are using, AI/ML wouldn't output correct results, and a company wouldn't be able to become data-driven. So who is this unsung hero? What do they do? Let's put the spotlight on them.

To put it simply, a data steward is responsible for the maintenance and understanding of the data and metadata of an organization. Their overall objective is ensuring quality, compliance, clarity, and understanding of the data that they oversee. This individual comes from the business side, and they have experience and knowledge about the data domain that they are assigned to. Though there are other types of data stewards, the data domain data steward is the most common one to have.

That aside, this is the person that you go and ask:

- Do you know where I could find this data that I need?
- Can you please help me understand what this data is all about?
- What does this business term mean?
- How much should I trust the quality of this data?
- Can I use this data for this project?

As I call out these questions, I'm sure you can already see the face of that colleague of yours who is able to answer them. Maybe it's even yourself. And this is just scratching the surface, of course. You might also ask yourself, "Wouldn't these answers also come from a tool such as a business glossary, a data catalog, or a data dictionary?" Yes, absolutely, but that is because the data steward helped create that information and add it to those tools.

So what are the main responsibilities of a data steward? There are plenty, but for the most part they fall into the following three categories:

1. Data quality

- Help create data quality requirements, rules, and standards
- Validate and monitor the level of data quality
- Contribute to developing the business rules that govern their data domain (ex: ETL rules)
- Help establish data quality metrics
- Help create data quality audits, controls, procedures, and policies
- Contribute to determining the root cause of data quality issues

2. Metadata management

- Create business metadata; basically, they define business terms and populate the business glossary
- Provide context and guidance on the meaning of data
- Promote the use of approved data and metadata definitions and reference data
- Work with data custodians on documenting the technical metadata
- Help with data classification
- Determine the retention, archival, and disposal requirements of data

3. Data security and compliance

- Define data security requirements
- Translate regulatory rules into data policies and standards
- Establish guidelines on data usage to ensure data privacy controls are enforced

As mentioned before, a data steward is usually a subject matter expert from the business side, with experience and knowledge of the data domain they represent. I've also seen data stewardship responsibilities assigned to data analysts or data management professionals who have a good understanding of the technical side of things. Ideally, though, they are recruited from the ranks of the business, as the knowledge and insight they bring is valuable in everything that they have to do.

Who are your data stewards? What are their main responsibilities?
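As a small illustration of the business metadata a steward curates, a glossary entry can be modeled as a simple record. The field names below are illustrative assumptions, not a standard schema:

```python
from dataclasses import dataclass, field

@dataclass
class GlossaryEntry:
    # Fields a domain data steward typically maintains for each term.
    term: str
    definition: str
    data_domain: str
    steward: str
    classification: str = "internal"          # data classification label
    quality_rules: list = field(default_factory=list)

entry = GlossaryEntry(
    term="Active Customer",
    definition="A customer with at least one order in the last 12 months.",
    data_domain="Customer",
    steward="J. Doe",
    quality_rules=["customer_id is unique", "last_order_date is not null"],
)
print(entry.term, "->", entry.definition)
```

In practice this record lives in a governance tool rather than in code, but the shape is the same: the steward supplies the definition, classification, and quality rules that everyone else relies on.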
Oftentimes when discussing the frequency band in which Random Phase Multiple Access (RPMA) operates, audiences initially become confused, but then almost immediately confident, about the utilization of the spectrum and what performance can be expected with RPMA. Add in the fact that we (Ingenu) and our global partners are currently deploying a publicly accessible Machine Network, and straightaway the audience believes we must need to deploy tens of thousands of access points to provide coverage for the markets we are targeting. This is because the audience is associating our RPMA technology with other 2.4 GHz technology: your home, office, or neighborhood coffee shop Wi-Fi. As the title of this blog post states, not all 2.4 GHz networks are created equal.

When thinking of range, the typical 802.11b/g/n Wi-Fi network everyone uses daily and is familiar with provides an average indoor range of 120’ to 240’ and an average outdoor range of 300’ to 600’ with a single access point, depending upon which mode it is operating in. In comparison, a single Ingenu RPMA access point can provide coverage of 70 square miles in urban areas and 400 square miles in flat rural areas. That’s quite a difference!

In terms of speed, or how much information can be passed on the network, there are a couple of ways to think about it: you can think about the data rate of the network or the capacity of the network. Keep in mind that in data communications, speed is measured in kilobits or megabits per second, designated as kbps or Mbps. There are many Wi-Fi standards in use today, and newer technologies can bond multiple channels/frequencies together to achieve higher throughput. However, a single channel on a Wi-Fi access point in a standard configuration can provide the following speeds: 802.11b, 2 to 3 Mbps; 802.11g, 18 to 20 Mbps; and 802.11n, 40 to 50 Mbps; with a maximum of 50 devices connected.
In comparison, a single channel on an Ingenu access point can provide 100 kbps upstream per device, with a maximum of 64,000 devices connected. Again, quite a difference.

The reason for such stark differences is that these two technologies were built for two different purposes; while both operate in the 2.4 GHz ISM band, they are completely different. It’s important to understand that point. The commonplace Wi-Fi technology was built for bandwidth-intensive applications like voice, video and music streaming. As such, it has very short range and requires many access points. On the other hand, Ingenu’s RPMA wireless technology was built from the ground up to provide low-power, wide-area connectivity for device communications. Offering unparalleled capacity and extreme coverage per access point, RPMA is designed for connecting applications in difficult-to-reach locations, and as such, requires a ridiculously low number of access points. Robust communications are essential for smooth operations, and RPMA offers the most robust wireless connectivity for machines.
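The coverage gap can be put into rough numbers. The back-of-the-envelope sketch below (plain Python; the 450-foot Wi-Fi radius is simply the midpoint of the outdoor range quoted above, and the 500-square-mile metro area is an invented target) ignores overlap, terrain, and capacity planning, but it shows the order-of-magnitude difference in access points required:

```python
import math

def access_points_needed(area_sq_mi, coverage_per_ap_sq_mi):
    # Idealized tiling estimate: total area divided by per-AP coverage.
    return math.ceil(area_sq_mi / coverage_per_ap_sq_mi)

METRO_AREA = 500  # hypothetical deployment area, square miles

# Wi-Fi: ~450 ft outdoor radius -> circular coverage in square miles.
wifi_radius_mi = 450 / 5280
wifi_coverage = math.pi * wifi_radius_mi ** 2

print(access_points_needed(METRO_AREA, wifi_coverage))  # tens of thousands
print(access_points_needed(METRO_AREA, 70))             # a handful of RPMA APs
```

Even with generous assumptions for Wi-Fi, the idealized counts differ by three to four orders of magnitude, which is the point of the comparison.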
Cisco CCNP ROUTE IPv6 Anycast

An IPv6 anycast address is a global unicast address that is assigned to more than one interface. When a packet is sent to an anycast address, it is routed to the “nearest” interface having that address. In a WAN scope, the nearest interface is found according to the distance metric of the routing protocol. In a LAN scope, the nearest interface is found according to the first neighbor that is learned about. The following describes the characteristics of anycast: Anycast addresses are allocated from the unicast address space, so they are indistinguishable from unicast addresses. When an anycast address is assigned to a node interface, the node must be explicitly configured to know that the address is an anycast address. The idea of anycast in IP was proposed in 1993. For IPv6, anycast is defined as a way to send a packet to the nearest interface that is a member of the anycast group, which enables a type of nearest-point discovery mechanism. There is little experience with widespread anycast usage. A few anycast addresses are currently assigned: the subnet-router anycast and the Mobile IPv6 home agent anycast. An anycast address must not be used as the source address of an IPv6 packet.

Cisco CCNP ROUTE Autoconfiguration

A router on the local link sends network-type information, such as the prefix of the local link and the default route, to all its nodes. An IPv6-enabled host appends a 64-bit interface identifier, derived from its link-layer address, to the 64-bit local link prefix to autoconfigure itself. The host’s 64-bit extended unique identifier (EUI-64) format results from the original 48-bit MAC address plus a 16-bit 0xFFFE inserted into the middle. This autoconfiguration produces a full 128-bit address that is usable on the local link and helps ensure global uniqueness. IPv6 detects duplicate addresses in special circumstances to avoid address collisions. The host sends an RS at boot time to request that a router send an immediate RA on the local link.
The host then receives the autoconfiguration information without waiting for the next scheduled RA. The RA message includes the prefix for the link and also gives the host the lifetime of the prefix.

Cisco CCNP ROUTE Other IPv6 Features

Mobility and security: Mobility and security help ensure compliance with Mobile IP and IPsec standards functionality. Mobility enables people to move around in networks with mobile network devices, many of which have wireless connectivity. Mobile IP is an Internet Engineering Task Force (IETF) standard available for both IPv4 and IPv6. The standard enables mobile devices to move without breaks in established network connections. Because IPv4 does not automatically provide this kind of mobility, you must add it with additional configurations. In IPv6, mobility is built in, which means that any IPv6 node can use it when necessary. The routing headers of IPv6 make Mobile IPv6 much more efficient for end nodes than Mobile IPv4.

IPsec is the IETF standard for IP network security, available for both IPv4 and IPv6. Although the functionalities are essentially identical in both environments, IPsec support is mandatory in IPv6. IPsec is enabled on every IPv6 node and is available for use. The availability of IPsec on all nodes makes the IPv6 Internet more secure. IPsec also requires keys for each party, which implies global key deployment and distribution.

Transition richness: There are two ways to incorporate existing IPv4 capabilities with the added features of IPv6. One approach is to have a dual stack, with both IPv4 and IPv6 configured on the interface of a network device. Another technique, called “IPv6 over IPv4” or “6to4” tunneling, uses an IPv4 tunnel to carry IPv6 traffic. This newer method (RFC 3056) replaces an older technique of IPv4-compatible tunneling (RFC 2893). Cisco IOS Software Release 12.3(2)T and later also allows protocol translation (NAT-PT) between IPv6 and IPv4.
This translation allows direct communication between hosts speaking different protocols.

Cisco CCNP ROUTE IPv6 Routing Protocol Considerations

As does IP version 4 (IPv4) classless interdomain routing (CIDR), IPv6 uses longest-prefix-match routing. Updated protocol versions handle the longer IPv6 addresses and different header structures. Currently, the updated routing protocols shown in the figure are available. Static routing with IPv6 is used and configured in the same way as in IPv4. There is an IPv6-specific requirement per RFC 2461: a router must be able to determine the link-local address of each of its neighboring routers to ensure that the target address of a redirect message identifies the neighbor router by its link-local address. This requirement basically means that using a global unicast address as a next-hop address in routing is not recommended. The Cisco IOS global command to enable IPv6 routing is ipv6 unicast-routing.

Configuration example for OSPFv3:

network area command: The way to identify IPv6 networks that are part of the OSPFv3 network differs from OSPFv2 configuration. The network area command of OSPFv2 is replaced by a configuration in which interfaces are directly configured to specify that IPv6 networks are part of the OSPFv3 network.

Native IPv6 router mode: The configuration of OSPFv3 is not a subcommand mode of the router ospf command (as it is in OSPFv2 configuration). To configure OSPFv3, first enable IPv6, then enable OSPFv3 and specify a router ID, using the following commands:

Router(config)# ipv6 router ospf process-id
Router(config-if)# ipv6 address 3FFE:FFFF:1::1/64
Router(config-if)# ipv6 ospf 1 area 0
Router(config-if)# ipv6 ospf priority 20
Router(config-if)# ipv6 ospf cost 20

Cisco CCNP ROUTE IPv4 to IPv6 Transition Strategies and Deployments

What do you do if you are ready to deploy IPv6 on an existing IPv4 network?
Deploy both IPv4 and IPv6 on all of your systems, and configure both ends of the tunnel to communicate via both addressing strategies. This allows IPv6 to run over an automatically configured tunnel. This solution requires that the routers connecting the IPv6 remote sites through the IPv4 cloud run dual stacks. Different tunneling techniques are available to establish a tunnel carrying IPv6 over IPv4.

Cisco CCNP ROUTE Translation Mechanism

Here we identify the translation point for our addressing scheme. Cisco IOS Software is IPv6-ready. As soon as basic IPv4 and IPv6 configurations are complete on an interface, the interface is dual-stacked, and it forwards both IPv4 and IPv6 traffic. Using IPv6 on a Cisco IOS router requires the global configuration command ipv6 unicast-routing, which enables the forwarding of IPv6 datagrams. All interfaces that forward IPv6 traffic must have an IPv6 address, assigned with the interface command ipv6 address IPv6-address.
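As a side note, the EUI-64 interface-identifier construction described in the autoconfiguration section above can be sketched in a few lines of Python. The MAC address and prefix below are illustrative only; note that per RFC 4291 the universal/local bit of the MAC is also inverted, a detail easy to miss:

```python
def eui64_interface_id(mac: str) -> str:
    """Build the 64-bit interface ID from a 48-bit MAC (RFC 4291, Appendix A):
    split the MAC in half, insert 0xFFFE in the middle, flip the U/L bit."""
    octets = [int(b, 16) for b in mac.replace("-", ":").split(":")]
    octets[0] ^= 0x02                      # invert the universal/local bit
    eui = octets[:3] + [0xFF, 0xFE] + octets[3:]
    # Group the eight bytes into four 16-bit colon-separated fields.
    groups = ["%02x%02x" % (eui[i], eui[i + 1]) for i in range(0, 8, 2)]
    return ":".join(groups)

# A host with MAC 00:0c:29:c2:52:ff on an example /64 prefix autoconfigures:
iid = eui64_interface_id("00:0c:29:c2:52:ff")
print("2001:db8:1:0:" + iid)   # prints 2001:db8:1:0:020c:29ff:fec2:52ff
```

The leading zeros are kept here for clarity; a real IPv6 implementation would also compress them when displaying the address.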
10 Network Security Threats + The Threat Defences To Protect You

If you’ve been tasked with tightening up your network security, threat defence is the right place to start. If you focus on stopping threats from getting into your organisation, there’s less likelihood of them breaching your network and systems further down the line.

What is threat defence? Threat defence is the process of securing your organisation against cyber threats. By applying a deep understanding of the cyber threats that could affect your business, you can proactively put systems and processes in place to mitigate these risks.

What are the 3 threats to information security? Typically, we segment threats to information security into three primary categories: malware, phishing, and internal threats.

What are the different types of security threats?

1 - Malware

Malicious software is one of the most common threats to information security and is an umbrella term for viruses, worms, ransomware, trojans, and more. Malware is designed to intentionally cause damage to computers and networks, leveraging victims’ personal information for financial gain. Malware is usually spread by email as a link or downloadable file; the user needs to click the link or open the file for the malware to run. For example, a virus will bind its deceptive code to clean code and wait for an unwitting user to run it. The virus will then spread, causing damage to key functions, corrupting data, and locking users out of their devices. You can protect your business from malware by ensuring you have a robust network monitoring system in place, with antivirus software and SIEM tools that enable security teams to identify suspicious behaviour. Endpoint Detection and Response (EDR) tools can provide in-depth defence against malware attacks, with early detection systems on endpoints that highlight ‘anomalies’ and respond accordingly.
At Nasstar, our EDR monitoring technology continuously scans your devices for suspicious activity, analysing usage and behaviour to determine what is ‘normal’ before sounding the alarm when something seems unusual.

2 - Phishing

Another common method used by hackers, phishing is where users are contacted by someone posing as a legitimate business to lure them into handing over sensitive and personal information. When this information is given up, cyber criminals can use it to access important accounts, resulting in identity theft and financial loss. In the workplace, users are usually contacted by email. These emails often look too good to be true or carry a sense of urgency that provokes the user to react quickly without thinking or assessing the content; sometimes they also include dodgy links or attachments. Phishing attacks rely on human error to succeed. Therefore, the best way for organisations to protect themselves from phishing attacks is by ensuring all employees have a clear understanding of the threat and the key signs to look out for. At Nasstar, we offer a phishing risk assessment and ongoing cyber security training. Our immersive experience is combined with simulated phishing attacks to test ongoing awareness and compliance, with tracked results providing reports that detail how employees improve over time.

3 - Internal threats

Internal threats refer to the risk of someone from within the business exploiting a system to cause damage or steal data. Like phishing, internal threats can be hard to detect and plan for because of the human element. Employees are regularly trusted with sensitive data in the workplace, but this trust can be abused through negligence or selfish motives. Many security teams focus on external threats, plugging holes to prevent breaches without sparing a thought for the insider threats happening under their noses.
For example, a disgruntled employee could steal sensitive client information to take to a competitor, which could damage the business’s reputation and completely destroy client relationships. Password protection is a good place to start when protecting your business from internal threats. You should have the ability to quickly change passwords from any device and location, at any time, especially when employees leave. Single sign-on and two-factor authentication can also help prevent credential sharing. At Nasstar, we can help you with your cyber security strategy to ensure your organisation is protected from both internal and external threats.

What are the top 5 cyber threats?

The three high-level information security threats above break down into several specific areas. Cyber crime is the most common way into a business’s information systems, as hackers and attackers have access to smarter and more complex tools than ever before. Check that your business has threat defences for these 5 cyber threats:

4 - Email

Email security risks top the list, as email is commonly used in phishing attacks, one of the most common methods used by hackers. You can mitigate the risk of emails being used to infiltrate your organisation by engaging with third-party security providers who further protect your email system. Although emails are encrypted in transit, once they are at rest they can be more easily attacked. At Nasstar, we partner with security providers such as Mimecast and Proofpoint to incorporate an additional layer of security into your email systems.

5 - Social engineering

It’s becoming increasingly easy for cyber criminals to use human error to gain access to sensitive information. Social engineering attacks include phishing emails, scareware, and other techniques used to manipulate human psychology. You can incorporate cyber security training into your security strategy to ensure your employees are all aware of the signs to look out for.
Your business could also implement Zero Standing Privileges, where users are granted access privileges for one particular task, for a limited amount of time.

6 - Cloud computing vulnerabilities

As more businesses turn to the cloud, so too do hackers. Cyber criminals scan for cloud servers with no passwords, exploit unpatched systems, and perform brute-force attacks to access user accounts and wreak havoc. With access, hackers can plant ransomware, steal sensitive data, or use cloud systems to coordinate DDoS attacks. Regular patch cycles can help protect your business in the cloud. At Nasstar, our vulnerability patch management service ensures critical security requirements are continuously patched as required, reducing your security risk and keeping your software up to date. We can also implement security management solutions such as Fortinet EMS to enable scalable and centralised management of multiple devices. Your endpoints and servers would have their traffic filtered with application security to block backdoors such as Torrent/Tor systems from communicating, whether the devices are on the network at a business location or remote.

7 - Ransomware

Ransomware uses data encryption to demand payment for the release of the infected data and is a common method applied by cyber criminals. There have been several notable cases of ransomware being used, including the 2017 WannaCry attack on the NHS, which resulted in thousands of cancelled appointments and operations, and widespread disruption. It is difficult to completely protect your business from ransomware attacks, but adopting a thorough security strategy with several layers of defence can help. At Nasstar, our threat detection and response service constantly monitors your network, allowing us to identify and isolate threats in near real time, day or night.
8 - DDoS attacks

A Distributed Denial of Service (DDoS) attack occurs when a malicious attempt to affect the availability of a website or application is made using multiple compromised or controlled sources. The aim is to exceed the website or application’s capacity to handle requests, thus preventing the site from functioning correctly. You can protect your business against DDoS attacks by deploying firewalls and implementing an effective network monitoring strategy. We offer vulnerability management services to stay ahead of hackers and look for weaknesses before they find them.

Other network security threats to look out for

To round out the list of network security threats, we must include these often-overlooked ones. Even in the most secure networks, there are documented cases where security breaches have been traced to someone within the organisation.

9 - Wrong users having access to the wrong systems

You can use role-based access control to grant access to resources based on a person’s role in the company. This is an effective way to protect data and ensure your company’s information meets privacy and confidentiality regulations.

10 - Password sharing

Creating a formal policy to manage risks and enforce clear rules about password sharing is essential for internal security. Your policy should include information about using strong passwords and procedures for handling, storing and sharing passwords (which should be avoided where possible). Multi-factor authentication can also enhance access to sensitive data by requesting login information from independent categories of credentials to verify the user’s identity.
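The role-based access control mentioned in item 9 can be illustrated with a minimal sketch. The role and permission names below are hypothetical, not taken from any particular product:

```python
# Minimal role-based access control: roles map to sets of permissions,
# and a single helper gates every action against that mapping.
ROLE_PERMISSIONS = {
    "engineer": {"read_code", "write_code"},
    "finance":  {"read_invoices", "write_invoices"},
    "auditor":  {"read_code", "read_invoices"},   # read-only across systems
}

def is_allowed(role: str, permission: str) -> bool:
    """Return True only if the role explicitly grants the permission."""
    return permission in ROLE_PERMISSIONS.get(role, set())

print(is_allowed("auditor", "read_invoices"))   # prints True
print(is_allowed("auditor", "write_invoices"))  # prints False
```

The key design point is default-deny: an unknown role or an unlisted permission yields no access, which is exactly the behaviour that keeps the wrong users out of the wrong systems.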
Windows 11 is a great operating system, but it can be difficult to use if you’re not familiar with all of the keyboard shortcuts. In this blog post, we will discuss 7 of the most useful keyboard shortcuts for beginners. These shortcuts will help you navigate Windows 11 more easily and save you time.

Number 1 – Copy and Paste

The copy and paste shortcut is a keyboard combination that copies the selected text, stores it in the clipboard, and then pastes it wherever the user wants. The standard shortcuts are Ctrl+C for copy and Ctrl+V for paste. Windows, including Windows 11, also supports the older alternatives Ctrl+Insert for copy and Shift+Insert for paste. There are also keyboard techniques for different kinds of selections. For example, to copy an entire document, press Ctrl+A to select all the text in the document and then Ctrl+C to copy it. To copy a specific word, double-click the word to select it, then press Ctrl+C. And to copy an entire line of text, press Home to jump to the start of the line, Shift+End to select it, then Ctrl+C. The copy and paste shortcuts are very useful when you need to quickly move text from one place to another.

Number 2 – Windows

The Windows logo shortcut allows you to quickly open the Windows 11 Start menu. To use it, simply press the Windows key on your keyboard, typically located between the Ctrl and Alt keys on the left side. Once you press the Windows key, the Windows 11 Start menu will appear on your screen. You can then use your mouse or keyboard to navigate through the Start menu and launch the programs or files that you need.
The Windows logo shortcut is a quick and easy way to access the Windows 11 Start menu, and it can be a useful tool for productivity.

Number 3 – Command Prompt

Command Prompt is a text-based interface for Windows 11 that allows users to enter commands and receive output. To open it quickly, press the Windows key and the letter “R” at the same time. This will open the Run dialogue box. In the Run dialogue box, type “cmd” and press Enter, and the Command Prompt interface will open. This shortcut is a convenient way to reach the Command Prompt without navigating through the Windows 11 Start menu.

Number 4 – File Explorer

The File Explorer shortcut is a handy way to quickly open the File Explorer window in Windows 11. Simply press the Windows key + E on your keyboard. This will immediately open the File Explorer window, allowing you to browse your files and folders. It is a great way to save time when you need to access your files in a hurry, so next time you need to open File Explorer, give the shortcut a try.

Number 5 – Taskbar

Taskbar shortcuts are a handy way to quickly launch programs and open files. In Windows 11, they work for both desktop apps and Universal Windows Platform (UWP) apps pinned to the taskbar. To use them, press and hold the Windows key, then press the number that corresponds to the app’s position on the taskbar. For example, pressing Windows+1 launches the first app on your taskbar, while pressing Windows+2 launches the second app.
Taskbar shortcuts are a great way to save time and improve your productivity, so be sure to give them a try.

Number 6 – Print Screen

In Windows 11, there are several different ways to use the Print Screen function. Pressing Print Screen by itself copies a snapshot of your entire screen to the clipboard, which you can paste into an image editing program and save for later use. If you only want a snapshot of a specific window, first select the window and then press Alt + Print Screen; this captures just the selected window to your clipboard. Again, you can paste the image into an image editing program and save it for later use. Finally, if you want more control over the capture, press Windows + Shift + S to open the built-in Snipping Tool overlay. From here, you can choose how you want to capture your screenshot, and then save it as an image file or share it with someone directly.

Number 7 – Settings

The Settings shortcut allows users to quickly access settings and options from the keyboard. Simply press the Windows key + I, and the Settings app will open in Windows 11. From here, you can use your keyboard to navigate to any setting or option that you want to change. Settings shortcuts are a great way to save time when you need to adjust your settings. They also make it easier to change settings without having to use your mouse or trackpad.

Windows 11 – Where to Start?

If you are in need of no-nonsense, expert IT help for your business, get in touch with CloudTech24 today.
Password fatigue is a common problem for employees who are often required to create, manage, and remember passwords for many different accounts. A common solution is for employees to reuse passwords across multiple accounts. However, while this can reduce the burden on employees and improve efficiency, it does so at the cost of security. Single Sign-On (SSO) is a service designed to mitigate password fatigue without compromising security. Employees are presented with a single sign-on screen when authenticating to the environment, which verifies their identity. This authentication is then carried to other systems within the network, enabling employees to use them without having to remember a separate password and log in to each one.

Traditionally, most applications and systems manage authentication and access management individually. When a user wishes to log into a computer or an application, they provide a set of login credentials that is compared to the set kept on file. If the credentials are accepted, the user is granted access to the desired resource. SSO keeps this process but applies it to authenticating to the network as a whole. When a user first logs into the network, their authentication information is transmitted to an authentication server, which validates their identity and the access controls assigned to them. After that, when the user wishes to log into a new system or application, the access request is forwarded to the authentication server. Based upon its built-in access control policies, the server tells the system or application to either allow or deny access. Since the authentication server has already verified the user’s identity and tracks it throughout their session, the user no longer needs to individually authenticate to each application or system they use. This eliminates the need for these resources to implement their own authentication systems, and for the user to create and recall a unique password for each resource.
SSO centralizes access management for a network into a single authentication server and, by doing so, provides a number of benefits to an organization and its employees. The SSO protocol itself is secure and relies on the authentication server to manage and approve or deny access requests. As long as this server is well protected and an organization’s access control policies are well designed, a malicious user or an attacker with access to a compromised account will have their access restricted to the permissions assigned to that account.

The primary benefit, and the primary risk, of SSO is that it allows a user to access everything after authenticating once. This means that an attacker with control over a legitimate account can access anything that account is permitted to access without being required to enter any additional passwords. However, the use of SSO means that an organization can more easily and effectively deploy solutions like multi-factor authentication (MFA) to make this scenario less likely. Additionally, while a user may not need to authenticate multiple times to access various systems, an organization can still perform behavioral analytics to identify anomalous or suspicious activity that could indicate a compromised account. If such activity is detected, the security team can take action to lock down the compromised account.

Implementing SSO across an organization’s entire environment is possible with a standalone solution. However, it is much easier to deploy, configure, and maintain if the solution is designed to be integrated from the start. This requires an SSO solution to offer support for secure remote access, cloud-based deployments, and an organization’s on-premises data centers and endpoints. Check Point offers solutions in all of these areas, making it simple and painless to deploy SSO across the enterprise. To see Check Point’s solutions in action, you’re welcome to request a free demo.
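The centralized flow described above can be sketched as a toy example: a central server issues a signed, time-limited token once, and each application verifies the signature instead of re-authenticating the user. This is a deliberately simplified illustration with an invented shared key; real SSO deployments use standards such as SAML or OpenID Connect rather than hand-rolled tokens:

```python
import base64
import hashlib
import hmac
import json
import time

SECRET = b"shared-with-applications"   # illustrative key only, not a real practice

def issue_token(user, ttl=3600):
    """Auth server side: sign the user's identity and an expiry time."""
    payload = json.dumps({"sub": user, "exp": int(time.time()) + ttl}).encode()
    sig = hmac.new(SECRET, payload, hashlib.sha256).digest()
    return base64.urlsafe_b64encode(payload + sig).decode()

def verify_token(token):
    """Application side: accept the token only if signature and expiry hold."""
    raw = base64.urlsafe_b64decode(token.encode())
    payload, sig = raw[:-32], raw[-32:]   # SHA-256 digest is 32 bytes
    expected = hmac.new(SECRET, payload, hashlib.sha256).digest()
    if not hmac.compare_digest(sig, expected):
        return None                       # tampered or foreign token
    claims = json.loads(payload)
    return claims if claims["exp"] > time.time() else None

token = issue_token("alice")
print(verify_token(token)["sub"])         # prints alice
```

The point of the sketch is the division of labor: only the issuing server authenticates the user, while every other system performs a cheap signature check, which is why a single compromised account (or key) has such broad reach and why pairing SSO with MFA matters.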
Every operating system has its own characteristics. The GUI is similar across Windows versions in the same family, but it varies between families; for example, Windows 7 has a Start button while Windows 8 does not. The drivers differ as well: the Bluetooth drivers installed on Windows 8 are different from those installed on Windows 7. Here are the Windows versions and some of their properties.

Windows XP was released in 2001, and retail sales ended in 2008, but it still runs on a large number of computers. A very large number of licences remain in use, so it is important to understand the various editions of Windows XP, along with the minimum requirements and recommendations that have to be met. If you run Windows XP at home, you probably have the edition known as Windows XP Home Edition, a version created especially for home use. It lacks many of the advanced capabilities required for smooth operation in a business environment. The edition targeted at the business environment was Windows XP Professional. With XP Professional, you can connect to the Internet easily and use Remote Desktop, which allows you to connect to your home computer while using the computer at your office. There was also another important edition, known as Windows XP Media Center Edition.
The idea behind this edition was to combine your computer with your television and other entertainment systems. It required a large amount of disk space and a lot of processing power, because it could record live TV on the computer. For high-performance computing environments there was also a 64-bit version of XP, Windows XP 64-bit Edition, where 64-bit processing takes place.

Windows Vista came out in 2007 and was released in several different editions, so there were many options to choose from. In a home environment you would typically run Windows Vista Home Basic, which did not contain many advanced features; for example, it had no Active Directory support. The next edition up was Home Premium, which added new abilities such as writing your own DVDs and included many more games. It was also the first home edition in which Media Center support was built into the operating system itself. Then came the Ultimate edition, which contained all the features of the home editions plus more, such as using video as desktop background wallpaper, encrypting drives with Microsoft’s BitLocker, and additional language packs. Just as Windows XP had a Professional edition, Windows Vista had a Business edition. That edition was designed for business use: it allowed you to work on a workstation connected to Active Directory and to encrypt the file system from within the OS itself.

Windows 7 was released in 2009 as the successor to Windows Vista.
Windows 7 came in several editions, largely mirroring those of Windows Vista.

Windows 7 Starter is the most basic edition, designed for netbooks, which are very small machines. It does not offer much graphical capability: there is no Windows Aero because of those limited graphics, and it does not include a DVD player. There is also no server functionality in Starter, and it came only in a 32-bit edition, with no 64-bit version.

Windows 7 Home Premium is the widely used retail edition, available in large stores and preinstalled on many laptops, so many people have a key for this version. It is intended for home computers. It includes Aero graphics support, DVD playback, Internet connectivity, and even the ability to create a web server. It comes in both 32-bit and 64-bit versions, and the 64-bit version can use up to 16 GB of RAM.

Windows 7 Ultimate is, as the name indicates, the ultimate edition. If you want every feature of Windows 7, this is the one to get, since it contains all the features found in any other Windows 7 edition. It lets you use Active Directory, act as a Remote Desktop host, and use encryption, both of individual files and of the entire hard drive.
If you operate in a business environment that does not need all of those capabilities, you are looking for Windows 7 Professional. It has almost the same functionality as Home Premium, but adds the ability to join a Windows domain and be administered through Active Directory, and it can act as a Remote Desktop host. It also supports much more memory: up to 192 GB of RAM in the 64-bit version. As mentioned earlier, there is also an Enterprise edition for organizations with volume licensing; if you are managing really large deployments of Windows, this is the version to use, and it supports multiple language packs. Both Windows 7 Enterprise and Ultimate include BitLocker, so you can get very strong drive encryption.

Features: The following features appear across the Windows editions. 32-bit vs. 64-bit: Windows comes in 32-bit and 64-bit versions; which one you can run depends on the processor, and the 64-bit version can address far more memory than the 32-bit version. Other features include Aero, gadgets, User Account Control, BitLocker, shadow copies, System Restore, ReadyBoost, the sidebar, compatibility mode, XP Mode, Easy Transfer, administrative tools, Windows Defender, Windows Firewall, Security Center, Event Viewer, the file structure and paths, and category view versus classic view. Aero is the Windows graphical interface. Gadgets can be added to the desktop by right-clicking it and choosing the appropriate option. User Account Control limits what a user account can change without approval, and BitLocker is the software that encrypts and secures the drive. A shadow copy is a point-in-time copy of files that System Restore can use to roll the system back.
Windows Firewall is the system's main network defense, helping to block attacks, and it can be configured from Security Center. The sidebar holds gadgets, and XP Mode provides a compatibility environment that lets older applications run as if on Windows XP. Event Viewer is a useful tool for reviewing events the system has logged. Files come in various formats, and a file's path describes where it is located; items can be displayed in several views, including classic view and category view.

It often happens that someone uses the same version of Windows for a long time, then learns that a newer version is available and wants to install it. Upgrading is the right way to take advantage of the new features in the newer version, and upgrading from one version to another brings some real advantages. In most cases the user's data and configuration are preserved: emails, documents, and other files saved on the hard drive stay exactly as they were. That makes it very simple to upgrade Windows and then pick up working right where you left off. The process itself is easy. You run the installation media, which may be a DVD or an ISO file downloaded from Microsoft; the installer gathers the necessary information automatically and upgrades the system to the newer version. There are two methods by which the upgrade can take place: a clean installation and an in-place upgrade. In an in-place upgrade, all applications and settings are preserved and nothing is changed at all.
An in-place upgrade begins when you insert the DVD and start the process; the upgrade runs, and when it finishes you can continue using the system as before. A clean install, by contrast, erases the whole drive first and then installs Windows from scratch, exactly the way Windows is installed on a new machine. That is the better choice when you do not want to keep the old settings and files and would rather have a fresh system on which to reinstall the software you need. Most people choose the in-place upgrade because it lets them continue working in the same environment. Windows 7 also offered Windows Anytime Upgrade, which made it easy to move to a more powerful edition of the same release, for example from Home Premium to Windows 7 Professional. Anytime Upgrade is convenient because it happens while Windows is running; you do not have to take the computer offline for the whole process. Keep in mind that upgrading and downgrading in place work only between editions, not between architectures: you cannot use an in-place upgrade to go from 32-bit to 64-bit, nor downgrade from 64-bit to 32-bit with that tool. Microsoft does, however, provide a migration tool that makes such moves easier. Another useful tool is the Upgrade Advisor. It runs automatically, scans all of the hardware, looks through the installed applications, and then reports the information you should know about, including whether an upgrade is available and when you can actually perform it.
You can also contact the machine's manufacturer to find out whether new drivers are available for the latest update. In short, it is worth knowing all the versions of Microsoft Windows, since each contains special features designed to cater to the needs of a specific segment of users. Upgrade or downgrade only when you actually need the additional features, and take advantage of the Upgrade Advisor that Microsoft already provides to guide the decision.
The title of this post seems to be lost on some who are responsible for security architecture. One of my reflections on this past summer is that not everyone is aware of the difference between weaker and stronger forms of multifactor authentication. You have likely read about multifactor authentication, have used it with your social networking websites, or maybe you have used a form of multifactor authentication in a corporate environment. This is all very good news. The bell has tolled for single-factor username-and-password schemes, and people are starting to realize that this old stalwart of authentication needs to be retired as soon as possible. Keylogging, man-in-the-middle attacks and social engineering techniques leveraged by cunning identity thieves are in the news every day. The time has come for multifactor authentication. Why? It makes the job of a malicious hacker more difficult. As with all attacks, malicious hackers are looking for ways to steal your identity. Nobody in the cybersecurity business is getting fired right now for suggesting that multifactor authentication should be used in their enterprise. It's a good idea that has reached the executive ranks. But some on the defensive side of cybersecurity, lacking a view of the offensive side of the equation, have forgotten that not all multifactor authentication techniques are equal. Put simply, it's smart to choose a multifactor authentication method that matches the risks. This summer I spoke to security architects in large enterprises in both North America and Europe whose job was to protect one of three things: money, privacy or critical infrastructure. Some of these professionals were planning to employ SMS-based multifactor authentication. This is where users log in to a website and are challenged to enter a code that is sent to their mobile devices via text message (SMS). SMS-based multifactor authentication is better than single-factor username-and-password authentication.
These professionals had every reason to be glad to be working on these projects. But I challenged some of them to explain to me why they did not choose a stronger form of multifactor authentication. “What’s wrong with SMS?” they asked. What bothered me was not that they were employing SMS, but that they did not know the weaknesses. In addition, I witnessed a demonstration at the Def Con 21 conference in Las Vegas this year where SMS messages were being intercepted — by a Femtocell device hacked by ethical researchers — and projected onto a screen. This was a friendly environment and nobody was hurt, but it laid bare the weakness of non-encrypted messages like SMS. There are other forms of multifactor authentication that are much stronger than SMS, and even easier for the end-user. An example includes innovative virtual smart credentials embedded onto mobile devices. The chain of communication is encrypted, and doesn’t require the user to type a code. It’s not often that better security can also mean a better user experience. Your money and privacy are important to you. Before you log in to a bank, conduct a transaction with your government, or turn on a pump at a critical infrastructure plant, you should consider that there are malicious individuals or groups out there who strive to obtain your identity for illegal gain. Making it more difficult for the bad guy means choosing a method of authentication that does not easily give away your identity. SMS multifactor authentication is a step above username-and-password solutions, but if what you are protecting is important to you, there are stronger methods.
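Stronger second factors generally derive a short-lived code from a shared secret held on the device, rather than sending a code over the air. As an illustration of that general idea (this is not Entrust's product, and not a substitute for an encrypted-channel credential), here is a minimal time-based one-time password (TOTP, RFC 6238) sketch using only the Python standard library:

```python
import hashlib
import hmac
import struct

def totp(secret: bytes, for_time: int, step: int = 30, digits: int = 6) -> str:
    """RFC 6238 TOTP: HMAC-SHA1 over the time-step counter, dynamically truncated."""
    counter = for_time // step
    msg = struct.pack(">Q", counter)
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                              # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Server and client compute the same code from a shared secret and the clock.
secret = b"12345678901234567890"       # RFC 6238 test-vector secret
print(totp(secret, 59))                 # -> 287082 (matches the RFC 6238 vector)
```

Because the code is computed locally, there is no SMS message to intercept; the remaining risks shift to protecting the shared secret and to real-time phishing of the code, which is why credential-based, encrypted-channel schemes are stronger still.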
With an increasing reliance on remote work and digital communication, businesses are facing more cyber security risks than ever before, and closing your network vulnerabilities is the key to minimizing that risk. Take ransomware, for instance, just one of the types of attacks in the network security landscape. Ransomware is a type of attack in which a hacker who has gained access to a business's sensitive information locks out access to critical systems or encrypts data to hold it hostage for a large sum, sometimes in the millions. In 2020 alone, ransomware attacks cost US businesses $915 million. In this article, we'll take a deep dive into the most common types of information security vulnerabilities and what you can do to either prevent or minimize the risk of being the target of one of these harmful security breaches.

What are Network Vulnerabilities?

Generally speaking, a network vulnerability is a gap or weakness within your hardware, software, or internal processes. These vulnerabilities are exploited by hackers and bad actors in an attempt to steal data, launch a phishing attack, deliver a distributed denial-of-service (DDoS) attack, or infect your system with malware, ransomware, a trojan horse, or any other type of cyber attack.

The Three Main Types of Vulnerabilities in Network Security

The network vulnerabilities that hackers target for a data breach can, and often do, involve every element of your network:
- Hardware
- Software
- Human/user behavior

Each of these vulnerability types needs to be taken seriously when organizing your cyber security because each one presents its own set of unique challenges.

Hardware-Based Network Security Vulnerabilities

Any hardware that's connected to your network is a potential access point to your private data, and hackers have developed all sorts of nefarious methods for gaining unauthorized access. Let's take a look at the types of network security threats for hardware, so you can be better prepared.
Firewalls act as a gateway to the internet, allowing or disallowing online traffic based on administrator configurations. The role of a firewall is to keep the good traffic flowing through and keep all the suspect traffic out. However, default configurations can sometimes enable unnecessary services that unknowingly allow bad traffic to pass through. Oftentimes, the best way to find out whether your firewall is configured correctly is to have a penetration test performed by seasoned experts.

Unsecured Wi-Fi Access Points

Wireless networks are one of the biggest network vulnerabilities for any individual, which goes double for businesses and organizations. Gaining access to a Wi-Fi network means the hacker has completely side-stepped your firewall and now essentially has the keys to the kingdom. Once connected to your network, they can search for publicly posted passwords, alter settings, steal data, install any type of malware they want, and generally rob you blind. That's why it's so important to always have a password associated with every Wi-Fi access point. Ideally, you'll change these passwords once a month and use multi-factor authentication as well.

Poorly Protected, Unauthorized Devices

In today's high-speed, digital world, it's not enough to simply protect and encrypt the devices on the company's premises. More often than not, employees will use their own personal devices to complete work and access company data, and this poses a massive security threat. Hackers capitalize on this glaring vulnerability because a personal device commonly has far fewer protections than its company equivalent.
But once a personal device connects to your VPN and a hacker knows about it, they might as well be using a company computer in person, because that's the level of access to your data they now have. Any personal device that employees use to perform company work needs to be vetted by your IT department or provider so that the appropriate encryption and protections can be installed. If you're not sure what your level of risk is from employee personal devices, you should get a vulnerability assessment done to determine the likelihood of a security breach.

Software-Based Cyber Threats and Vulnerabilities

From operating systems to outdated applications and everything in between, even basic networks can have massive cyber vulnerabilities that hackers can exploit. Here, we'll go into the most common network vulnerabilities that software can possess.

Old & Buggy Software

Outdated software that is forgotten about and remains on your network is yet another access point for hackers and a real risk to your cyber security. Frequent cyber security risk assessments can catch these vulnerable programs, but if you haven't had one in a while, those old applications likely haven't received a patch lately, and their plug-ins and add-ons are susceptible to hacking. Even in-house code can contain unpatched zero-day flaws that are prime targets for bad actors, and these need to be mitigated as much as possible with a thorough sweep of your IT environment.

Unlicensed Software Downloads

A very common vulnerability arises when an employee downloads a piece of software that the IT department doesn't know about, obtained from a disreputable source.
Ironically, it's sometimes the case that the current safeguards in your IT environment are too restrictive, and the employee downloaded the software to solve a work-related problem they couldn't otherwise troubleshoot. This type of vulnerability is entirely avoidable if your IT provider communicates frequently with your employees.

Human/User-Based Network Security Threats & Vulnerabilities

Untrained and unsuspecting users are easily one of the biggest factors on any list of network vulnerabilities. People are fallible by nature and prone to making mistakes. That's why we'll take you through a few of the most common network security threats and vulnerabilities posed by the human element.

Weak Passwords & Poor Authentication Practices

People tend to create weak, easy-to-guess passwords because they're easy to remember, but that's exactly what makes them such a large vulnerability. The best way to help your employees create strong passwords is to use a password manager, since it removes the need to remember them. Employing a password checker can also help determine whether stronger passwords are needed. Additionally, setting up multi-factor authentication removes the possibility of one lucky guess costing you millions of dollars in losses.

Social Engineering & Deception

Too often, a network security breach results from ordinary people simply being tricked or duped. Phishing attacks and scams operate on the principle of social engineering, whereby a hacker sends a request for sensitive information or a money transfer to an employee from what looks to be a reputable email address. Phishing attacks are surprisingly successful on average, but they can be seriously mitigated, even prevented entirely, with proper employee training and education.
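To make the password guidance above concrete, here is a toy strength check. It is an illustrative sketch only: the length and character-class thresholds are assumptions, and a production system should rely on a vetted password library and breached-password lists rather than rules like these.

```python
import re

def password_strength(pw: str) -> str:
    """Toy classifier: score a password by length and character-class variety."""
    # Count how many of the four classes appear: lower, upper, digit, symbol.
    classes = sum(bool(re.search(p, pw))
                  for p in (r"[a-z]", r"[A-Z]", r"\d", r"[^A-Za-z0-9]"))
    if len(pw) >= 12 and classes >= 3:
        return "strong"
    if len(pw) >= 8 and classes >= 2:
        return "moderate"
    return "weak"

print(password_strength("password"))         # -> weak (one character class)
print(password_strength("Blue!Tango7"))      # -> moderate (11 chars, 4 classes)
print(password_strength("correct-Horse9!"))  # -> strong (15 chars, 4 classes)
```

A check like this catches only the most obvious weaknesses; a password manager generating long random strings makes the classification moot, which is exactly the point of recommending one.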
Protecting Your Business from Cyber Security Vulnerabilities and Threats If it feels like you’re always putting out cybersecurity fires as they crop up, and never getting ahead of the problem, you likely need a reliable outsourced cybersecurity firm to handle day-to-day IT monitoring and management. At CP Cyber, we’re highly equipped to defend your business from any type of attack in network security, and we’ve got the numbers and experience to back that up. Reach out to us today for a free quote and consultation, and get a handle on your cybersecurity once and for all.
Anecdotal evidence from bankers suggests that the cost of compliance usually increases when large statutory changes are made to a country's or region's financial laws and rules, and the burden grows significantly when such changes follow a financial crisis. New regulations stemming from the financial crisis have cost the six largest U.S. banks $70.2 billion as of the end of last year. Between the end of 2007 and the end of 2015, regulatory costs rose by more than 100%, or $35.5 billion, according to data from policy-analysis firm Federal Financial Analytics Inc. As per Federal Financial Analytics, the reporting costs come from a mix of requirements specific to these banks, e.g. particular capital surcharges that apply to banks with assets over $50 billion but impose the largest cost on the six biggest banks due to their size or risk.

To provide the context necessary to understand the regulatory costs on small banks, let us first look at the definition of a small bank.

What is a Small Bank?

Banks are typically classified as small or large based on their total asset size (i.e., the value of the loans and securities they hold), but there is no standard, commonly accepted threshold for what asset size constitutes a small bank. Some researchers define small as $1 billion or less in assets, whereas others define it as $10 billion or less. The Federal Reserve defines small and mid-sized banks as those with less than $10 billion in assets. Often, the term community bank is used as a synonym for small bank. The Office of the Comptroller of the Currency (OCC) defines community banks as generally having $1 billion or less in assets. The terms small bank and mid-sized bank as commonly used encompass a disparate group of institutions, varying in size, activities, and charter. Asset size, however, is not the most intuitive way to understand what is meant by a small bank.
To provide some additional perspective on the size of small institutions, it is informative to look at the number of employees at institutions of different asset sizes. At the end of 2014, a bank with approximately:
- $100 million in assets had, on average, 25 employees.
- $1 billion in assets had, on average, 214 employees.
- $10 billion in assets had, on average, 1,173 employees.

What is Regulatory Reporting Burden?

Financial regulation can produce both gains and costs. The cost associated with government regulation and its implementation is referred to as regulatory burden. In the banking world, regulatory burden can be borne by banks, consumers, the government, and the economy at large. A bank may face higher costs because it now must train its staff on how to properly apply the rules and may spend more time reviewing each application. Some of this cost may be passed on to consumers through higher interest rates and fees, or fewer lending options. Regulatory burden on banks manifests primarily in two ways: operating costs and opportunity costs.
- Operating costs (or compliance costs) are the costs the bank must bear in order to comply with regulation. For example, in response to a new rule, a bank may spend more money training its employees to ensure they understand the new rules, and it may have to purchase updated computer programs because the new rule defines concepts in ways that are incompatible with its old systems. Updating computer programs is an example of a one-time operating cost borne upfront. Other costs, such as hiring additional compliance officers, are recurring costs that persist as long as the requirement is in effect.
- Opportunity costs are the costs associated with business opportunities foregone because of the additional regulation.
A bank may, for example, offer fewer mortgages because new regulations make mortgage lending more expensive, and instead choose to perform a different type of activity that is now more profitable.

Characteristics Determining the Regulatory Reporting Burden a Small or Mid-Sized Bank Faces

Size is one of several factors that influence burden. The regulatory burden borne by a bank depends on what rules are applied to it (rulemaking) and how those rules are applied (supervision and enforcement). The factors that determine what rules are applied, and how, are as follows:
- Charter: Because a bank's primary regulator depends on its charter, the charter will influence the regulatory burden to the extent that different bank regulators have different practices and policies. If the bank has a state charter, supervisory examinations can alternate between the state banking regulator and its primary federal regulator. Differences in regulation and regulatory burden between banks and credit unions are a perennial concern to banks.
- Risk profile: Not all banks pose the same risk of failure, risk to consumers, and risk to the overall system, so policymakers tailor some regulations and supervisory practices by risk profile in order to reduce regulatory burden. For example, because small banks with higher supervisory ratings (signaling that they are perceived to be healthier) are, all else equal, less likely to fail, they are examined less frequently and less intensely than banks with lower ratings.
- Business model: Certain activities attract more regulatory scrutiny than others because they are riskier, more complex, pose more risk to consumers or the broader economy, and so on. Different banks offer different types of services and engage in different types of activities, so some banks will have greater regulatory compliance costs because they are involved in more activities or lines of business that require oversight.
In other words, the same activity will require a certain amount of regulatory compliance at any bank that undertakes it. Notably, if banks engage in certain activities, such as operating in the securities or derivatives markets, it could trigger additional activity-based regulation.

Hexanika: Compliance Made Easy

Hexanika is a FinTech big data software company that has developed an end-to-end solution for financial institutions to address data sourcing and reporting challenges for regulatory compliance. Hexanika helps establish a compliance platform that streamlines the process of data integration, analytics, and reporting. Our software platform can develop and clean data to be sourced for reporting and automation, simplifying the processes of data governance and generating timely and accurate reports to be submitted to regulators in the correct formats. Our solutions also significantly reduce the time and resources required for everyday regulatory processes, and are robust enough to be implemented on existing systems without requiring any specific architectural changes. To know more about our products and solutions, read: https://hexanika.com/company-profile/

Contributor: Akash Marathe
Image Credits: www.dailydot.com
Sources:
- http://www.dispatch.com/content/stories/business/2016/03/25/1-regulatory-costs-hurting-small-banks.html
- http://blogs.wsj.com/moneybeat/2014/07/30/the-cost-of-new-banking-regulation-70-2-billion/
- https://www.fas.org/sgp/crs/misc/R43999.pdf
- https://www.communitybanking.org/…/Session3_Paper3_Cyree.pdf
India's nationwide identification system, also known as Aadhaar, has come under plenty of criticism in the years since it was first announced and introduced. The verifiable 12-digit identification number issued by the government was lambasted for trying to lock citizens into a system that allowed the government to track their every move and purchase, which some saw as worrying mission creep. At the same time, it was plagued with issues that meant many people did not want to use it. Despite the problems, the system ended up being widely adopted. But a recent academic paper highlights further issues with the system that have been there since the beginning, and that could spell disaster for anyone using it. The paper is the first comprehensive description of the Aadhaar infrastructure, collating information across thousands of pages of public documents and releases, as well as direct discussions with Aadhaar developers. Researchers from Johns Hopkins University studied the computer code that makes the Aadhaar system work and probed it for vulnerabilities.

A key issue

That's where they found the problem. In the paper, the authors describe the first known cryptographic issue within the system. It could be exploited by reverse engineering the string of numbers that Aadhaar uses to initialize AES-GCM, a value that can sometimes be duplicated and spoofed, enabling people to potentially open a bank account, fly internationally, or get a mobile phone SIM card in someone else's name. Luckily, a workaround prevents it from being exploitable at scale. The authors go further, however: they categorize and rate various security and privacy limitations and the corresponding threat actors, examine the legitimacy of alleged security breaches, and discuss improvements and mitigation strategies. The cryptographic vulnerability dates back to the early development of the app.
According to the researchers, Aadhaar's architects made a design choice that enabled the vulnerability to occur. They claimed they were aware of the potential issue but could not think of a better solution to the problem. The researchers suggest that a better solution is in fact quite simple: using the entire timestamp instead of just a few bits of it. The academics admit this workaround is still not perfect, but it is definitely better than the current implementation, assuming that implementation has not been updated. By the researchers' own admission, the vulnerability is not currently exploitable: while the payload might be vulnerable, it is encrypted whenever it is communicated, which makes it almost impossible for anyone to mount an attack off the back of the issue. Ultimately, though, they worry that if a targeted attack on one communication channel, perhaps through social engineering or other methods, allowed eavesdropping, then the attack surface would be huge. If an attacker were successful, the researchers claim, they could potentially authenticate as someone else to any Aadhaar-based authentication system, such as banks or telecom providers. It's a significant issue, and one to be considered by anyone trying to implement a nationwide identification system like Aadhaar. "Almost all the issues we found were due to a set of challenges unique to a system at Aadhaar's scale," the authors conclude.
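Why does a repeated or predictable initialization value matter for AES-GCM? GCM encrypts with a counter-mode keystream derived from the key and nonce, so repeating a nonce under the same key repeats the keystream. The sketch below is a toy hash-based stream cipher standing in for GCM's counter core (it is not Aadhaar's code, and all names and messages are illustrative); it shows that XOR-ing two ciphertexts produced with the same key and nonce cancels the keystream and leaks the XOR of the plaintexts:

```python
import hashlib

def keystream_encrypt(key: bytes, nonce: bytes, plaintext: bytes) -> bytes:
    """Toy stream cipher: keystream derived from (key, nonce), XORed with the data."""
    stream = b""
    counter = 0
    while len(stream) < len(plaintext):
        stream += hashlib.sha256(key + nonce + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(p ^ s for p, s in zip(plaintext, stream))

key, nonce = b"secret-key", b"reused-nonce"
c1 = keystream_encrypt(key, nonce, b"transfer 100 to A")
c2 = keystream_encrypt(key, nonce, b"transfer 900 to B")

# Same key + nonce => same keystream, so XOR-ing the two ciphertexts
# cancels the keystream entirely and leaks the XOR of the plaintexts.
leak = bytes(a ^ b for a, b in zip(c1, c2))
assert leak == bytes(a ^ b for a, b in zip(b"transfer 100 to A", b"transfer 900 to B"))
```

In real GCM, nonce reuse can also undermine the authentication tag, not just confidentiality, which is why deriving the nonce from the full, monotonic timestamp, as the researchers suggest, matters so much.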
Students are living in a digital world and, these days, an ordinary sign or blackboard won't cut it for educators who are looking to increase engagement in the classroom. The word "engagement" is the important part of that last sentence. There is a difference between engaging a student and simply displaying information to them. This is the major strength of Sharp Aquos interactive displays. Of course, with a wide range of Sharp Aquos models and uses for these vivid displays, it can be difficult to imagine the many uses for these displays beyond the obvious one, which is displaying content. This article will discuss the unique uses for Sharp Aquos interactive displays in the classroom and on campus.

Help Students Learn in a Way They Understand

The ultimate goal of any education plan is to help students take in new knowledge and retain it. Of course, teaching and lesson planning are much more complicated than that, but that is the most basic description of education if one were to boil things down. Sharp Aquos interactive displays for classrooms allow students to learn in ways they're already familiar with. In this digital world, there aren't many who have not used a touchscreen display before they even reach school age. Several studies also suggest this method of learning is very effective for students. Sharp Aquos interactive displays are for more than just displaying content: educators and students can interact with the display to learn in ways that were simply not possible before. For anyone who has heard of learning by doing, this is a great example of bringing that concept into the classroom while also retaining some of the classic features of a typical whiteboard or blackboard.

Navigate a New Space Intuitively

Readers who have attended college or university know what it is like stepping onto campus for the first time. Students are equal parts excited and overwhelmed. Where is the library? Where is lecture hall Z999?
How big is this place? Sure, paper maps are helpful – for those who have a sense of where they are and where they need to go. Instead, using Sharp Aquos interactive displays on a campus to help students navigate is helpful in overcoming those first-day-on-campus woes. The best part about these displays is that they can be used for a variety of tasks beyond simply finding where a classroom is. The interactive nature of the Sharp Aquos models allows students and faculty to access a wide range of information including upcoming events, important announcements, and more. Effectively, by using Sharp Aquos interactive displays, schools can reduce the need for multiple signs and information areas. Have Virtual Assemblies and Announcements The school assembly can be one of the most exciting times for students and one of the most stressful times for staff. Herding students into a gymnasium, keeping them attentive and quiet, then returning them to class is an art form, to say the least. Sharp Aquos interactive displays for classrooms give staff members much more flexibility when communicating. Of course, the gymnasium assembly will always be a fixture within a school schedule but there may be times where gathering the entire school at once is unnecessary. Perhaps a video assembly would be preferable. These displays also open up a variety of communication options for students and teachers. Virtual tours of sites instead of in-person field trips can reduce costs and the need for collecting permission slips. Video conferences with classrooms from around the world can bring pen pals closer than ever. The possibilities are truly endless with Sharp Aquos interactive displays, and with the imagination of children at work, these displays will certainly be used to their fullest potential. Take a Classroom or Campus to the Next Level With a range of Sharp Aquos models available, there is something for every size of campus and classroom. 
Whether it is immersing students in their lesson or simply taking the old basic sign display and kicking it up a notch, Sharp Aquos interactive displays can be the perfect solution. If you would like to learn more about the capabilities of these displays as well as the various Sharp Aquos models available, please contact CDS Office Technologies today.
What Is PCI Compliance?

The Payment Card Industry Data Security Standard (PCI DSS) is a standard of controls created by the Payment Card Industry Security Standards Council: an agreed-upon set of requirements for entities that directly or indirectly handle credit cards. The standard provides a technical and operational baseline for the appropriate acceptance and handling of cardholder data within a business environment. The stakeholders responsible for adhering to this standard are merchants, processors, acquirers, issuers, and service providers.

For the purposes of the PCI DSS, a merchant is defined as any entity that accepts payment cards bearing the logos of any of the five members of the PCI SSC (American Express, Discover, JCB, MasterCard, or Visa) as payment for goods and/or services. Note that a merchant that accepts payment cards can also be a service provider if the services sold result in storing, processing, or transmitting cardholder data on behalf of other merchants or service providers. For example, an ISP is a merchant that accepts payment cards for monthly billing, but it is also a service provider if it hosts merchants as customers.

A processor, sometimes referred to as a "payment gateway" or "payment service provider (PSP)," is an entity engaged by a merchant or other entity to handle payment card transactions on its behalf. While payment processors typically provide acquiring services, they are not considered acquirers unless defined as such by a payment card brand.

An acquirer, also referred to as a "merchant bank," "acquiring bank," or "acquiring financial institution," is an entity, typically a financial institution, that processes payment card transactions for merchants and is defined by a payment brand as an acquirer. Acquirers are subject to payment brand rules and procedures regarding merchant compliance.
An issuer, also referred to as an "issuing bank" or "issuing financial institution," is an entity that issues payment cards or performs, facilitates, or supports issuing services, including but not limited to issuing banks and issuing processors.

A service provider is a business entity that is not a payment brand and is directly involved in the processing, storage, or transmission of cardholder data on behalf of another entity. This also includes companies that provide services that control or could impact the security of cardholder data; examples include managed service providers that provide managed firewalls, IDS, and other services, as well as hosting providers and other entities. If an entity provides a service that involves only the provision of public network access, such as a telecommunications company providing just the communication link, the entity would not be considered a service provider for that service (although it may be considered a service provider for other services).

Where Do I Fall?

A PCI DSS assessment must be completed by all entities handling credit card information. The level to which you are audited typically depends on transaction volume, relative risk, and any history of a breach. The levels of PCI compliance are defined as level 1 through level 4, with level 1 being the highest level of audit, where an external certified Qualified Security Assessor (QSA) must audit production systems and procedures for compliance with the standard. For levels 2 through 4, self-assessment questionnaires (SAQs) can be completed, where a professional auditor is not engaged and the entity self-attests to appropriate implementation of the standard. Ultimately the acquirer, stakeholders, or the credit card brand(s) (Visa, MasterCard, JCB, American Express, Discover) will determine what level of audit your business must undertake. Companies that complete an SAQ will have different SAQ types (A through D) to choose from.
These SAQ variants correspond to how a business is classified and how it conducts business; the most encompassing is SAQ-D. The remaining SAQs cover specific business use cases, such as using a virtual terminal, having no cardholder data storage, or using imprint machines. If you are unsure which SAQ to complete, contact your processor or a QSA firm for additional guidance.

What Does This Do for Me?

Achieving and maintaining PCI DSS compliance has profound effects on a business and allows it to remain operational in the ecosystem of credit card transactions. The requirements lay out a minimum standard to adhere to in order to maintain the ability to interact with cardholder networks. Some acquirers and/or service providers may not wish to do business with an entity that is not PCI compliant, as the risks of compromised data or fraudulent transactions increase without certification. The risk to third-party vendors connecting to and receiving information from your network is much lower with PCI compliance in place, which may translate into better processing rates. Customers who know they are doing business with a PCI-compliant vendor gain assurance, within their own risk management strategy, that the business is committed to a baseline standard acceptable for handling cardholder data and the security functions around it. Adherence helps prevent data breaches and the other costly bills associated with non-compliance. The standard provides a baseline across the world, where large and often segmented systems, policies, and processes are configured to the same security rigor. It can also be a springboard to additional data security frameworks such as NIST 800-53 or HIPAA.

What Is the Standard?

The standard comprises 6 groups of controls encompassing 12 requirement families surrounding the security of cardholder data.
The standard covers multiple aspects of the environment and business practices:

1. Build and Maintain a Secure Network and Systems
2. Protect Cardholder Data
3. Maintain a Vulnerability Management Program
4. Implement Strong Access Control Measures
5. Regularly Monitor and Test Networks
6. Maintain an Information Security Policy (requirement 12: maintain a policy that addresses information security for all personnel)

Additional information about the standard can be found on the PCI Security Standards Council website, including testing criteria, report templates, and FAQs regarding PCI compliance. Know that each of these requirement families has many sub-requirements for fulfillment of the standard. There are also recurring requirements, including but not limited to penetration testing, Approved Scanning Vendor (ASV) scans, internal vulnerability scans, and firewall reviews. These activities must be performed throughout the year of compliance and be provable at the time of audit. An audit is a single snapshot in time and reflects only the state of the system at the time of the audit; there should be no forward-looking statements, implementation plans, or Corrective Action Plans (CAPs) associated with the reports or SAQ.

What Can I Do Now?

It's not always easy to stay up to date with PCI compliance. At MegaplanIT, we understand the challenges and risks associated with card security. Sometimes, even knowing how to set your protection controls isn't enough. That's where our experience comes in: we have a team of certified security professionals with the knowledge you need to ensure that your business stays compliant with the Payment Card Industry Data Security Standard (PCI DSS).

Looking for a knowledgeable partner for your cybersecurity and compliance efforts? We're here to help! We look forward to talking to you about your upcoming security test, compliance assessment, and managed security services priorities.
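As a concrete illustration of the "Protect Cardholder Data" requirement family, compliance teams often sweep logs and data exports for stray primary account numbers (PANs). The sketch below pairs a digit-run regex with the Luhn checksum to cut down false positives; the function names and patterns are illustrative, not part of any official PCI tooling.

```python
import re

# Candidate PANs: 13-19 digits, optionally separated by spaces or dashes.
PAN_RE = re.compile(r"\b(?:\d[ -]?){12,18}\d\b")

def luhn_ok(digits: str) -> bool:
    """Return True if the digit string passes the Luhn checksum."""
    total = 0
    for i, ch in enumerate(reversed(digits)):
        d = int(ch)
        if i % 2 == 1:          # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def find_candidate_pans(text: str) -> list:
    """Scan free text for digit runs that look like real card numbers."""
    hits = []
    for match in PAN_RE.finditer(text):
        digits = re.sub(r"[ -]", "", match.group())
        if 13 <= len(digits) <= 19 and luhn_ok(digits):
            hits.append(digits)
    return hits

# 4111111111111111 is a well-known Visa test number (passes Luhn).
log_line = "order ok card=4111 1111 1111 1111 user=42"
print(find_candidate_pans(log_line))  # ['4111111111111111']
```

A scan like this is only a discovery aid; any hit still needs to be traced back to the process that wrote the data, since the requirement is to stop storing PANs in the clear, not merely to find them.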
Our expert security consultants and QSAs are fully certified and have decades of experience helping businesses like yours stay safe from cyber threats. Set up a time to chat with us about your biggest payment security and compliance challenges so we can partner with you to solve them!

Make Our Team, Your Team! Our innovative IT security and compliance solutions are designed to deliver customized, cost-effective service on time, because your priorities are our priorities.
With a highly qualified team of PCI-DSS QSAs, Penetration Testers, and Information Security Consultants here at MegaplanIT, we will assess your unique company and business environment and design a path to security that will fit all of your needs.
Today In History May 21

878: Syracuse is captured by the Muslim forces in Sicily. The Siege of Syracuse in 877–878 ended with the fall of Syracuse, the capital of Byzantine Sicily, to the Aghlabids. The siege lasted from August 877 to 21 May 878, when the city, effectively left without support by the central Byzantine government, was sacked by the Aghlabid forces. Following their first landing in Sicily in the late 820s, the Aghlabids had tried several times, without success, to capture Syracuse. They were, however, able to gradually take over the western half of the island, and in 875 a new and energetic governor, Ja'far ibn Muhammad, was appointed, determined to take the city. Ja'far began the siege in August 877, but soon left it in the charge of his son Abu Ishaq while he retired to Palermo. The Arabs were well supplied with siege weapons, while the inhabitants of Syracuse were left largely unsupported by the Byzantine fleet, which was occupied transporting marble for a new church in Constantinople and was then delayed by adverse weather. As a result, the besieged population faced great hardship and starvation, described in detail in the eyewitness account of Theodosios the Monk. At last the Aghlabids managed to breach the seaward walls, and on 21 May 878 they broke through into the city. The defenders and much of the populace were slaughtered, while others, including Theodosios, were taken prisoner. The Byzantine patrikios who led the defense surrendered with a few of his men, but they were executed a week later, while a handful of soldiers escaped and carried the news east to the fleet that had belatedly set out to relieve the city. The Muslims could not capitalize on this success because of internal rivalries, which even led to a full-scale civil war.
Small-scale fighting with the Byzantines continued, with neither side gaining a decisive advantage, until the arrival of the deposed Aghlabid emir Ibrahim II, who in 902 rallied the Sicilian Muslims and captured Taormina, effectively completing the Muslim conquest of Sicily, although a few outposts remained in Byzantine hands until 965.

1819: The first bicycles ("swift walkers") in the US are introduced in New York City. On this day, May 21, 1819, the first bicycle in the U.S. was seen in New York City. Variously called "velocipedes," "swift walkers," "hobby horses," or "dandy horses" after the dandies who often rode them, they had been imported from London that same year. The pedal-and-chain bicycles of today descend from the invention of Pierre Lallement of Nancy, France, who saw one of the dandy horses in a park and was inspired to add a drivetrain to it. After a short stint manufacturing them in France, Lallement decided to move to the U.S. There, with James Carroll of New Haven, Connecticut as his backer, he filed the earliest U.S. patent for a pedal bicycle.

1918: The US House of Representatives passes an amendment allowing women to vote. The first national women's rights convention was held in 1850 and then repeated annually, providing an important focus for the growing woman suffrage movement. In the Reconstruction era, the Fifteenth Amendment to the U.S. Constitution was adopted, granting African American men the right to vote, but Congress declined to extend enfranchisement into the sphere of gender. In 1869, the National Woman Suffrage Association was founded by Susan B. Anthony and Elizabeth Cady Stanton to push for a woman suffrage amendment to the U.S. Constitution. Another organization, the American Woman Suffrage Association, led by Lucy Stone, was formed the same year to work through the state legislatures.
In 1890, these two groups were united as the National American Woman Suffrage Association. That year, Wyoming became the first state to grant women the right to vote.

1956: The US explodes the first airborne hydrogen bomb over Bikini Atoll. The United States began testing nuclear weapons at Bikini Atoll in 1946. However, early bombs were large and cumbersome devices that were detonated from the ground. The practical option of dropping the weapon on an enemy had remained a theoretical possibility until a successful test in May 1956. The hydrogen bomb dropped over Bikini Atoll was carried by a B-52 bomber and released at an altitude of more than 50,000 feet. The device exploded at around 15,000 feet. This bomb was far more powerful than those previously tested and was estimated at 15 megatons or larger (one megaton is roughly equivalent to 1 million tons of TNT). Observers said that the fireball caused by the blast measured at least four miles in diameter and was brighter than the light of 500 suns.
One of the most crucial things for the healthcare sector during the ongoing global pandemic, amidst many other competing priorities, is keeping a check on its cybersecurity measures. During the first half of 2020, the Department of Health and Human Services (HHS) recorded a 50% increase in cybersecurity breaches in healthcare alone. Such a massive outburst of data breaches not only puts the safety of patient data at risk but also highlights the vulnerabilities within hospitals and healthcare service providers that cybercriminals have exploited in response to COVID-19. Of all the major drivers, COVID-19 has played the largest role in driving a new wave of threats like ransomware attacks, mostly targeted at hospitals and other healthcare entities. Ransomware is expected to remain one of the biggest priorities in cybercriminals' portfolios in 2021 as well. That is why it is essential for organizations to take a collective approach to security and protect their patients as well as their own business.

Healthcare Cybersecurity Challenges in India

From healthcare, banking, and shopping to studying and exploring the world, Indians use the internet massively. As a result, the associated cybercrimes in India have also increased in proportion to the level of usage. Alongside the USA, India has become one of the top 10 spam-sending countries in the world. The online security firm Symantec ranked India among the top five countries affected by cybercrime. Strikingly, 75% of the data breaches in India are perpetrated by outsiders, and 15% of these breaches are specifically targeted at the healthcare sector alone (Source: NITI Aayog, India). A variety of devices are used by internet users in India, ranging from high-end smartphones to relatively cheap ones that have little to no infrastructure for cybersecurity.
This non-uniform ecosystem makes it difficult to provide a uniform security protocol across the nation and leads to widespread data security issues. Although the private sector in India remains a prominent player, it still fails to report and respond to security breaches in digital networks. Another prominent challenge is the lack of awareness among users as far as privacy-enhancing technologies are concerned.

Recent Healthcare Cybersecurity Attacks

2020 has been a catastrophic year for a number of reasons, COVID-19 being the foremost of them. And despite being the sole saviour of humanity in these testing times, the healthcare industry itself has not remained untouched by cyberattacks. Here are some of the recent cybersecurity attacks that shook the healthcare industry:

1. Ransomware Attack on Blackbaud: In May 2020, Blackbaud faced a massive ransomware attack in which an estimated 10 million customer records were compromised. Although the company's cybersecurity team was able to stop the attackers midway, the criminals still made off with a good chunk of data, including names, health details, contact details, and more.

2. Cyberattack on Luxottica: The eyewear conglomerate Luxottica saw one of the worst cyberattacks of 2020 when, in August, attackers hacked the web-based application Luxottica manages for scheduling appointments. It was found that prescriptions, health insurance details, appointment dates and times, credit card information, and other data of around 8.3 lakh patients were stolen.

3. Aspen Pointe Data Breach: The cyberattack on Aspen Pointe was detected and revealed in September 2020. The behavioural and mental health provider issued a statement saying that the data of approximately 3 lakh of its patients was compromised by the attack.
The company had to halt a majority of its operations for a number of days as a result of the cyberattack. A thorough investigation into the matter revealed that the hackers had gathered important information such as bank account details, dates of birth, and contact details from the targeted server.

4. Ransomware Attack on Magellan Health: The servers of Magellan Health were hit by a ransomware attack in April 2020. Nearly 3.65 lakh patients and employees were impacted by this cyberattack. The hackers gained access to the security systems by leveraging a social engineering phishing scheme that impersonated a Magellan Health client and had been staged five days before the attack. Employee information such as confidential credentials and passwords, and patient data like contact details, treatment information, and health insurance account information, was stolen.

Strategies for Improving Healthcare Cybersecurity

Healthcare systems are among the major custodians of Protected Health Information (PHI). This serves as a valuable resource that threat actors can use to enable identity theft. Staying ahead of such a high level of threat requires a concentrated and proactive approach. Here we have listed a few measures of paramount importance that can enhance cybersecurity within your organization.

1. Make use of big data and analytics for making informed and strategic security decisions: The first step towards securing healthcare information is to discard obsolete technology for managing patient data and replace it with the latest technologies, which have higher resistance to cybercrime. Security Information and Event Management (SIEM) has been the traditional solution used in data centers. However, SIEMs cannot handle large volumes of data, which makes them inefficient after a certain point. There is a tremendous need to employ intelligent analytics to automate mountains of data in a secure manner.
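As a toy illustration of the analytics idea above, the sketch below aggregates raw authentication events per source and flags sources whose failure counts sit well above the mean. A real deployment would run on a streaming pipeline over far richer data, and all field names here are invented for the example, but the shape of the logic is similar.

```python
from collections import Counter
from statistics import mean, pstdev

def flag_anomalous_sources(events, threshold_sigmas=2.0):
    """Count failed logins per source IP and flag statistical outliers."""
    failures = Counter(e["src"] for e in events if e["outcome"] == "fail")
    if len(failures) < 2:
        return sorted(failures)  # too little data to compare against
    counts = list(failures.values())
    mu, sigma = mean(counts), pstdev(counts)
    cutoff = mu + threshold_sigmas * sigma
    return sorted(src for src, n in failures.items() if n > cutoff)

# A handful of benign sources with a few typos each, plus one brute-forcer.
baseline = []
for ip, n in {"10.0.0.2": 2, "10.0.0.3": 2, "10.0.0.4": 2,
              "10.0.0.6": 3, "10.0.0.7": 3, "10.0.0.8": 3}.items():
    baseline += [{"src": ip, "outcome": "fail"}] * n
events = baseline + [{"src": "10.0.0.5", "outcome": "fail"}] * 40

print(flag_anomalous_sources(events))  # ['10.0.0.5']
```

The point of the exercise is the shift in output: instead of thousands of raw log lines, the security team receives a short, ranked list of sources worth investigating.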
Cybercriminals are continually devising better intelligence on security solutions so they can adopt less-visible behavioral patterns and better conceal their actions. Therefore, data must be analyzed quickly to identify actionable insights and keep attackers at bay. Big data and analytics convert unstructured log and SIEM data into a format that enables informed, strategic decision making, and do away with the false positives that afflict SIEMs. This allows security teams to respond to threats before data leaves the network.

2. Prepare an Incident Response Plan: Since cyberattacks have become inevitable, the development of an effective Cyber Incident Response Plan (CIRP) has become vital for businesses that aim to stay ahead of their adversaries. An Incident Response Plan (IRP) enables organizations to prepare for inevitable security incidents, recover thoroughly when attacks occur, and respond effectively to evolving threats.

3. Deploy IAM: Identity and Access Management (IAM) is all about outlining and managing the access privileges and roles of users and devices across a variety of on-premise and cloud applications. Deploying stringent authentication and authorization capabilities on a centralized platform provides businesses and IT professionals with a consistent method of managing user access throughout the identity lifecycle, and would certainly go a long way in improving cybersecurity.

4. Implement strict measures to tackle BYOD programs: Bring-your-own-device (BYOD) programs are a huge concern for many healthcare organizations that permit their employees to bring their personal laptops, tablets, and smartphones. While working, employees install and use mobile applications on their personal devices, exposing corporate data to additional risks. It is widely claimed that 98% of Android applications have security vulnerabilities.
These unsafe practices are widespread in the healthcare industry, and IT departments rarely have the time and resources to do anything about it. Many healthcare organizations lack even the most basic mobile device management (MDM) and BYOD tools, policies, and processes. BYOD and mobile threats change almost constantly due to the proliferation of new mobile applications. Healthcare organizations need to implement adaptive technologies to manage identities and to better control the data being accessed. Appknox is a third-party tool that helps enterprises tackle such security issues: it detects loopholes and vulnerabilities in mobile apps, reports the problems, and offers a compliance-oriented path to fixing them.

5. Understand HIPAA requirements and healthcare compliance: As I highlighted in my previous post, HIPAA compliance alone is not sufficient to build a rock-solid security system. Though HIPAA is the standard compliance framework for healthcare, the law in itself is not foolproof in keeping data safe. A good example of this is encryption. The law does mention encryption but leaves an element of uncertainty. According to the law: "Encryption (Addressable) - Implement a mechanism to encrypt electronic protected health information whenever deemed appropriate." Hence, companies should review the HIPAA requirements properly and work on a compliance plan so that every single parameter gets covered to ensure full security.

6. Train your employees about the security risks involved in accessing links and attachments in email: Employees invariably click on emails and other attachments even when they are told about the risks involved. This gives hackers an open door to attack the network, and while 75 percent of attacks take only minutes to begin data exfiltration, they take much longer to detect. Securing email and web gateways will help reduce the instances of network breaches.
This includes rewriting or sandboxing suspicious URLs to detect drive-by attacks, and deploying authentication, endpoint, network, and gateway controls that share information for an orchestrated reduction of the attack surface. Other measures include implementing a solid supply chain and vendor management system, promoting education, training, and awareness (ETA), tightening access control lists (ACLs), and knowing what key intellectual property exists on the network and where it is located.

7. Monitor your internal systems and logs for evidence of issues: An automated bot or process that periodically runs through the system to detect loopholes is a great way to handle a threat. It will help you spot vulnerable portions of the system in time and rectify them before much damage is done. If all of this becomes overwhelming, there are third-party security tools available to help you detect security loopholes in your system and offer compliance checks. At Appknox, we help healthcare businesses detect vulnerabilities and loopholes in mobile applications.

8. Perform regular security tests: Existing security vulnerabilities and other weak areas within the security infrastructure of your organization can be identified and mitigated by conducting regular security tests. Businesses can avoid costly data breaches and reduce many other detrimental impacts of a breach by deploying testing techniques like SAST, DAST, and API testing. For thorough security scans, it is recommended to rely on highly trusted vendors like Appknox, which is widely known for its advanced vulnerability assessments and penetration tests; with its vast test-case coverage, you simply don't need to worry about vulnerabilities anymore.

The current pandemic has forced healthcare organizations to introduce widespread infrastructural changes to their business. Such a large-scale transformation has also created gaps in their existing security systems.
These gaps have given cybercriminals an opportunity to exploit flaws and infiltrate behind the firewalls of these organizations. In order to secure sensitive healthcare data and prevent serious security incidents, an "all in" approach to security is required across the organization. A strong strategic commitment and adherence to established best practices can certainly go a long way in ensuring an all-around security posture within healthcare organizations.
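The "automated bot" suggested in strategy 7 can be as simple as a scheduled job that sweeps recent log lines for known indicators and raises an alert past a threshold. A minimal sketch follows; the indicator patterns, log format, and threshold are invented for illustration, and a real deployment would load a curated, regularly updated rule set.

```python
import re

# Hypothetical indicators of trouble, keyed by a category name.
INDICATORS = {
    "failed_login": re.compile(r"authentication failure"),
    "sql_error":    re.compile(r"SQL syntax.*error", re.IGNORECASE),
    "big_download": re.compile(r"bytes_out=(\d{9,})"),  # >= 100 MB-ish
}

def sweep(log_lines, alert_threshold=5):
    """Tally indicator hits per category; return categories worth alerting on."""
    tallies = {name: 0 for name in INDICATORS}
    for line in log_lines:
        for name, pattern in INDICATORS.items():
            if pattern.search(line):
                tallies[name] += 1
    return {name: n for name, n in tallies.items() if n >= alert_threshold}

logs = ["pam: authentication failure for user admin"] * 6 + [
    "GET /export bytes_out=1048576",   # 1 MB: below the big-download bar
]
print(sweep(logs))  # {'failed_login': 6}
```

Run from cron or a scheduler every few minutes over the newest slice of the log, a sweep like this catches the obvious brute-force and exfiltration patterns cheaply, leaving deeper analysis to the SIEM or analytics layer discussed earlier.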
Apple loves bragging about how secure its devices are, and not without reason: there are lots of security features you probably use daily, including code autofill, password-reuse auditing, Safari's built-in privacy protections, and many more. The same goes for developers. For example, Apple doesn't release its source code to app developers, for security reasons, and the owners of iOS devices can't modify the code on their phones themselves. But there are many other, less-known security features Apple uses to prevent its devices from being hacked. We will discuss how exactly Apple handles user data protection on its devices and what security measures it takes. I've divided the article into two parts, covering popular iOS security features for user data storage and transportation.

How Apple Handles Secure Data Storage

Apple has an extensive Apple Platform Security guide I'll be referring to throughout the article. This guide covers hardware security, data encryption, system security, and many other security-related issues. iOS-powered devices come with an A7 (or later) processor and a Secure Enclave (a coprocessor) that provides an additional security layer, powering iOS security features in a hardware-accelerated way. Let's start with the features Apple uses for secure data storage:

1. Apple App Sandbox

Apps are one of the most critical elements of the security architecture. While they give users productivity benefits, they may also affect the system's security and user data if not handled the right way. That's why users are supposed to download iPhone, iPad, and iPod touch apps only from the App Store. Any company can create an app for iOS, but only apps that comply with App Store guidelines will be published. And these apps run in a sandbox, a directory they can use to store data in. Sandboxing helps protect user data from unauthorized access, as apps can only use the data stored in their home directory.
If an attacker manages to exploit security holes in your app, the sandbox limits the damage by restricting the app's access to files, preferences, network resources, and hardware.

2. Data Protection API

The Data Protection feature secures app files and prevents unauthorized access to them. It is enabled as soon as the user sets a passcode for the device. The process is automatic, hardware-accelerated, and invisible to the user: files are read and edited as usual, while encryption and decryption happen behind the scenes. There are four data protection levels:
- No protection. The file is not encrypted and always accessible.
- Complete until first user authentication (the default level). The file is encrypted until the user unlocks the device for the first time after boot; it then remains accessible until the device shuts down or reboots.
- Complete unless open. The file remains encrypted until an app first opens it; the data then stays accessible even if the device is locked.
- Complete. The file is accessible only while the device is unlocked.

If you don't choose a protection level when creating a file, iOS applies the default level automatically. It is generally best to use the highest protection level Apple offers, but if you need to access files in the background while the device stays locked, complete protection may not be the right option for you.

3. Keychain

The keychain is a secure space used to store small pieces of data in an encrypted database. Each iOS application gets its own space in the keychain that no other app can access. There's no need to store encryption keys in your app: you rely on the system to provide the highest security level. This feature is great for people who manage lots of online accounts and (in a perfect world) have a unique password for each. Remembering each new string of letters and numbers is impossible, while writing them down is insecure.
The same is true of reusing one password across multiple accounts. The keychain solves this problem by giving users a mechanism to store these pieces of data securely. It isn't limited to passwords, either: users can also keep information such as credit card details or even short notes.

How Secure Is Data Transmission?

Alongside data storage stands the communication between an app and its remote counterparts. Here are the security measures iOS offers for this case:

1. App Transport Security

iOS includes a networking feature called App Transport Security (ATS). ATS requires that all connections use HTTPS secured with the Transport Layer Security (TLS) protocol, unlike standard HTTP connections, which aren't encrypted. If connections don't meet the security requirements, ATS blocks them. ATS can be configured to loosen these restrictions, though Apple warns against this, noting that it "reduces the security of your app".

2. TLS Pinning

HTTPS connections are checked by default: the system inspects the server certificate and verifies that it is valid for the domain. In theory, this should prevent the device from connecting to malicious servers. In practice, there are loopholes that allow attackers to perform so-called man-in-the-middle attacks, for example by compromising a certificate authority or changing the device's settings to trust a malicious certificate. That way, attackers could read all messages sent between the client and the server. TLS pinning restricts which certificates are considered valid for a particular host, making sure the app communicates only with the verified server. iOS developers implement pinning by bundling a list of valid certificates with their app; the app checks that the certificate presented by the server is on the list, and only then communicates with it.

3. End-to-End Encryption

End-to-end encryption provides the highest level of security for data in transit.
The information is protected with a key combined with your device passcode, a detail only the owner knows. Messages are encrypted in such a way that only the sender or the receiver can decrypt them; neither Apple nor anyone in between can read the data. Details like Apple Card transactions (iOS 12.4 or later), health and home data, search history, payment information, Wi-Fi passwords, and Siri information are stored in iCloud secured by end-to-end encryption.

1. How does Apple protect users' privacy?
Apple offers quite a few stringent privacy controls and security features for iOS users, including those for data storage and transmission.

2. Is iOS really more secure than Android?
Apple offers lots of security features and doesn't release its source code to developers. That's why Apple's iOS operating system has long been considered more secure than Android. Still, that doesn't mean it can't be hacked.

3. How does Apple handle secure data storage?
The best-known iOS features for data storage:
- Sandboxing (every app has a sandbox, a directory it can use to store data in)
- Data Protection API (secures app files and prevents unauthorized access to them)
- Keychain (a secure space used to store small pieces of data)

4. How secure is data transmission?
iOS has the following features for secure data transmission:
- App Transport Security (requires that all connections use HTTPS with TLS)
- TLS pinning (restricts which certificates are considered valid for a particular host)
- End-to-end encryption (protects data with a key combined with the device passcode)
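The TLS pinning check summarized above is not an Apple-only idea. As a platform-neutral sketch (the helper names are illustrative, and a real client would obtain the DER-encoded certificate from the TLS handshake rather than from a byte string), pinning reduces to comparing a certificate's hash against an allow-list shipped with the app:

```python
import hashlib

def fingerprint(cert_der: bytes) -> str:
    """SHA-256 fingerprint of a DER-encoded certificate, as lowercase hex."""
    return hashlib.sha256(cert_der).hexdigest()

def is_pinned(cert_der: bytes, pinned: set) -> bool:
    """Accept the server only if its certificate matches a pinned fingerprint."""
    return fingerprint(cert_der) in pinned

# In a real app the fingerprints would be hard-coded in the bundle; here we
# derive one from a stand-in "certificate" so the example is self-contained.
trusted_cert = b"...DER bytes of the legitimate server certificate..."
pins = {fingerprint(trusted_cert)}

print(is_pinned(trusted_cert, pins))             # True
print(is_pinned(b"attacker certificate", pins))  # False
```

Pinning the certificate (or its public key) this way means that even a certificate signed by a trusted-but-compromised authority is rejected, which is exactly the man-in-the-middle loophole the article describes.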
Every day we read about more or less sophisticated attacks that allow threat actors to compromise all kinds of computing systems. But what if your life depended on the proper functioning of those devices? The security of medical devices is a critical topic that US authorities have addressed many times; the latest effort is an investigation run by the U.S. Department of Homeland Security into two dozen cases of suspected cybersecurity flaws in medical components and hospital equipment. The devices and equipment under investigation cover a wide range of systems, including medical imaging equipment and hospital networking systems. The authorities suspect that hackers have exploited flaws in these systems to run cyber attacks, according to a senior official at the agency cited by Reuters. The US ICS-CERT is assessing several products, including an infusion pump from Hospira Inc and implantable heart devices commercialized by Medtronic Inc and St Jude Medical Inc. Rumors refer to an alleged vulnerability in a type of infusion pump discovered by Billy Rios, who declined to name the manufacturer. "Two people familiar with his research said the manufacturer was Hospira," Reuters reported. Although there is no official news of cyber attacks against these devices, the US Government fears that ill-intentioned actors could run a remote attack causing malfunctions with dramatic consequences. The US ICS-CERT is working with manufacturers of medical devices to identify flaws that could be exploited to expose confidential data or attack hospital equipment. "These are the things that shows like 'Homeland' are built from," said the official, referring to the U.S. fiction spy drama in which the fictional vice president of the United States is killed by a cyber attack on his pacemaker. "It isn't out of the realm of the possible to cause severe injury or death," the official added.
At the time of writing, the US ICS-CERT has not disclosed the names of the companies under investigation, and Hospira, Medtronic, and St Jude Medical have declined to comment on the events. In late 2012, the US Government Accountability Office (GAO) produced a report highlighting the necessity of securing medical devices such as implantable cardioverter defibrillators and insulin pumps. The recommendation was directed at the Food and Drug Administration (FDA), which was urged to approach the problem urgently in light of incidents intentionally caused on some devices. The FDA recently released guidelines for manufacturers and healthcare providers to improve the security of medical devices; here too, the concern relates to intentional threats. "The conventional wisdom in the past was that products only had to be protected from unintentional threats. Now they also have to be protected from intentional threats too," said William Maisel, chief scientist at the FDA's Center for Devices and Radiological Health. He declined to comment on the DHS reviews. The researcher Billy Rios explained that he wrote a program that could remotely control the drug dosage delivered by an insulin pump, forcing it to inject a lethal dose. "This is an issue that is going to be extremely difficult to patch," said Rios, who shared the results of his analysis with the DHS. The DHS is also investigating alleged vulnerabilities affecting implantable heart devices from Medtronic and St Jude Medical, according to two people familiar with the matter. Both companies declined to comment on the claims but confirmed that they treat security as a serious issue. (Security Affairs – Medical devices, US ICS-CERT)
Slightly over one year ago, several major distributed denial-of-service ("DDoS") attacks took place, including a major event affecting the domain name service provider Dyn, which caused outages and slowness for a number of popular sites, including Amazon, Netflix, Reddit, SoundCloud, Spotify, and Twitter. Now, a new Internet of Things (IoT) botnet, called IoT Reaper, or IoTroop, has been discovered by researchers and presents a threat that could dwarf the 2016 attacks and create a major disruption to internet activity around the world.

As we have explained previously, at their most basic level, DDoS attacks work by sending a high volume of data from different locations to a particular server or set of servers. Because the servers can only handle a certain amount of data at a time, these attacks overwhelm them, causing them to slow significantly or fail altogether. This prevents authorized users from being able to use or access the services provided via the attacked servers. As we warned a year ago, we expect that these types of widespread outages may become more common in the future because of security weaknesses related to the Internet of Things, coupled with increased adoption of IoT devices in the United States and worldwide.

Last year, at least some of the sizeable attacks were attributed to a malware variant named Mirai, which commandeered various internet-enabled digital video recorders (DVRs), cameras, and other IoT devices and was then utilized by attackers to launch multiple high-profile, high-impact DDoS attacks against various Internet properties and services. Mirai operated primarily as a "DDoS-for-hire" service in which attackers launch DDoS attacks against a target in exchange for payment, generally made in Bitcoin. While these Mirai-based attacks were successful in creating extensive outages, the method for gaining control over the IoT devices was relatively straightforward: it relied on weak or default passwords on those devices.
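Because a DDoS succeeds by exceeding a server's capacity, one elementary building block of mitigation (greatly simplified relative to the commercial scrubbing services discussed later) is rate limiting. A toy token-bucket limiter, with hypothetical parameters, illustrates why a burst beyond capacity gets dropped:

```python
class TokenBucket:
    """Toy token-bucket rate limiter: holds up to `capacity` tokens,
    refilled at `refill_rate` tokens per second; each request costs one."""

    def __init__(self, capacity: float, refill_rate: float):
        self.capacity = capacity
        self.refill_rate = refill_rate
        self.tokens = capacity
        self.last = 0.0  # timestamp of the last refill

    def allow(self, now: float) -> bool:
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.refill_rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # request dropped: the bucket is empty

bucket = TokenBucket(capacity=3, refill_rate=1.0)  # 3-request burst, 1/sec steady
# A burst of 5 requests at t=0: only the first 3 get through.
print([bucket.allow(0.0) for _ in range(5)])  # [True, True, True, False, False]
# Two seconds later, two tokens have refilled.
print(bucket.allow(2.0))  # True
```

Real mitigation operates at network scale across many sources, but the principle is the same: absorb what capacity allows and shed the rest.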
Conversely, as researchers from Netlab 360 and Check Point recently reported, a new IoT botnet, named IoT Reaper or IoTroop, builds on portions of Mirai's code. Instead of exploiting weak passwords on the devices it infects, the new botnet uses known security flaws in the code of those insecure devices to take control of them, and then searches for other vulnerable devices to spread itself further. Vulnerable devices include various routers made by leading manufacturers, such as D-Link, Netgear, and Linksys, in addition to the types of IoT devices used by Mirai. Although there has been some confusion about the current size of the Reaper botnet, with estimates ranging from 10,000 to 30,000 infected systems, Netlab 360 advised in an updated post that one of the Reaper control servers appears to have a queue of 2 million IoT devices that have been identified as vulnerable to a Reaper attack but had not yet been compromised. If this number is correct, it would represent a substantial increase over the number of infected devices used in the Mirai attacks. To date, the motivation behind Reaper is unknown; however, researchers have found that it uses a Lua-based software platform that allows new code modules to be downloaded to infected machines. That means that, depending on the types of devices under its control, it could shift its tactics at any time if the attackers simply distribute a new module to the command server.
While some commentators have suggested that Reaper has some design flaws, including the fact that its control servers rely on static domain names and IP addresses and it communicates over unencrypted HTTP channels, which make it a less potent threat than Mirai, they acknowledge that Reaper could, one day, pose a serious threat because of its exploit mechanism, which targets specific firmware vulnerabilities, particularly because recently discovered exploits appearing in the malware suggests attackers are actively and diligently expanding the base of vulnerable devices Reaper may be able to infect. Indeed as Pascal Geenens, a researcher at security firm Radware, explained: “if [Reaper’s] developers were to substantially overhaul their malware to add new exploits and better protect its control infrastructure, Reaper has the potential to grow into an unprecedented size. What’s more, the developers’ use of the Lua programming language makes it easy to use Reaper for a variety of attacks beyond DDoSes.” With the increasing threat of these attacks, coupled with the number of different ways that they can be leveraged, organizations should take steps to prepare for, respond to, and mitigate some of the potential fall-out associated with a DDoS attack. Outlined below are some of the steps that organizations can consider to mitigate their exposures before, during, and after a DDoS attack. Before an Attack - Incident Response Planning. As with any potential security incident, effective planning can help reduce or eliminate some of the potential business harms and legal consequences of a DDoS attack before an attack occurs. Companies should include in their Incident Response Plan (IRP) emergency situations like DDoS or Ransomware attacks that have the propensity to affect critical business operations. 
E-commerce companies and others that rely heavily on website traffic may wish to identify “mission critical” resources and identify alternative solutions that can be used in the event of website failure following a DDoS attack. - Negotiating/Reviewing Contractual Liability. Losses of service could affect an organization’s contractual obligations; for example, unavailability of resources may impact uptime and reliability guarantees contained in Service-Level Agreements or other similar contract provisions. Contracting parties should be certain to consider and address these issues during the contract negotiation process to ensure that the risks associated with these incidents are properly allocated between or among the parties involved. Organizations may wish to address the potential repercussions from a DDoS in various contractual provisions, including: (i) revising force majeure provisions or other exceptions to contractual service guarantees to exclude downtime attributable to these type of incidents from uptime or reliability calculations; (ii) creating disclaimer or limitation of liability language in agreements that expressly limits or eliminates potential liability associated with the inability to perform transactions during a system or website outage; (iii) carefully drafting security incident notification clauses to avoid contractual liability where notice might be required under a contract, but would not be required under any other law or regulation; and (iv) allocating risk and liability for potential outages in terms governing limitations on liability and indemnity. - DDoS Mitigation. Organizations should consider retaining third parties like Akamai or Cloudflare to provide DDoS mitigation services designed to combat these attacks by absorbing or deflecting DDoS traffic. 
For companies that are already using these services, we recommend reviewing the level of services provided to ensure that they have an adequate amount of protection in light of the increasing data volume seen in some of the more recent IoT-based attacks. Historical levels of protection may be insufficient in light of the increasing numbers of IoT devices that are becoming more easily exploitable. - Documenting Security and Preventative Measures. In anticipation of potential litigation and regulatory enforcement, we recommend that organizations document the various security measures that are being implemented, including those designed to prevent and mitigate the effects of DDoS attacks. Documenting security practices and decisions as they are being implemented and made can help bolster arguments that a companies’ actions were reasonable under the circumstances. Although, in the context of litigation or a regulatory investigation, these actions will be viewed in hindsight by a court, jury, or regulator, contemporaneous information about these can significantly bolster defenses against claims of negligence or breach of contract by litigants or non-compliance by regulators. Companies should seek to implement a “reasonable” level of security and mitigation with respect to DDoS attacks to help defend against litigation. During an Attack - Establishing and Preserving Attorney-Client Privilege. A key consideration in the investigation of and response to any cyber incident is establishing and preserving attorney-client privilege or work-product doctrine protections. As we have previously outlined, important steps in preserving privilege include: (i) retaining or involving legal counsel early in the process, (ii) focusing the investigation on providing legal advice to the organization, including providing legal advice in anticipation of litigation and regulatory inquiries, and (iii) retaining forensic or security experts through legal counsel. 
- Balancing Remediation and Investigation Objectives. Unfortunately, remediating an attack and restoring operations may adversely impact evidence needed to investigate an incident. We recommend that organizations confer with forensic experts and legal counsel as soon as possible following the start of an attack to ensure that the actions taken in response will not compromise important evidence. - Involving Law Enforcement. Organizations often reflexively want to contact law enforcement in response to a data incident and while this may be beneficial in many circumstances, there are some legal considerations that organizations should weigh before doing so. The frequency and severity of these attacks has led to more attention from various law enforcement agencies and significantly more success in identifying and prosecuting attackers. Federal law enforcement agencies often have intelligence on various groups responsible for these attacks and, as a result, may be able to provide important information in responding to, containing, and remediating these attacks. However, law enforcement agencies may not always be able to share much information, particularly where the information relates to an ongoing investigation. Additionally, alerting law enforcement can result in having the agency become significantly more involved in, or even controlling, the investigation of the incident. Law enforcement involvement could impact privilege issues and, more generally, may not be ideal in all circumstances. We suggest that organizations consult with legal counsel to evaluate the potential advantages and disadvantages of notifying law enforcement based on their specific circumstances. - DDoS Mitigation. Companies should be aware that many DDoS mitigation vendors offer emergency DDoS hotlines or protection services that can be deployed for new customers, even where a company has not proactively secured such services. 
Engaging a DDoS mitigation service provider after an attack has started can help to reduce the length and severity of an attack, allowing a company to get its affected servers and websites back up and running more quickly. After an Attack - External Communications. When and how an organization communicates about a DDoS attack may impact its exposure and liability following an incident. These communications may include: (i) general communications about the incident with media, investors, customers, or regulators; and (ii) formal notifications ranging from those necessitated by legal or regulatory requirements to formal contractual notices necessary to exercise force majeure or emergency circumstances. - Further Investigation. Following an organization’s remediation and restoration efforts, it is often necessary to conduct a further investigation into the circumstances surrounding the attack and to determine whether and to what extent any legal obligations have been triggered. Depending on the organization’s capabilities and resources, it may be possible to leverage any incident detection measures the organization has to identify indicators of compromise and confirm that the malicious activities were limited to the DDoS attack. In some circumstances, retaining independent forensic investigators may be necessary to conduct a thorough investigation into whether any unauthorized access or acquisition to customer information or confidential business information occurred prior to or during the attack. - Preparing for Potential Litigation or Claims. As mentioned previously, DDoS attacks could result in litigation or regulatory scrutiny for a variety of reasons. 
Examples of potential actions stemming from such an attack include: - An action brought by financial services customers alleging consequential damages and lost profits based on an inability to access financial accounts or buy and sell stock during an attack; - Claims against service providers for failing to provide contractually-guaranteed service levels; - Claims based on allegations of the theft of customer information, trade secrets, intellectual property, or other confidential or protected information - Claims alleging negligence or fraud based on a company’s failure to adequately protect against a DDoS attack or appropriately limit liability in its agreements with customers. Anticipating and preparing for this type of potential exposure can better position the organization for defending against these claims. Relatedly, organizations are required to preserve potentially relevant information and documents once they reasonably anticipate litigation. To that end, organizations should consult with legal counsel to determine when it is appropriate to put litigation holds in place to ensure that they avoid potential spoliation issues and sanctions. Organizations in this position must also consider whether and how an assertion of privilege protections under the work-product doctrine may affect its preservation obligations. If a company asserts that materials have been prepared by and with legal counsel “in anticipation of litigation” and are therefore protected, it should consider whether this assertion also triggers an obligation to preserve evidence at that time.
Mac users are highly vulnerable to ransomware, but with the proper precautions, they can keep the risk to a minimum. Like many aspects of IT, cyber security is always in motion. No sooner do hackers develop a new method to launch attacks or breaches than security experts find new ways to block them, which hackers then set to work undermining. Few forms of attack have attracted more attention from both attackers and defenders than ransomware, or software that holds your files hostage. By taking proper security measures, you can keep your Mac safe from these and other harmful programs.

Rundown On Ransomware

Ransomware refers to programs that deny users access to their files, usually as leverage to make them pay a fee, or ransom. These programs may enter your system through hyperlinks or email attachments. They then prevent you from using key parts of your computer, either by encrypting individual files or, in rare cases, locking the entire screen. Often built by criminal syndicates or other well-resourced groups, ransomware can be impossible to remove, forcing you to either pay the ransom or give up your data.

Although Apple advertises its products as being resistant to malware, the 2016 Transmission incident demonstrates that ransomware is indeed a threat to Macs. In that instance, Mac users were infected by the KeRanger ransomware, which had been bundled with a compromised installer for the Transmission BitTorrent client. The program waited a few days before locking them out of their files, letting them back in only in exchange for bitcoins. Given the growing popularity of Apple products, attacks of this kind are likely to become more common.

As serious as ransomware is, you need not be a security genius to avoid it. You can keep your devices safe through a few simple steps, namely:
- Download Diligence – Be wary about downloading content over the Internet, especially from sites you are not familiar with. Before obtaining files from a new site, use Norton SafeWeb or other site security tools to make sure it is safe.
You can also google the name of the site and see whether other users have reported problems with it.
- Email Examination – As with new sites, be wary of email attachments. Never open an attachment from an address you are not familiar with. Even messages that seem to come from people you know could have been sent by dummy accounts, so contact friends and family over an independent channel before you open their attachments.
- Bolstered Browsers – Avoid browsers that have been flagged as vulnerable. Mozilla Firefox and Google Chrome are generally considered among the safest tools for web access.
- Shore Up Your Systems – The more recent your operating system, the stronger its security measures will be. Regularly updating your Mac will thus bolster it against ransomware.

In addition to preventing ransomware attacks, you can limit their impact, if they do happen, through redundancy. By making copies of key files and programs on separate devices, you allow yourself to bounce back quickly if an attack does succeed. For more information on protecting your Washington, DC or Baltimore business from ransomware and other threats, contact Hammett Technologies at (443) 216-9999 or email@example.com today.
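One concrete way to practice the download diligence described above is to verify a file's SHA-256 checksum against the value the publisher lists on its download page; a tampered installer (like the one in the Transmission incident) would produce a different digest. A small sketch; the file name and expected digest in the comments are placeholders, not real values:

```python
import hashlib

def sha256_of_file(path: str) -> str:
    """Compute the SHA-256 hex digest of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_download(path: str, expected_hex: str) -> bool:
    """Return True only if the file matches the publisher's checksum."""
    return sha256_of_file(path) == expected_hex.lower()

# Example usage (placeholder values):
#   expected = "d7a8fbb3..."   # digest published by the vendor
#   assert verify_download("Transmission.dmg", expected)
```

This does not replace code signing or Gatekeeper checks, but it is a cheap extra layer whenever a project publishes checksums.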
What Is PXE Boot and How Does It Work? PXE Boot Components. Advantages of Using PXE Boot.

Short for "Pre-boot Execution Environment", PXE boot is an important part of data center infrastructure and can be implemented through open-source software or vendor-supported products. It allows automated provisioning of servers or workstations over a network. Anyone working on infrastructure deployment of bare metal servers, embedded devices, and IoT devices can benefit from a more in-depth understanding of PXE. In its simplest form, PXE is the process of having your device boot from its network card. Relevant instructions are required to boot the device into the PXE environment, and the most common way to provide them is to configure your Dynamic Host Configuration Protocol (DHCP) server to store and serve this information.

PXE Boot Components

When discussing PXE, we need to address three components:

#1. PXE-capable Network Interface Controller (NIC)

Keep in mind that not all NICs are the same. Many consumer-grade network cards don't have PXE capabilities, although this is rapidly changing as advances make it simpler to include extra features in cheaper devices. In data-center-grade servers, PXE-capable NICs are standard.

#2. The Dynamic Host Configuration Protocol (DHCP)

DHCP allows the client to receive an IP address to gain access to the network servers. There are two types of actors in DHCP: the DHCP server and the DHCP client. A DHCP server provides clients with an IP network configuration, while a DHCP client runs on computers that join the network and request a configuration.

#3. A Trivial File Transfer Protocol (TFTP) Server

TFTP is a simple UDP-based protocol for receiving or sending a file, and it is easily implemented in firmware environments where resources are limited. TFTP has no directory listing, authentication, or authorization, so you must know the exact path of the file you intend to download.

So, how does the PXE boot work?
I will try to explain the PXE workflow as clearly as possible. First, the PXE process allows the client to notify the server that it uses PXE. Second, if the server supports PXE, it sends the client a list of boot servers with the available operating systems. The client finds the boot server it needs and receives the name of the file to download. The client then downloads the file using the Trivial File Transfer Protocol (TFTP) and executes it, loading the operating system. Finally, if the server is not equipped with PXE, it ignores the PXE code, preventing disruption to normal DHCP and Bootstrap Protocol (BOOTP) operations.

Advantages of Using PXE Boot

Many organizations face major issues that can be solved with the help of PXE boot, which can automate the provisioning or installation of operating systems on numerous machines. Windows and Linux already have mechanisms to automate installation. Normally, you create a seed file or configuration that provides answers to the questions asked by the OS installer; for Linux, examples are Debian preseed or Red Hat Kickstart files. However, you still need access to the installation media on CD/DVD-ROM or a USB drive, and having a human deal with the USB drive is time-consuming and prone to error. The benefits of using PXE boot are numerous:
- Fewer technical installers;
- Less time spent per server;
- Fewer errors due to automation;
- Centralized and easy-to-update OS installation tools.

PXE is a standards-based approach to solving the problem of getting the OS onto the system without a human putting media (USB, CD/DVD-ROM) into it. It does this by bootstrapping the machine over the network. When you want to maintain or install the system on multiple computers without inserting a CD or USB into each one, PXE boot can install the system for you.
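To make the TFTP download step concrete: a TFTP read request (RRQ) is a tiny UDP payload defined in RFC 1350, consisting of opcode 1 followed by a NUL-terminated filename and transfer mode. A sketch of building and parsing one in Python; the filename `pxelinux.0` is just a common PXE bootloader example, not a requirement:

```python
import struct

OP_RRQ = 1  # RFC 1350 opcode for a read request

def build_rrq(filename: str, mode: str = "octet") -> bytes:
    """Build a TFTP read-request packet: | 01 | filename | 0 | mode | 0 |."""
    return (struct.pack("!H", OP_RRQ)
            + filename.encode() + b"\x00"
            + mode.encode() + b"\x00")

def parse_rrq(packet: bytes):
    """Recover (opcode, filename, mode) from an RRQ packet."""
    (opcode,) = struct.unpack("!H", packet[:2])
    filename, mode, _ = packet[2:].split(b"\x00")
    return opcode, filename.decode(), mode.decode()

pkt = build_rrq("pxelinux.0")
print(pkt)             # b'\x00\x01pxelinux.0\x00octet\x00'
print(parse_rrq(pkt))  # (1, 'pxelinux.0', 'octet')
```

A real PXE client would send this datagram to UDP port 69 on the TFTP server named in the DHCP response, then receive the file in 512-byte DATA blocks, which is why you must know the exact path of the boot file in advance.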
If your computer does not start properly and cannot be started by loading an image file on the internal hard drive, you can also try the PXE boot. If the client does not have a CD-ROM drive or USB port available or does not have a CD or USB image, then you can try the PXE boot to start multiple client computers in the LAN. Additionally, with PXE, the client machine doesn’t need an operating system or even a hard disk, it can be rebooted in the event of hardware or software failure, allowing the administrator to diagnose and fix the problem, and, ultimately, new types of computers can easily be added to the network since PXE is vendor-independent. Wrapping It Up… As explained above, some of the benefits of PXE are that you can boot a machine without any attached storage device, which makes them more efficient and also costs less. You also wouldn’t need to carry a USB device around with all the recovery utilities you need, you can just boot a malfunctioning computer from the network and diagnose it by using a system rescue toolkit or a backup recovery system. Booting from the network is much more complicated to set up than just writing a USB stick with your favorite recovery system, but it means you generally only need to set it up once for your entire network and it can be reused over and over again without wondering if the USB stick or SD-card is faulty while doing recovery. All in all, PXE is a very powerful tool for automating and managing the provisioning and updates of data center infrastructure, embedded devices, IoT devices, and even workstations. I hope I’ve provided you with a clear understanding of PXE basics. What are your thoughts on the matter? I would love to read your comments in the section below!
Businesses are flooded with constantly changing thresholds brought on by seasonality, special promotions and changes in consumer habits. Manual monitoring with static thresholds can’t account for events that do not occur in a regularly timed pattern. That’s why historical context of influencing events is critical in preventing false positives, wasted resources and disappointed customers.

Typically, when forecasting a metric’s future values, its past values are used to learn patterns of behavior. However, in many cases, it is not possible to produce accurate forecasts without additional knowledge about what influences a metric’s behavior. Specifically, the observed patterns in the metric are often influenced by special events that are known to occur at specific times, both in the past and the future.

What is an Event?

An event is a special occurrence in time that is known in advance of it happening. An event recurs in the lifecycle of a metric but not necessarily on a regular basis or cadence. This is better explained with a few examples:

- Holidays that aren’t a fixed date but rather are dependent on a time of the month or year. Consider the U.S. observation of Thanksgiving, which is always the fourth Thursday of November; the Jewish observation of Hanukkah, which may occur at any time from late November to late December; or the Muslim observation of Eid al-Fitr, whose date of celebration is dependent on the cycle of the moon.
- Sales and marketing events that are often tied to a season or special celebration. Examples would be the Black Friday shopping day(s) that follow the U.S. Thanksgiving holiday; Back to School sales; post-holiday clearance sales; or Amazon’s annual “Prime Day” sale.
- Sporting events, which may be local or regional. For example, both the Super Bowl and the World Cup have a big effect on sales of beer and snack foods, attendance at sports bars, and sales of team merchandise.
In a more local case, regional sporting events can have a similar effect.

- Other examples of events include weather (blizzard days, heavy rains, hurricanes, etc.); financial events (earnings releases, bank holidays, changes in interest rates, etc.); and technical events (deployment of a new software version, hardware upgrades, etc.).

These events are generally known (or at least can be anticipated) in advance. Even weather predictions have become accurate enough to know when significant weather events are going to happen in a particular locale. In the context of a machine learning (ML) based business monitoring system, events are useful in two ways:

- Understanding why an incident occurred for the purpose of root cause analysis (e.g., an increase in app crashes right after a new version release indicates that a bug in the new version caused the errors).
- Improving the accuracy of the ML-based monitoring. By taking into account the expected influence of an event on the metrics being monitored, you can avoid false positives, reduce false negatives, and improve forecasting accuracy.

What is an Influencing Event?

An influencing event is an event that has a predictable and measurable impact on a metric’s behavior when it occurs. For example, Cyber Monday is an influencing event on metrics that measure revenues for many e-commerce companies in the U.S. The impact of that event is almost universally a dramatic increase in revenues during the event. If a machine learning business monitoring system does not consider the influence of such an event on the revenue metrics of an e-commerce company, the spike in revenues would appear to be an anomaly, and a false positive alert might be sent. On the other hand, when the influence of the event is accounted for, it can help identify real anomalies.
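To make the false-positive problem concrete, here is a minimal sketch (not the vendor's actual algorithm) of anomaly detection with and without an event indicator. The revenue figures and z-score threshold are invented purely for illustration:

```python
import statistics

def detect_anomalies(values, is_event, event_aware, z_threshold=2.0):
    """Flag points whose z-score exceeds the threshold.

    If event_aware is True, each point is compared against a baseline
    learned separately for event days and non-event days.
    """
    flags = []
    for i, v in enumerate(values):
        if event_aware:
            # Baseline uses only days of the same kind (event vs. normal).
            peers = [x for x, e in zip(values, is_event) if e == is_event[i]]
        else:
            peers = values
        mean = statistics.mean(peers)
        stdev = statistics.pstdev(peers) or 1.0
        flags.append(abs(v - mean) / stdev > z_threshold)
    return flags

# Nine normal days of ~100 revenue, plus two Cyber-Monday-like event days.
revenue  = [100, 102, 98, 101, 99, 103, 97, 100, 102, 395, 405]
is_event = [False] * 9 + [True, True]

naive = detect_anomalies(revenue, is_event, event_aware=False)
aware = detect_anomalies(revenue, is_event, event_aware=True)

print(naive[-1])  # True — the expected event spike is wrongly flagged as an anomaly
print(aware[-1])  # False — with event context, the spike matches the learned pattern
```

The same event-aware baseline also catches the real anomalies the article describes: an event day with revenue far below the other event days would now stand out instead of being hidden inside one global average.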
For example, if this year’s revenue on Cyber Monday is lower than the expectation learned by the system, an alert highlighting a drop in expected revenues will be sent, ideally in real time, so remediation actions can be taken to bring it back to the expected levels of revenue. An influencing event can impact the baseline of a metric before, during and after the event takes place. To understand the logic of that statement, consider this example: Christmas is an annual event. Depending on the metrics you are monitoring, this event has multiple effects, both good and bad, that happen before Christmas Day, on Christmas Day, and after Christmas Day has passed.

- For merchants measuring revenue from sales, the days before Christmas are spike days. Christmas Day itself is a slow sales day for those merchants who are open for business. The days immediately following Christmas Day can see spiking sales again as shoppers look for post-holiday bargains.
- For rideshare companies, there can be an uptick in riders before the holiday as people socialize and get out and about, but Christmas Day is a drop day as people tend to stay at home that day.

Sample Patterns in a Real Business Scenario

There is a computer gaming company that occasionally runs events (called “quests”) to stimulate interest in the game. Quests happen multiple times per month at irregular intervals and each quest spans several days. For example, a quest might run for five days and be done, the next one starts ten days later, and the one after that starts 15 days after the second quest ends. An object of the game is to collect “coins” and the total coin count is one of the metrics the company measures. During a quest, the coin count has an interesting pattern: high on the first few days of the quest, then a smaller spike, and then returning to a normal steady pattern at the end of the quest.
It looks something like this: The gaming company wants to monitor the coin counts during a quest to learn if there is anything unusual happening with the game. For example, if coin counts are down considerably from the normal usage pattern, it could mean that gamers are having trouble logging into the game. That would certainly be something to look into and remedy as soon as possible. This is why anomaly detection and alerting are so important. In the scheme of machine learning and anomaly detection, these quests are influencing events that occur at irregular times. We can’t apply a seasonality model to the machine learning process because the quests aren’t seasonal; nor are they completely random. They are irregular, but important, nonetheless. If the machine learning took place without consideration for the influencing events, the forecast of the coin counts compared to the actual coin counts would look something like the graph below. The shaded area represents the forecasted (baseline) range and the solid line is the actual data. It’s a very inaccurate forecast, to say the least. There are many false positives in the timeline, and if a real issue with the game occurred during this period, it would not be detected as an anomaly. However, if the ML model were told when a quest is going to start – after all, quests are scheduled, not impromptu – it could learn the pattern of the event, and the baseline could take that pattern into account each time there is another quest. The resulting forecast versus actual looks something like this: You can see the forecast is much more accurate, even with a very complicated pattern. Take note of the small square marker (circled in red) at the bottom left of the graph. This is the indicator that tells the model a quest is starting. When this marker is sent before the start of a quest, the forecast algorithm understands how to treat the data coming in because it has seen this pattern before.
In mathematical terms, the influencing event is called a regressor, and it’s critical to incorporate it into the forecast algorithm to ensure better accuracy. The example below shows a real issue that happened during a quest. Because the baseline was accurate, the drop in activity was detected and the issue was quickly fixed to get the game up and running as normal.

Challenges of Learning the Impact of Influencing Events

You can see just how important it is for accuracy that a model learn the impact of an influencing event. This is far easier said than done. There are some relatively difficult challenges in having the mathematical model accurately and automatically learn the impact of influencing events. The three main challenges are:

1. Automatically identifying whether a group of historical events has any influence on a given time series.

To an ML model, it’s not inherently apparent if a group of events – like Black Friday occurring over a period of years, or the gaming company’s quests over the span of a year – has an actual impact on the metrics. The first part of the challenge is to figure out if that group of events does have an impact. The second part is, if the group of events is shown to have an influence, how can occurrences of the events be automatically identified without human intervention? For example, the gaming company is measuring many other metrics besides the coin count, so how can you tell if it is indeed the quest that has an influence on the coin count and not something else? And how can this be recognized automatically?

2. If the group does have an influence, identifying accurately and robustly the pattern of the influence, both before and after the event date.

So you’ve determined that the group of events has an influence on the metric’s pattern. An event has a pattern, and the challenge is to learn this pattern robustly and accurately.
There are two main factors making this hard:

- Separating the event effect from the normal pattern: The pattern of the event needs to be separated from the normal pattern of the metric occurring at the same time – e.g., a metric measuring viewership during an event like the Super Bowl is composed of the normal viewership pattern and the added viewership due to the Super Bowl itself. To accurately and robustly learn the pattern of influence of the event, techniques such as blind source separation are required – and the assumptions behind those techniques require validation during the learning process.
- Causal and non-causal effects: A complication is that sometimes there is an impact even before the event starts. You can’t assume the impact of an event will begin only when the event itself does.

3. Given a set of many events, automatically grouping them into multiple groups, where each group has a similar influence pattern and there is a clear mechanism for identifying, from the event description, to which group an event belongs.

All sorts of groups of events can have an influence on a particular metric. Sometimes different events can have an almost identical pattern. If these events can be grouped together, learning the pattern and its impact can be faster and easier. Say you are measuring revenue for a rideshare company. This company sees spikes on New Year’s Eve in all major cities and on St. Patrick’s Day in New York and Boston because people like to go out to celebrate these days. The patterns of ridership for these events are almost identical. When you have lots of these types of events with similar patterns, you want to group them because that makes learning about them more accurate. What’s more, the groupings provide more data samples, so less historical data is needed to learn the pattern.
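One simple way to think about the grouping challenge is to compare the impact profiles of events directly. This toy sketch (an illustration, not the production approach) groups events whose normalized impact patterns correlate strongly; the event names and daily lift numbers are hypothetical:

```python
def pearson(a, b):
    """Pearson correlation between two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a) ** 0.5
    vb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (va * vb)

def group_events(patterns, threshold=0.95):
    """Greedily group events whose impact patterns correlate strongly."""
    groups = []
    for name, pattern in patterns.items():
        for group in groups:
            rep = patterns[group[0]]  # compare against the group's first member
            if pearson(pattern, rep) >= threshold:
                group.append(name)
                break
        else:
            groups.append([name])
    return groups

# Hypothetical ridership lift around each event (day -1, day 0, day +1).
patterns = {
    "new_years_eve":   [1.2, 3.0, 0.5],
    "st_patricks_day": [1.1, 2.9, 0.6],  # nearly identical shape
    "christmas_day":   [2.5, 0.4, 1.8],  # different shape: a drop on the day
}

print(group_events(patterns))
# → [['new_years_eve', 'st_patricks_day'], ['christmas_day']]
```

With the two holiday spikes grouped, a model would have twice as many samples of that shared pattern, echoing the article's point that grouping reduces the amount of history needed.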
Despite the challenges highlighted above, being able to automatically include influencing events in the machine learning model is critically important for two key reasons. First, it reduces the generation of false positive alerts, and second, it enables detecting when a special event you are running is not behaving as expected. Consider an e-commerce company whose Black Friday sale event has lower sales than expected. By incorporating this influencing event in the ML model, the merchant can see that sales are off for some reason and can begin investigating the cause before revenues are significantly impacted.
Intellectual Property (IP) Theft

Intellectual property (IP) theft is a crime that impacts copyrights, trademarks, patents, and trade secret material. This crime occurs when a person knowingly takes, uses, misappropriates, or otherwise steals property that is protected under intellectual property laws. In the United States, most intellectual property theft is prosecuted as a federal crime. The penalties for intellectual property theft can be serious, including hefty fines and imprisonment.

Definitions and Examples Related to Intellectual Property Theft

Copyright

Copyrights protect original works as soon as a creator transposes the work from their mind to a tangible form of expression (e.g., writing, recording). Examples of copyrighted material include:

- Artistic works, such as a novel, poem, photograph, movie, lyrics to a song, musical composition in the form of sheet music, sound recording, painting, or plan for a building
- Technology, such as a software application, video game, code for a programming tool, or code for a website
- Business materials, such as a database, marketing plan or business plan, numbers and calculations in a financial forecast, manual for the operation of equipment, technical specifications for a device, or proposals

Trademarks

Trademarks are unique symbols or word(s) used to represent an organization or its products and services. Once registered, that same symbol or series of words cannot be used by any other organization. The three types of trademarks are:

1. Arbitrary and fanciful trademarks. These are not connected to products or organizations in any way other than being created for that purpose. Fanciful trademarks are terms created for the purpose of creating a trademark, such as Exxon, Kodak or Lexus. Arbitrary marks are created using existing words and associated designs, such as Apple, Amazon, or Dove.

2. Suggestive trademarks. These are designed to trigger consumers’ imagination to conjure and associate a characteristic with a product, such as Coppertone, KitchenAid, or Airbus.

3. Descriptive trademarks. These directly describe the product or service and, as such, are generally considered ineligible for trademarking unless the description becomes associated with the company that produces the product and takes on a secondary meaning, for example, American Airlines, Western Digital, or International Business Machines.

Patents

A patent is the grant of exclusive rights to use an innovation and exclude others from using it to produce products or services. There are three types of patents: utility, design, and plant patents.

Trade Secrets

Trade secrets are confidential information that is used to create products or services. Examples include patterns, formulas, programs, methods, processes, designs, or codes.

The FBI and Intellectual Property Theft

With regard to intellectual property rights, the FBI’s objective is to disrupt and shut down individuals and organizations (i.e., international and domestic) that develop or sell counterfeit and pirated goods. The FBI also works to stop anyone who steals, distributes, or makes money from the theft of intellectual property and trade secrets. The FBI has a group dedicated to intellectual property theft, the Intellectual Property Rights (IPR) program. The IPR program is comprised of special agents and analysts who have extensive experience in various related areas, including intellectual property, money laundering, organized crime, cybercrime, national security, undercover operations, and data analytics. The IPR program investigates theft of trade secrets, counterfeit products, and copyright and/or trademark infringement cases. The FBI’s IPR program is headquartered at the IPR Center, where FBI program managers have oversight over the FBI’s dedicated IPR field agents. These agents’ focus includes counterfeit goods that pose a health and safety threat and the theft of trade secrets.
In addition, the IPR Center brings together domestic and international law enforcement and government partners. Through the IPR Center, partners are able to engage with each other to leverage their various resources to interdict, investigate, coordinate referrals, and conduct training. The IPR Center team also engages with non-government organizations to stay up to speed on intellectual property theft issues, hosting events across the country and around the world.

Common Intellectual Property Theft Scenarios

Intellectual property theft results from both external threats and malicious insiders. While these attacks can be both opportunistic and targeted, intellectual property theft is primarily the result of highly-targeted attacks where the perpetrators are seeking specific proprietary information. Both foreign and domestic actors lead these attacks.

Attack Vectors for Intellectual Property Theft

One of the most common causes of intellectual property loss is human error. Users can accidentally expose intellectual property through technology, such as:

- Collaboration tools (e.g., Asana, Slack)
- Email, when IP is exposed because:
  - Email is sent to the wrong address
  - An attachment contains hidden content (e.g., an Excel tab)
  - IP is forwarded to a personal email account
  - The recipient of IP forwards the email
- File sharing (e.g., Dropbox, Google Drive)
- SMS or instant messaging apps

One of the more dangerous intellectual property threats comes from malicious insiders. These people use their insider access to either exfiltrate intellectual property or help an external person or group gain access. Malicious software is a common attack vector used to gain access to intellectual property. From trojans to worms, malware remains an effective point of entry for intellectual property theft. These attacks target both organizations’ systems as well as those of service providers. An effective approach used for intellectual property theft is privilege abuse.
When a user intentionally or accidentally misuses legitimate privileges that they have been granted, it is known as privilege abuse. Despite these privileges being legitimately granted, they can be illicitly used to access intellectual property.

Best Practices for Preventing Intellectual Property Theft

- Define how third-party systems (e.g., business partners, suppliers, customers) must handle and secure intellectual property that is shared with them directly or through applications.
- Educate employees about how to properly use personal devices when handling intellectual property.
- Educate employees about intellectual property theft and how to defend against it.
- Have an accurate inventory of intellectual property to help employees understand what needs to be protected, and keep this list up to date by regularly checking with those who create IP to ensure that new material is tracked.
- Identify intellectual property by using watermarks, banners, or labels that make it clear that the information is proprietary.
- Know where intellectual property is stored and used, paying close attention to printers, copiers, and scanners, which are fertile ground for intellectual property theft.
- Restrict access to and monitor cloud applications and file-sharing services that are company-managed or part of shadow IT.
- Secure intellectual property physically and digitally, using data security tools, systems, and procedures as well as locked doors, surveillance, and other techniques.
- Think like a spy; consider ways that spies would be able to gain access to intellectual property, so additional security measures can be put in place.

Preventing Intellectual Property Theft by Malicious Insiders

- Foster a positive culture: Create an environment where employees are satisfied with their jobs and work environment.
- Know employees: Conduct thorough candidate screenings to determine if an applicant has a criminal record or financial issues that could drive them to commit or facilitate intellectual property theft.
- Provide a mechanism for anonymous reporting: Empower employees to share any indications that there might be intellectual property theft risks or incidents.
- Share information about policies: Have policies for enforcing intellectual property theft defense that are clearly defined and communicated to all employees.

How to Respond to Intellectual Property Theft

There are several ways to respond to intellectual property theft. In cases where it is believed that the incident was accidental or not malicious, the matter can often be handled without taking legal action. Following are the main steps that are often taken in response to intellectual property theft or misuse.

A cease-and-desist letter is used in cases where it is believed that the incident is either accidental or not malicious. The letter conveys that intellectual property has been misused or taken without permission and that this must stop or legal action will be taken. Cease-and-desist letters are not private or confidential. They usually include these items, at a minimum:

- Information about the intellectual property in question
- Details outlining the type of infringement (e.g., patent, copyright, trademark)
- What actions are expected to solve the issue (e.g., remove material from a website, stop using trademarked material)
- Time period that the offender has to respond
- Next steps that will be taken if the matter is not addressed

Mediation (a.k.a. alternative dispute resolution or ADR) is commonly used in lieu of taking an intellectual property theft dispute to court. This approach can be relatively quick—scheduled within a few weeks and finished in anywhere from a few hours to several days. In addition, mediation is private and completely confidential.
Mediation also gives the parties a great deal of control over the process and outcome, unlike courts, where decisions lie in the hands of a judge or jury. This is often the approach taken when parties to contracts or relationships are involved in intellectual property theft disputes. Examples include trademark licenses, franchises, technology contracts, multimedia contracts, distribution contracts, joint ventures, research and development contracts, employment contracts, mergers and acquisitions where intellectual property assets assume importance, sports marketing agreements, and publishing, music, and film contracts. In these cases, mediation offers several advantages, including helping parties:

- Expedite settlement
- Maintain confidentiality
- Preserve their relationship
- Prevent damage to brands and reputations
- Retain control over the dispute settlement process

The type of intellectual property theft will dictate whether a case is handled as a civil case, a criminal complaint, or both. Copyright, trademark, and patent infringement can all be handled in civil court. Criminal cases are pursued for intellectual property theft that is clearly malicious and for gain, such as stealing trade secrets. In the event that legal action is taken, victims of intellectual property theft can receive varying levels of compensation, including the following:

- An injunction to stop the use of the intellectual property, including removing a product from the market
- Payment for losses
- All or a share of the offender’s profits from the use of the intellectual property
- Attorneys’ fees

Criminal penalties can be severe, including fines ranging from $250,000 to $5 million and/or between three and twenty years in prison. In addition, stolen property, documents, and material can be seized. Perpetrators of intellectual property theft also risk the loss or suspension of operating licenses—personal and business, including disbarment or loss of a franchise.
Recourse Under the Digital Millennium Copyright Act (DMCA)

If the intellectual property theft involves material being posted on the internet, additional avenues for remediation are available under the Digital Millennium Copyright Act (DMCA). In these cases, takedown notices can be sent directly to the offender’s web hosting and other service providers, such as search engines and ad networks that promote the website. The content of the message is similar to a cease-and-desist letter. However, it should also include the URL where the intellectual property is posted and the expected response from the service provider, such as removing the material from circulation, removing it from the index, or taking the site offline.

Suggestions for identifying service providers:

- Search WHOIS to find the name of the domain’s registrant, who will often also be the owner
- Send a DMCA notice to the service provider if the owner’s information is listed as private
- Pursue a federal court order, which is made possible under the DMCA, if the service provider will not facilitate contact with the offender

Patent Infringement: Initial Next Steps

In cases where patented intellectual property has overlaps causing infringement, a Request for Reexamination can be submitted to the United States Patent and Trademark Office (USPTO). The request for reexamination has to be based on the belief that the patent was wrongfully granted because the intellectual property was already covered in another patent.

Protect the Most Sensitive Data from Theft

All sensitive data must be protected, as a breach can lead to fines, reputational damage, and loss of revenue. Intellectual property represents what is arguably the most valuable of all sensitive data. A strong defense against intellectual property theft requires the involvement not just of IT and security teams, but the entire organization—from staff to executives. The frequency and costs of intellectual property theft continue to rise.
Across all organizations, it is imperative to raise awareness, implement controls, and engage all users in defensive efforts. Egnyte has experts ready to answer your questions. For more than a decade, Egnyte has helped more than 16,000 customers, with millions of users worldwide. Last Updated: 21st February, 2022
If you own a Mac or PC, odds are you’ve used your laptop’s Thunderbolt port to connect another device to your machine. Thunderbolt ports are convenient for charging other devices using your laptop or desktop’s battery power. However, a new flaw called Thunderclap allows attackers to steal sensitive information such as passwords, encryption keys, and financial information, or to run detrimental code on the system if a malicious device is plugged into a machine’s port while it’s running. So, how can attackers exploit this flaw? Thunderbolt accessories are granted direct memory access (DMA), which is a method of transferring data from a computer’s random-access memory (RAM) to another part of the computer without it needing to pass through the central processing unit (CPU). DMA can save processing time and is a more efficient way to move data from the computer’s memory to other devices. However, attackers with physical access to the computer can take advantage of DMA by running arbitrary code on the device plugged into the Thunderbolt port. This allows criminals to steal sensitive data from the computer. Mind you, Thunderclap vulnerabilities also provide cybercriminals with direct and unlimited access to the machine’s memory, allowing for greater malicious activity. Thunderclap-based attacks can be carried out with either specially built malicious peripheral devices or common devices such as projectors or chargers that have been altered to automatically attack the host they are connected to. What’s more, they can compromise a vulnerable computer in just a matter of seconds. Researchers who discovered this vulnerability informed manufacturers and fixes have been deployed, but it’s always good to take extra precautions. So, here are some ways users can defend themselves against these flaws:

- Disable the Thunderbolt interface on your computer.
To remove Thunderbolt accessibility on a Mac, go to the Network Preference panel, click “OK” on the New Interface Detected dialog, and select “Thunderbolt Bridge” from the sidebar. Click the [-] button to delete the option as a networking interface and choose “Apply.” PCs often allow users to disable Thunderbolt in BIOS or UEFI firmware settings, which connect a computer’s firmware to its operating system. - Don’t leave your computer unattended. Because this flaw requires a cybercriminal to have physical access to your device, make sure you keep a close eye on your laptop or PC to ensure no one can plug anything into your machine without permission. - Don’t borrow chargers or use publicly available charging stations. Public chargers may have been maliciously altered without your knowledge, so always use your own computer accessories. Follow us to stay updated on all things McAfee and on top of the latest consumer and mobile security threats.
Only five years ago, a mandate to overhaul the infrastructure of an entire branch of the government would have sent shivers through the CIOs of government agencies. The cost of such a mandate would be significant, with new technology investments, maintenance costs, licensing fees, consulting costs, and more. The same mandate would have had proprietary software vendors and sales teams smiling with the potential to make large revenues for their companies. Today, there is a very different technology landscape, with the rise of cloud technology offering enterprise-class functionality at a fraction of the cost. In the public sector, no one questions that freedom of information increases accountability, informed public participation and greater collaboration. But what happens if one government agency uses a different format to access information than another, or if a citizen cannot access government information? The result can lead to uninformed decision making by government officials and a lack of participation by citizens. To have the desired effect, freedom of information is contingent on being able to access it in a format that allows everyone to read and collaborate around it. In order to implement the ideas of transparency, participation and collaboration, there are a number of guidelines that agencies must adhere to with regard to managing and accessing information. Existing regulations impact the way people share knowledge, content and data. As a result, it is essential that any plans take into account guidelines related to how documents and records, including electronic information, are managed, stored and accessed. Transparency involves creating publicly available websites that make accessible high-value data and content that was not previously available in a downloadable format. The first step is to have in place departmental activities, staffing, an organisational structure, and a process for analysing and responding to congressional requests for information.
An open planning and rollout process needs to be followed when implementing a cloud government strategy, specifying how transparency, integration of public participation, and collaboration will be improved. Progress should then be tracked and monitored. This, together with aggregate statistics and visualisations, will provide a clear assessment of the state of open government as well as its progress over time. By having a single repository that syncs between confidential and collaborative cloud environments for both document and records management, government agencies can take the complexity out of managing separate systems while at the same time decreasing technology costs. The goal of an open government is to create a more transparent and collaborative environment between its agencies, departments, officials and citizens. In order to achieve this, the government needs to rethink how it communicates and the tools it uses to do so. By embracing cloud and social technologies, government information can be made more accessible, equipping citizens with new methods of participation and interaction. As we shift towards embracing these open government ideals, we must remind ourselves that the need for content hasn't changed. Thanks to the new challenges brought on by mobile and cloud technologies, every organisation is looking for ways to make content more easily accessible and collaboration simple to manage. Now this can all be done with the touch of an app. Luckily, there is a new paradigm combining some of the best practices of on-premise open source and cloud technology. This hybrid model can provide enterprise-class functionality and security that even passes the government's strict stamp of approval. John Newton is CTO and chairman of business critical document management specialist Alfresco.
His career in content management dates back many years - in 1990, John co-founded, designed and led the development of Documentum, a leader in content management since acquired by EMC.
The below video demonstrates this attack in action: Though the embedded WebKit library used by Messages for OS X executes in an applewebdata:// origin, an attacker can still read arbitrary files using XMLHttpRequest (XHR) GET requests to a file:// URI due to the lack of a same-origin policy (SOP). By abusing XHR to read files, an attacker can upload a victim's chat history and attachments to a remote server as fast as the victim's Internet connection will allow. The only user interaction required for a successful attack is a single click on a URL. Furthermore, if the victim has the ability to forward text messages from their computer (SMS forwarding) enabled, the attacker can also recover any messages sent to or from the victim's iPhone. Want to know all the gritty details? Then keep reading.

Messages for OS X

Messages for OS X depends upon an embedded version of the HTML rendering engine WebKit for much of its user interface. When messages are sent or received, HTML is inserted into the DOM to render the UI and any accompanying attachments/media content. All messages sent through the application are rendered in a DOM; hence, common client-side web vulnerabilities can affect the Messages for OS X application. When testing the Messages for OS X client, arbitrary protocol schemes were found to be automatically converted into links and inserted into the DOM. The example URIs below are all inserted as links into the WebView when messaged: Since the attacker's code is executing in a full WebKit implementation, XMLHttpRequest is available at runtime. One of the most notable differences between an embedded version of WebKit and a web browser like Chrome or Safari is that the embedded WebKit does not implement any same-origin policy (SOP) because it is a desktop application. An attacker can take advantage of this to read files from the local filesystem without violating a same-origin policy by sending XMLHttpRequest GET requests to file:// URIs.
The only requirement is that the attacker must know the full file path. Relative file system paths (e.g., ~/.ssh/id_rsa) cannot be used. The Messages for OS X application DOM can execute the following to read the /etc/passwd file:

    function reqListener() {
      // send this.responseText back to the attacker's server here
    }
    var oReq = new XMLHttpRequest();
    oReq.addEventListener("load", reqListener);
    oReq.open("GET", "file:///etc/passwd");
    oReq.send();

After being converted into a URI payload, the code resembles the below: When clicked in the Messages application, the following prompt appears: Due to the OS X application sandbox, files were only accessible if they were located in ~/Library/Messages/* and some other non-user system directories such as /etc/.

Taking the Messages Database and Attachments

When messages and attachments are received by Messages for OS X, they are saved in this directory: These messages' textual content and other metadata are stored in a SQLite database located at: This database also contains the locations of all attachments on a user's machine. To steal this database, and subsequently all attachments ever received or sent by a victim, a more advanced attack payload becomes necessary.

The Exploit Overview

The following steps need to be carried out before data can be successfully exfiltrated by an attacker:

- Obtain the current user (~ cannot be used)
- Generate a full path with the username for the chat.db file i.e.,
- Use XMLHttpRequest to read the chat.db database and query it for the attachments' file paths
- Upload the database and all attachments using XMLHttpRequest (or HTML5 WebSockets)

We can determine the currently logged-in user by requesting and parsing /Library/Preferences/com.apple.loginwindow.plist. This file is readable from within the OS X application's sandbox, which makes it trivial to construct the full path to the user's chat.db.
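The first two steps of that sequence can be sketched as follows. This is an illustrative reconstruction in Python rather than the attacker's actual JavaScript payload, and the `lastUserName` plist key and the helper name are our assumptions about the file's contents:

```python
import plistlib

# Sketch: derive the full chat.db path from the login-window preferences.
# The "lastUserName" key is an assumption about what the plist stores.
def chat_db_path(plist_bytes: bytes) -> str:
    user = plistlib.loads(plist_bytes)["lastUserName"]
    # Full, absolute path is required because ~ cannot be used in file:// XHRs
    return f"/Users/{user}/Library/Messages/chat.db"
```

Once the absolute path is known, the remaining steps are plain XHR reads against file:// URIs.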
The attacker next performs a little obfuscation to make the URL's appearance more realistic: If a victim clicks the above URI in the Messages for OS X application, the victim's entire chat history and all associated attachments are transferred to the attacker. Exploit code for this is available on our Github.

Client-side content injection flaws are no longer limited to the browser, and haven't been for some time now. While it's certainly helpful for developers to use web technologies such as WebKit or its more dangerous kin nw.js to build desktop applications, these libraries can have adverse effects on application security when used naively. It is clear that embedded web frameworks allow common vulnerabilities, such as cross-site scripting (XSS), to be leveraged in a more devastating way than previously possible. This vulnerability also demonstrates the power a URI has over your machine. To the novice user, a URI is simply a link to a website, but this is only one variant of a much more complicated ecosystem. Much like an email attachment, users should never click a URI unless they trust the source it came from, and even then they should exercise prudence. We'd like to thank Apple for communicating with us throughout this process and for their cooperation in quickly remediating this vulnerability.
“I woke up and turned around to find a gun in my face.” This is something that you may read in a book by Alistair MacLean, but with the threat of IoT looming over us, this is what anybody using IoT will face–except that the gun is invisible, held by a nameless hacker 200 kilometers away. Much has been written about how secure–insecure, rather–IoT is. As with all forms of technology, security takes the back seat because people often concentrate on other features. IoT is no exception, and the threat of Simple Service Discovery Protocol (SSDP) abuse looms large over it.

What is SSDP? Wikipedia says that SSDP has been around since 1999 (ironically, Kevin Ashton coined the term IoT in the same year) and is a network protocol for the advertisement and discovery of network services. SSDP comes enabled by default on IoT devices; they use it to discover each other on a network. This means that SSDP can be used to compromise a network using IoT. Some reports are already highlighting the danger of SSDP–NSFOCUS, in its bi-annual DDoS Threat Report (April 2015), said that more than 7 million SSDP devices globally could be exploited. Arbor Networks monitored 126,000 SSDP reflection attacks in the first quarter of 2015, compared to 83,000 in the last quarter of 2014. In May 2015, Akamai said that SSDP attacks–which were not observed at all in the first half of 2014–accounted for over 20 per cent of the attack vectors in 2015. This shows how hackers are shifting focus. A blog entry on sucuri.net says that, while UDP (User Datagram Protocol) DDoS attacks are common and can be blocked by rule sets, SSDP attacks are rarer, which means that CIOs, CSOs and other tech people will take some time to come to grips with them. But while it is easy to patch servers, with IoT it could be tougher–IoT relies not on one big device but on hundreds, perhaps thousands, of small sensors. Changing them–for security or other reasons–will require firmware upgrades, which will take time to implement.
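For reference, SSDP discovery rides on a small HTTP-over-UDP message sent to the multicast address 239.255.255.250 on port 1900 — the same request/response machinery that reflection attacks abuse. A minimal sketch of the discovery request (the function name is ours, not part of any library):

```python
# Sketch of an SSDP M-SEARCH discovery message — the kind of request that
# reflection/amplification attacks spoof toward victims.
SSDP_ADDR = ("239.255.255.250", 1900)

def build_msearch(search_target: str = "ssdp:all", mx: int = 2) -> bytes:
    lines = [
        "M-SEARCH * HTTP/1.1",
        f"HOST: {SSDP_ADDR[0]}:{SSDP_ADDR[1]}",
        'MAN: "ssdp:discover"',
        f"MX: {mx}",             # seconds a device may wait before replying
        f"ST: {search_target}",  # "ssdp:all" asks every device to answer
        "",
        "",
    ]
    return "\r\n".join(lines).encode("ascii")
```

Because the replies can be far larger than this roughly 100-byte request, and because UDP source addresses are trivially spoofed, every device that answers becomes a traffic amplifier aimed at the spoofed victim.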
The growth of IoT is so fast–Gartner said that around 26 billion IoT objects will be present in 2020, while IDC said that the worldwide market for IoT will touch $7.1 trillion in 2020–that everything is at risk. Consider an example: if your car has IoT sensors that automatically tell the manufacturer about the status of critical components, a hacker may be able to use this channel to hack into the automobile company's secure servers. He could then use some system to turn off petrol flow (that will be IoT enabled too) to a lot of cars, thus (hypothetically) bringing many cars to a standstill. This is no pipe dream–IHS Automotive says that the number of cars connected to the Internet worldwide will touch 152 million in 2020. Any one of them could be a starting point for a hacker. I don’t know about you, but I’m really scared. So scared that I sleep with a Smith and Wesson .44 Magnum revolver under my pillow. I feel safe for now, but it’s just a matter of time before the damned thing gets an IoT sensor…
The new array is spread over more than 6,000 square feet of rooftop covering IBM's India Software Lab in Bangalore. The solar array is capable of providing a 50-kilowatt supply of electricity for up to 330 days a year, for an average of five hours a day. By employing unique high-voltage DC power conditioning methods – and reducing AC-DC conversion losses – the new IBM solution can cut energy consumption of data centers by about 10 percent, and it tailors solar technology for wider use in industrial IT and electronics installations.

In many emerging markets, electrical grids are undependable or non-existent, and companies are forced to rely on expensive diesel generators. That makes it difficult and expensive to deploy a lot of computers, especially in the concentrated way they're used in data centers. Using IBM's solution, a bank, a telecommunications company or a government agency could contemplate setting up a data center that doesn't need the grid. The solution, in effect, creates its own DC mini-grid inside the data center. High-voltage DC computer servers and water-cooling systems are beginning to replace traditional AC-powered servers and air conditioning units in data centers. IBM's Bangalore array is the first move to blend solar power, water cooling and power conditioning into a "snap-together" package suitable to run massive configurations of electronic equipment. "The technology behind solar power has been around for many years, but until now, no one has engineered it for efficient use in IT," said Rod Adkins, senior vice president, IBM Systems & Technology Group. "We've designed a solar solution to bring a new source of clean, reliable and efficient power to energy-intensive, industrial-scale electronics." IBM plans for the Bangalore solar-power system to connect directly into the data center's water-cooling and high-voltage DC systems.
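Taken together, the quoted figures imply a rough annual energy yield for the array. A quick back-of-the-envelope check (our arithmetic, not IBM's):

```python
# Rough annual yield implied by the quoted figures:
# a 50 kW supply for about 5 hours a day, up to 330 days a year.
capacity_kw = 50
hours_per_day = 5
days_per_year = 330

annual_kwh = capacity_kw * hours_per_day * days_per_year
print(annual_kwh)  # 82500 kWh, i.e. about 82.5 MWh per year
```

That 82.5 MWh figure is an upper bound, since it assumes the array hits full rated output for every one of those hours.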
The integrated solution can provide a compute power of 25 to 30 teraflops using an IBM Power Systems server on a 50kW solar power supply. "This solar deployment, currently powering almost 20 percent of our own data center energy requirements, is the latest in the investments made at the India lab to design an efficient and smarter data center," said Dr. Ponani Gopalakrishnan, VP, IBM India Software Lab. "Ready access to renewable energy in emerging markets presents significant opportunities for IBM to increase efficiencies, improve productivity and drive innovation for businesses around the world." IBM plans to make the new solar-power technology available to clients.
What is behind the Popularity of Digital Financial Services?

The fintech business, or digital financial services, is altering our lifestyle and economy to be more productive.

Fremont, CA: Digital financial inclusion is a broad topic that highlights neglected communities' digital access to conventional financial services. Digital financial inclusion is now possible even in the most distant parts of the globe thanks to the digital transformation of financial services, and previously intractable economic issues can now be addressed in the age of cloud computing and next-generation high-speed broadband connectivity. Digital financial inclusion paves the path for easy delivery of advantages, whether through government support or funds transferred from foreign bank accounts. The following are the three essential components of digital financial inclusion:

- Digital Transactional Platforms: These platforms acquire and organize user data digitally while maintaining the highest degree of confidentiality.
- Retail Agents: Individuals who have access to services for transmitting funds and for converting a fund's digital form into actual currency.
- Devices: Electronic gadgets, such as mobiles and portable computers or laptops, that can access the system and its data remotely with no problems.

Rural society would benefit immensely from digital financial inclusion, since it gives simple access to financial transactions.

Digital Finance Literacy

Digital financial literacy refers to the capacity to use electronic devices and technologies to conduct financial transactions to the best of one's ability. The ability to access digital resources for financial transactions is entirely subjective. The two most important considerations in using digital money tools are safety and wellbeing. The cashless economy requires digital finance knowledge.
Digital Financial Services

Digital finance encompasses various financial services provided via digital channels such as point-of-sale terminals, ATMs and cash deposit machines, all of which are interconnected via the internet. Financial technology, or FinTech, is the term that describes digital financial services. The digital era has helped the growth of FinTech companies, making everyone's digital life more manageable. Whether it's paying utility bills or hailing a cab, everything is now online. The FinTech business, or digital financial services, is altering our lifestyle and economy to be more productive.

Here are a few benefits of digital financial services (DFS):

- Easily accessible from any location
- Very simple and effective
- Saves time and effort by removing the need to wait in line to make a fund transfer
- Each transaction is kept up to date in real time
- Simplifies decision-making
- Dependable and flexible in effecting a transaction
- Environmentally friendly
- With the use of technology, one may expand one's customer base
- By using omnichannel digital marketing, a company may expand its reach
Multi-Factor Authentication: The Ins and Outs

Multi-factor authentication, often referred to simply as MFA, is an additional layer of security that, today, we can enable on most of our online accounts. Unfortunately, MFA is often overlooked, whether because users don't know what it is or because they don't know how it works. Today, we'll explore MFA. We'll look into what it is, how it works, and why it is important. Hopefully, by the end of this post, you'll be rushing to enable MFA on all of your online accounts.

What is MFA?

Usually MFA is described as an additional layer of security. Technically speaking, MFA is an access management component that requires users to provide two or more factors of authentication to access an account. Essentially, MFA requires users to provide extra proof of identity besides their username and password. Think of MFA as an extra lock on your door.

Unfortunately, misconceptions about MFA exist and often deter users from taking advantage of the security that it provides. These misconceptions seem to be most prevalent in the business world. Organizations tend to think that rolling out MFA for the entire company is difficult and cumbersome and could be counterproductive. The reality is the opposite. With today's security technologies, enabling MFA for company-wide use can be done quickly and with virtually no interruptions. And once it is done, the benefits that MFA brings to the table far outweigh any inconveniences that a company might face during implementation.

How does MFA work?

MFA works by employing a variety of technologies to authenticate a user who tries to access their online account. With MFA enabled, a user first needs to enter their username and password, but besides these credentials, the user is also asked to authenticate their identity by some other means. Once the two factors are authenticated, the user is granted access to their account.
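As a concrete illustration of that second authentication step, the short numeric codes produced by authenticator apps are typically time-based one-time passwords (TOTP, RFC 6238), built on the HOTP construction of RFC 4226: an HMAC over a counter derived from the current time, truncated to a few decimal digits. A minimal sketch using only the standard library (the helper names are ours):

```python
import hashlib
import hmac
import struct
import time

def hotp(key: bytes, counter: int, digits: int = 6) -> str:
    """HMAC-based one-time password (RFC 4226)."""
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # dynamic truncation: low nibble picks the window
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(key: bytes, step: int = 30, digits: int = 6) -> str:
    """Time-based variant (RFC 6238): the counter is the current 30-second window."""
    return hotp(key, int(time.time()) // step, digits)
```

Because both sides derive the code from a shared secret plus the clock, the server can verify the code without it ever crossing the network ahead of time — which is what makes these codes useless to an attacker once the 30-second window passes.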
One of the most popular MFA factors is the one-time password (OTP): a 4-8 digit code sent to you via SMS, email, or an authenticator app.

Types of MFA factors

A variety of factors can be used by MFA to authenticate the user. Here are some of the most common ones.

What you know (knowledge factor)

The knowledge factor typically consists of a password, PIN, passphrase, or security questions and their answers known only to the rightful account holder. For the knowledge factor to work correctly, the user must enter the correct information requested by the online application.

What you have (possession factor)

Before we had smartphones that we could use for MFA, people carried hardware tokens that generated an OTP to be entered as a factor of authentication. These days, smartphones are the primary physical tools that we use to generate an OTP, usually via authenticator apps. However, physical security keys are also available as a possession factor, and they are often considered one of the most secure options when it comes to MFA types.

What you are (inherence factor)

As an additional factor of authentication, users today can use biometric data. Such data includes the person's fingerprints, facial features, retina scans, voice, and other biometric information. Biometric authentication is gaining more traction by the day, as it is frictionless compared to other types.

Where you are (location factor)

Location-based authentication usually checks the user's IP address and geo-location. Users can whitelist certain geo-locations and block others. If a login attempt comes from an unrecognized location, MFA blocks access to the account, and vice versa.

Why is multi-factor authentication important?

As cybercrime continues to increase in frequency and sophistication, individuals and companies alike look for effective and simple ways to ensure the security of their online accounts. MFA provides just that.
When bad actors are able to steal passwords and usernames, they can easily gain unauthorized access to accounts and network systems. But with MFA enabled, even hackers with the correct login credentials would need to get through an additional layer of security, whether it's an OTP, biometric authentication, or another means of MFA. All of that complicates things for attackers, because a successful hack would require access to a smartphone or other device belonging to the user. Given that up to 80% of data breaches are related to poor password habits in one way or another, MFA can significantly improve your security. Reports also indicate that the volume of brute force attacks grew by 160% starting in May 2021. But that's not all. Security experts and researchers continue to see an increase in phishing attacks, which are usually at the top of the hacking funnel. As cybercrime continues to rise in prominence, MFA is quickly becoming a critical part of everyone's security, whether for an individual or a large organization.

The number one thing that MFA brings to the table is enhanced security. MFA works hand in hand with strong passwords to ensure the best possible security. It makes it harder for devious parties to access accounts or network systems without factored authentication. This applies to both individuals and organizations. However, for businesses, MFA also helps with compliance. Security standards such as the GDPR and HIPAA require the highest level of security to protect sensitive user data, and MFA can be the additional layer of security that helps businesses comply with those standards. Additionally, MFA can boost a company's reputation among its customers if it is offered as an additional layer of security for their accounts. These days, customers trust and appreciate businesses that take precautions to protect them seriously.
MFA types that NordPass Business supports

NordPass Business is a secure and intuitive password manager purpose-built to facilitate smooth and secure password management in a corporate environment, and it comes equipped with three MFA options: an authenticator app, a security key, and backup codes, which can come in handy when you don't have access to the authenticator app or a security key. NordPass supports major authenticator apps such as Google Authenticator, Microsoft Authenticator, and Authy. Besides MFA, NordPass Business is packed with a variety of advanced security and productivity features. Not only does NordPass allow users to create complex and unique passwords on the spot and store them in an encrypted vault, but it can also autofill login credentials and autosave new ones with just a few clicks. Furthermore, with NordPass Business, organizations can regularly check for weak, old, or reused passwords with Password Health and check whether any company-related domains or emails have been compromised in a data leak with the Data Breach Scanner. A business password manager is quickly becoming a ubiquitous tool for any company wishing to succeed in today's digital world. If you are interested in learning more about NordPass Business and how it can fortify corporate security and even bring a business closer to cyber insurance eligibility, do not hesitate to book a demo with our representative.
The Simple Network Management Protocol (SNMP) defines a standard mechanism for remote management and monitoring of devices in an Internet Protocol (IP) network. A device or host that supports SNMP is an SNMP entity. There are two classes of SNMP entities: SNMP managers, which request information and receive unsolicited messages, and SNMP agents, which respond to requests and send unsolicited messages. SNMP entities that support SNMP proxy functions combine the functions of both SNMP manager and SNMP agent.

There are two classes of SNMP operations: solicited operations such as 'get' or 'set', with which the SNMP manager requests or changes the value of a managed object on an SNMP agent; and unsolicited operations such as 'trap' or 'inform' messages, with which the SNMP agent provides an unsolicited notification or alarm message to the SNMP manager. The 'inform' operation is essentially an acknowledged 'trap'.

All SNMP operations are transported over the User Datagram Protocol (UDP). Solicited operations are sent by the SNMP manager to UDP destination port 161 on the agent. Unsolicited operations are sent by the SNMP agent to UDP destination port 162. In IOS, the acknowledgement sent by the SNMP manager to an SNMP agent in reply to an 'inform' operation is sent to a randomly chosen high port that is selected when the SNMP process is started. As IOS implements both an SNMP agent and SNMP proxy functionality, the SNMP process in IOS starts listening for SNMP operations on UDP ports 161 and 162 and on the random UDP port at the time it is initialized. The SNMP process is started either at the time the device boots or when SNMP is configured. The high port is chosen via the following series of steps:

1. A random number between 49152 and 59152 is generated.
2. IOS checks to see if that UDP port is already being used. If not, that UDP port is selected to receive SNMP 'inform' acknowledgements.
3. If the port is already in use, IOS increments the port number by 1 and checks again, incrementing until an open port is found.

Therefore, the port chosen may be higher than 59152, although this is unlikely.

In this vulnerability, the IOS SNMP process incorrectly attempts to process solicited SNMP operations on UDP port 162 and the random UDP port. Upon attempting to process a solicited SNMP operation on one of those ports, the device can experience memory corruption and may reload. SNMPv1 and SNMPv2c solicited operations to the vulnerable ports will perform an authentication check against the SNMP community string, which may be used to mitigate attacks: through the best practices of hard-to-guess community strings and community string ACLs, this vulnerability may be mitigated for both SNMPv1 and SNMPv2c. However, any SNMPv3 solicited operation to the vulnerable ports will reset the device. If configured for SNMP, all affected versions will process SNMP version 1, 2c and 3 operations. This vulnerability was introduced by DDTS CSCeb22276 and has been corrected with DDTS CSCed68575.
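The port-selection procedure described above can be sketched as follows. This is our reconstruction of the documented behaviour, not Cisco source code, and the function name is ours:

```python
import random

def choose_inform_ack_port(ports_in_use, rng=random):
    """Pick the UDP port for SNMP 'inform' acknowledgements per the steps above."""
    port = rng.randint(49152, 59152)  # step 1: random starting point (inclusive)
    while port in ports_in_use:       # step 2: check whether the port is taken
        port += 1                     # step 3: increment until a free port is found
    return port                       # may exceed 59152 if the whole range is busy
```

Note how the linear probing in step 3 is what allows the chosen port to land above 59152 when the entire random range happens to be occupied, matching the advisory's caveat.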
Security is an important issue for many business owners and managers. Many work with their IT department or an IT partner to ensure their network and systems are secure from threats. But what about your email, social media and bank accounts? The weakest link of online accounts is your password; hackers know this, and that's what they target. Do you take steps to ensure that you have a strong password? If you want to minimize the chances of your password being hacked, here are five things you should NOT do. While short passwords may be easier to remember, they are also easier and quicker to hack. The most common way to hack passwords is by using brute force: developing a list of every possible password, then trying this list with a username. Using a mid-range computer like the one many have on their desk, with a normal Internet connection, you can develop a list of all potential passwords astonishingly quickly. For example, it would take 11.9 seconds to generate a list of all possible passwords using five lowercase characters (a, b, c, d, etc.) only. It would take about 2.15 hours to develop a list of all possible passwords using five characters of any type. Once a hacker has the list, they just have to try every potential password with your user name. On the other hand, a list of all 8-character passwords with at least one special character (!, @, %, etc.) and one capital letter would take this computer 2.14 centuries to develop. In other words, the longer the password, the harder it will be to hack. That being said, longer passwords aren't impossible to hack; they just take more time. So most hackers will usually go after the shorter passwords first. The way most hackers work is that they assume users have the same password for different accounts. If they can get one password, it's as simple as looking through that account's information for any related accounts and trying the original password with the other accounts.
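The time estimates above follow directly from the size of the search space divided by the attacker's guess rate. Assuming roughly one million guesses per second for the mid-range machine described (our assumed figure, which reproduces the article's numbers):

```python
# Search-space sizes behind the brute-force time estimates above,
# at an assumed rate of about one million guesses per second.
GUESSES_PER_SECOND = 1_000_000

lowercase_5 = 26 ** 5   # five lowercase letters
printable_5 = 95 ** 5   # five of any printable ASCII character
printable_8 = 95 ** 8   # eight of any printable ASCII character

print(lowercase_5 / GUESSES_PER_SECOND)         # ~11.9 seconds
print(printable_5 / GUESSES_PER_SECOND / 3600)  # ~2.1 hours
print(printable_8 / GUESSES_PER_SECOND / (3600 * 24 * 365 * 100))  # ~2.1 centuries
```

Each extra character multiplies the search space by the alphabet size, which is why length beats cleverness: going from five to eight printable characters inflates the keyspace by a factor of 95³, or roughly 850,000.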
If one of these happens to be your email, where you have kept bank information, you will likely see your bank account drained. It's therefore important to use a different password for every online account. The key here is to try to use passwords that are as different as possible. Don't just add a number or character onto the end of a word. If you have trouble remembering all of your passwords, try using a password manager like LastPass. This article, published last year on ZDNet, highlights the 25 most popular passwords. Notice that more than 15 contain words from the dictionary, and most of the rest are strings of common numbers. To have a secure password, most security experts agree that you should not use words from the dictionary or number combinations that are beside each other (e.g., 1234). Some users have passwords where they replace letters with numbers that look similar, for example: h31lo (hello). Most new password-hacking tools actually have combinations like this built in and will try a normal word, followed by versions with letters replaced by similar numbers. It's best to avoid this. What we mean by this is using information that can be easily found on the Internet. For example, doing a quick search for your name will likely return your email address and social media profiles. If you have pictures of your kids, spouse, pets or family with their names and dates of birth in captions on your Facebook profile, it's possible for a hacker to see this (assuming the pictures are shared with the public). You can bet that they will try these names as your password. You would be surprised at the amount of personal information on the web. We suggest searching for yourself using your email address(es), social media profile names, etc. and seeing what information can be found. If your passwords are close to what you find, it would be a good idea to change them immediately.
There are numerous things you can do to minimize the chance that your passwords are stolen and accounts hacked.
New age automation software for today's enterprise

What is Robotic Process Automation?

Robotic Process Automation (RPA) is the process by which businesses train robots, or computer software, to emulate the routine or repetitive responses generally given by employees, turning that work into higher ROI and greater time efficiency. RPA uses robots to perform tasks by interacting with digital systems to execute a business process just as humans do. Unlike humans, robots never sleep, perform tasks without error, and cost a lot less. Most repetitive human user actions are mimicked by RPA robots as they log into applications, copy and paste data, move files and folders, fill in forms and extract structured and semi-structured data from documents.

Difference between RPA and other enterprise automation tools

A lower cost of adoption and much faster ROI are the key contributions of a successful RPA integration. Moreover, among the various digital technologies, RPA is less intrusive and does not require replacing existing software applications and systems, causing little or no disruption. With RPA, cost efficiency and compliance are no longer an operating cost but a byproduct of the automation. RPA software has the ability to adapt to changing environments, exceptions and new situations. It can be trained to capture and interpret the actions of specific processes in existing software. Later, it can on its own learn to manipulate data, self-correct, respond to and initiate new actions, and communicate with other systems autonomously.

Benefits of an effective RPA system

With the world going digital (IoT, big data, digital content management and machine learning), Robotic Process Automation software robots offer added vantage points. These rule-following applications do not get tired and make no mistakes, aiding compliance and consistency. Properly instructed and trained RPA robots lead to improved compliance, reliable execution of tasks and reduced risk.
Most importantly, users retain full control over robot performance, and existing regulations and standards can be easily applied and followed. RPA thus reduces processing costs and brings a positive return on investment. Employees are freed from unnecessary repetitive processes and can focus on productive work instead, relieving the rising pressure of work. Scalability is another important advantage of adopting RPA. It is like having an army of robot employees equipped to handle an overflow of transactions: an organization that has integrated RPA can easily scale up and down based on processing loads, and robots can be trained by the tens of thousands at the same time to give consistent performance beyond the limitations of human effort. Moreover, RPA-generated data is free from subjectivity and human bias. When integrated with an analytics tool, such data yields exact insight and understanding for decisions that accomplish process targets and strategic goals.

RPA - Way Ahead

According to a global market insight report, the RPA market is expected to reach $5 billion by 2024. The potential and curiosity behind RPA are reportedly driving its growth and future development.
Its capability not only to make tasks smoother, easier, more flexible, and more efficient, but also to boost cost savings, makes RPA a favorite point of interest for researchers and businesses.
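As a toy illustration of the "copy and paste data, fill in forms" pattern described above, the sketch below re-keys rows from a CSV export into a target application's form layout. The field names (`customer_name`, `invoice_total`) and the sample export are made up for the example; they are not any vendor's API.

```python
import csv
import io

def fill_form(record):
    """Simulate the re-keying a human would do: map source
    fields onto the target application's form fields."""
    return {
        "customer_name": record["name"].strip().title(),
        "invoice_total": f"{float(record['amount']):.2f}",
    }

# A sample export that a human would otherwise re-key by hand.
EXPORT = "name,amount\nada lovelace,1200\ngrace hopper,87.5\n"

forms = [fill_form(row) for row in csv.DictReader(io.StringIO(EXPORT))]
for form in forms:
    print(form)
```

A real RPA robot would drive the target application's UI or API instead of building a dictionary, but the structure (read a record, normalize it, populate a form, repeat without fatigue) is the same.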
A remote radio unit (RRU) in a radio base station system can include a cyclic prefix (CP) module with a CP adder for downlink channel processing and a CP remover for uplink channel processing. The RRU communicates with a baseband unit (BBU) over a physical communication link and with wireless mobile devices over an air interface. Remote radio units are generally installed on towers and are controlled by a controller placed inside a closed shelter on the ground near the tower. The connection between the RRU and the controller is generally optical. Together, the RRU and the controller form the base transceiver station (BTS), which is widely used in cellular communication.
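The CP adder and remover mentioned above do something very simple at the sample level: the adder copies the tail of each OFDM symbol to its front as a guard interval, and the remover discards that guard interval on receive. This is a generic cyclic-prefix sketch in Python, not the RRU's actual signal chain:

```python
def add_cyclic_prefix(symbol, cp_len):
    # Downlink: prepend a copy of the symbol's tail as a guard interval.
    return symbol[-cp_len:] + symbol

def remove_cyclic_prefix(rx, cp_len):
    # Uplink: discard the guard interval before further processing.
    return rx[cp_len:]

symbol = [1, 2, 3, 4, 5, 6, 7, 8]
tx = add_cyclic_prefix(symbol, 2)
print(tx)  # [7, 8, 1, 2, 3, 4, 5, 6, 7, 8]
assert remove_cyclic_prefix(tx, 2) == symbol
```

The prefix makes the symbol look periodic to the receiver, which is what lets multipath echoes be absorbed without inter-symbol interference.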
Who this course is for

Students and professionals with zero or minimal experience of information security and information technology.

What you will learn

- Cybersecurity Fundamentals is an online course designed to give computer science specialists a fundamental knowledge of cybersecurity. The main goal of this course is to help cybersecurity beginners prepare for the certification exam on fundamental cybersecurity.

Kaspersky Academy Trainer

- This course provides detailed knowledge, delivered by Kaspersky Lab specialists, of network and application security, incident response, digital forensics and malware analysis: everything necessary to gain a fundamental knowledge of cybersecurity.
Use Face Crop to crop around faces in an image. Face Crop uses computer vision to automatically detect faces in images. A rectangular crop focused on either all of the faces or the biggest face is then applied. There are two face detection algorithms to choose from: Deep Neural Network and Cascade Classifier. Depending on your images, one may be more effective than the other. To help you select the best options for your images, use the Preview feature in Image and Video Manager. You can upload one of your own images and test both algorithms to see which works best.

Deep Neural Network algorithm. Select deep neural network for improved accuracy and to control the level of confidence in face detection before applying the algorithm. When you select deep neural network, faces only need to be discernible when the image is scaled up or down to 300x300 pixels to be eligible for detection.

Cascade Classifier algorithm. If you select the cascade classifier algorithm, you need to make sure that the faces you are detecting are at least four percent of the size of the largest dimension (height or width) of the image. To be eligible for detection, faces also need to be at least 20x20 pixels in size. For example, if a face is 50x50 pixels and the image width is 1700 pixels, the cascade classifier algorithm will not accurately perform the face crop: four percent of 1700 is 68, so this face is too small in relation to the image width. If, however, the image width is only 1000 pixels, the cascade classifier algorithm can successfully perform the face crop, because the face is more than four percent of the image width.

In most cases, it is sufficient to set the focus and padding and accept the defaults for all other transformation settings. The maximum and minimum size settings scale the cropped image to the width and height you specify. The scaled result has the same aspect ratio as the crop dimensions.
If you set only one dimension (width or height), Image and Video Manager scales the cropped area to that size, maintaining the aspect ratio of the region of interest. If you add the IMQuery transformation to a policy, use the im variable, and select Face Crop, you can use a query string to automatically crop any face in an image using this policy. See Syntax and Examples for the query string parameter syntax.
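The cascade classifier's size constraints described above can be captured in a small helper. This is an illustrative check based only on the rules stated here (faces at least 20x20 pixels, and at least four percent of the image's largest dimension), not Akamai's implementation:

```python
def cascade_classifier_eligible(face_w, face_h, img_w, img_h):
    """Return True if a face of the given size is large enough
    for the cascade classifier to detect reliably."""
    # Absolute minimum: faces must be at least 20x20 pixels.
    if face_w < 20 or face_h < 20:
        return False
    # Relative minimum: four percent of the image's largest dimension.
    min_size = 0.04 * max(img_w, img_h)
    return max(face_w, face_h) >= min_size

# The example from the text: a 50x50 face in a 1700-pixel-wide image.
print(cascade_classifier_eligible(50, 50, 1700, 1000))  # False (needs >= 68)
print(cascade_classifier_eligible(50, 50, 1000, 800))   # True (needs >= 40)
```

A preprocessing pipeline could run this check first and fall back to the deep neural network algorithm when a face is too small for the cascade classifier.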
October 25, 2018

With the completion of the MIRKA2-RX mission, student members of the Small Satellite Group of the University of Stuttgart field-tested technology that will be used in a future CubeSat mission. The MIRKA2-RX mission, which launched on March 18th, 2016, consisted of a micro re-entry capsule (MRC) and the newly developed Low Orbit Technical Unit Separator (LoTUS), both integrated within a REXUS program rocket. The European REXUS/BEXUS program supports scientific and technological experiments on research rockets and balloons, sending two of each into space every year. 132 seconds after lift-off, while the REXUS rocket was at apogee, a pyro cutter onboard the separator would cut the wire securing the MRC to the separator carriage, ejecting it into the upper atmosphere. The ejection would also trigger a mechanical switch enabling battery-assisted data collection from pressure, temperature, acceleration, and radiation sensors placed inside the MRC. Although the ejection itself succeeded, the mechanical switch was not activated until the MRC landed on snow-covered Swedish tundra. Once active, the MRC used its 9603(N) modem and antenna to send back telemetry via Rock Seven (now trading as Ground Control)'s API and server to the MIRKA2-RX team's own server, allowing the team to locate the device. The separator integrated into the MIRKA2-RX mission was also designed to fit inside a forthcoming CubeSat Atmospheric Probe for Education (CAPE) mission developed by the University of Stuttgart Institute of Space Systems (IRS). The CAPE mission will test heat shield materials and a pulsed plasma thruster.
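Short-burst-data modems like the 9603 carry very small payloads, so telemetry is typically packed into a compact binary frame before transmission. The frame layout below (big-endian: pressure in hPa, temperature in tenths of a degree, acceleration in milli-g, a radiation counter) is hypothetical, purely to illustrate the idea:

```python
import struct

def pack_telemetry(pressure_hpa, temp_c, accel_mg, radiation_counts):
    """Pack four sensor readings into an 8-byte big-endian frame:
    uint16 pressure, int16 temperature*10, int16 accel, uint16 counts."""
    return struct.pack(">HhhH",
                       int(pressure_hpa),
                       int(temp_c * 10),
                       int(accel_mg),
                       int(radiation_counts))

frame = pack_telemetry(1013, -21.5, 980, 42)
print(frame.hex())  # 03f5ff2903d4002a
assert len(frame) == 8
```

The receiving server unpacks the same layout with `struct.unpack(">HhhH", frame)`; keeping the frame to a handful of bytes keeps per-message satellite airtime costs down.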
By: Microtek Learning
Feb. 02, 2022 | Last Updated On: Apr. 01, 2022

A database manages a large amount of data daily. Key benefits include:

Accuracy: A database is a collection of data with built-in checks and parameters that keep it accurate. The data is highly secure and precise.

Easy updates: With the use of various database languages, it is easy to update and maintain the data.

Security: A database ensures data security. Access to the information within a database is secured by providing user logins to authorized members.

Data integrity: The integrity of the data is maintained with the help of various parameters and constraints in the database, which ensure that the data present is precise and consistent.

Easy to search: Searching for data within a database is easy with the help of a data query language, which also allows evaluation of the data.

A database is a tabular set of data with rows and columns. It is stored securely in the organizational system and accessed when required. Databases are complex structures, developed with the help of various attributes and keys for better working and understanding of the data tables. Because databases are designed in tabular form, with rows and columns, they are easy to understand and quick to query, and numerous database languages serve as query languages for writing and searching data.

A database is used to manage the internal operations of an organization and to record online interactions with customers and suppliers. Other databases hold administrative information or more specialized data for a specific field. Examples include a computerized library system, a flight reservation system, a computerized parts inventory system, and many content management systems that store websites as collections of web pages in a database.

A relational database holds data related to each other and provides storage and access to that data.
A relational database has rows and columns; each row records a unique id called the key, and the columns hold a record with a value for each attribute. These attributes make it easy to develop relationships between tables. Relational databases have two types of relations: base relations and derived relations. Base relations are stored and hold the data; derived relations are simply evaluated when queried but are not stored themselves.

A non-relational database does not use the SQL query language but still provides facilities for storing and accessing data. It is mostly used in big data and web applications. The prominent non-relational database provided by Azure is the Cosmos database.

An in-memory database, also called an IMDB or main memory database system (MMDB), depends on main memory for computer data storage and is faster than disk-optimized databases. Developing against an in-memory database is easier and needs fewer computer instructions, which removes the seek delay while querying the data. In-memory databases provide faster results by reducing seek time, but volatile RAM is a major issue in times of power loss. In-memory databases upgraded with non-volatile RAM technology run at high speed while maintaining data recovery in the event of power failure.

Microsoft provides services to support SQL databases with Azure. With an elastic pool, single or multiple databases can share resources. We can migrate an on-premises data center to Microsoft Azure without any complex configuration. It is up to the customer whether to shift their databases from on-premises to the Azure cloud with the least effort and optimization, and existing on-premises licensing can be reused.
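The relational ideas above (rows, columns, a unique key, integrity constraints) and the in-memory model can both be sketched with Python's built-in sqlite3 module, whose ":memory:" mode keeps the entire database in RAM:

```python
import sqlite3

# ":memory:" keeps the whole database in RAM; queries avoid disk
# seek time entirely, at the cost of volatility on power loss.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE orders (
        order_id INTEGER PRIMARY KEY,      -- the unique key
        customer TEXT NOT NULL,
        amount   REAL CHECK (amount >= 0)  -- an integrity constraint
    )
""")
conn.executemany(
    "INSERT INTO orders (customer, amount) VALUES (?, ?)",
    [("Acme", 120.0), ("Globex", 75.5)],
)
total = conn.execute("SELECT SUM(amount) FROM orders").fetchone()[0]
print(total)  # 195.5
conn.close()
```

The CHECK constraint is what enforces the "data integrity" property described earlier: an INSERT with a negative amount raises an error instead of silently corrupting the table.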
When using the Azure cloud for database storage, Microsoft Azure is responsible for maintaining the data, managing patches, and providing services. When data from on-premises is deployed on the Azure cloud, it is replicated for backup purposes. Several services deploy data from on-premises to the cloud. The Azure database services are listed below:

Migration service: This service behaves as middleware used to transfer data from the on-premises data center to the Azure cloud.

Synchronization: As data is sent from the on-premises data center to the Azure cloud, this service makes sure that the data present on both ends stays synchronized.

Stretch Database: While transferring data from on-premises to Azure, the data is divided into two types, hot and cold. The new (hot) data is kept on-premises, and the old (cold) data is kept in the Azure cloud.

Data Factory: This service performs extraction, transformation, and loading. When data is transferred from on-premises to the Azure cloud, it needs some conversion while loading; Data Factory helps with this loading and conversion.

Security Center: With data stored in the cloud, security is a primary concern. To keep the data secure, various measures are taken with the help of firewall rules. These can be configured to state what type of traffic can gain access to the data by restricting IP addresses, which creates limitations and reduces cyber attacks.

Cosmos DB: This is the NoSQL data store available in Azure. It works with low latency and is highly available.

Azure Active Directory: Only authorized people are granted access to the data, with the help of Azure Active Directory.

The data sent over from the on-premises data center is stored and managed at the Azure cloud datacenter. Microsoft Azure makes sure that there is no loss of data and that the data in the cloud is always available to the user.
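The IP-restriction style of firewall rule described for Security Center can be sketched with Python's ipaddress module. The allow-list below is hypothetical (it uses documentation-reserved address ranges), purely to show how a "which clients may reach the database" check works:

```python
import ipaddress

# Hypothetical allow-list of client ranges permitted to reach the database.
ALLOWED_RANGES = [
    ipaddress.ip_network("203.0.113.0/24"),    # office egress range
    ipaddress.ip_network("198.51.100.17/32"),  # a single admin host
]

def is_allowed(client_ip):
    """Return True if the client address falls inside any allowed range."""
    addr = ipaddress.ip_address(client_ip)
    return any(addr in net for net in ALLOWED_RANGES)

print(is_allowed("203.0.113.42"))  # True
print(is_allowed("192.0.2.9"))     # False
```

Real cloud firewalls evaluate the same kind of rule at the network edge, before a connection ever reaches the database engine.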
While maintaining data availability, Azure also looks after data security. The Azure datacenter handles various data functions while storing it in the cloud: Azure manages bugs and fixes them, and in the case of failover it manages the data and finds the cause of the potential hazard. Patch management and replication are also performed at the Azure datacenter.

There are three scenarios for implementing a SQL database, depending on the type of service the user needs. Customers that already have an on-premises SQL datacenter and want to migrate to the Azure cloud with minimal changes, full compatibility, and low cost can opt for a managed instance. Azure can also deploy and manage a single database with its own dedicated set of resources, or multiple databases that share the same set of resources.

This article provides basic information and an overview of the Azure database service and its functionality. You can visit our website, Microtek Learning, to explore all the related courses on Azure SQL services. We provide an enhanced learning environment to upgrade your skills. Check out the courses and connect with us to know more!
Parents worry about their kids and obsess over parental control restrictions on kids' phones. Most parents are under the impression that their kid's device is safe only when parental restrictions are in place. On the contrary, a device is more reliable when anti-virus practices are implemented alongside the kid-safety app. While screen-time control is essential, today let's focus on how parents can ensure a kid's phone does not get infected by a virus.

What Is A Computer Virus?

A virus is malware that interrupts the normal functioning of a device by replicating itself and injecting its code. The malware becomes active content in the computer system in the form of scripts, executable files, and so on. Malware comes in different forms and categories, such as trojans, computer viruses, spyware, ransomware, and adware. Hackers aim to add malicious code to the target's machine, which then acts against the computer's user.

Why Do Virus Attackers Target Kids?

Hackers attack devices to steal private data from the victim. The malware is primarily designed to siphon off confidential information to blackmail the victim for money, or solely for sadistic pleasure. Several hackers use kids' devices to wash their hands of dirty activities like stealing data from a large organization: when the authorities track the offenders, the IP traces back to a kid's device. Teens and young children are soft targets for hackers. More often than not, their systems lack safety measures like anti-virus software to scan and filter harmful data, and most online criminals take advantage of this fact and regularly search for child victims. Another dangerous motive for planting a virus on a teen's device is to get hold of private data like photographs and videos. Believe it or not, these media files play an essential role in demands for ransom in exchange for personal photos and videos.

How Does Anti-Virus Software Help?
Let's scrutinize how an anti-virus app benefits a kid's smartphone, given the time children spend on phones. Anti-virus apps scan all incoming data, incoming apps, and existing apps for viruses, and eliminate risky files from the system. Besides anti-virus duties, these tools offer security checks like blocking unknown callers and cleaning temporary files off the device. A major reason to install a reliable anti-virus app on your teen's phone is to help them understand which files and apps are corrupted; the indicators and warnings are very useful in helping kids differentiate right from wrong. There are several other premium services offered by such software, but users need to pay a minimum or hefty sum depending on the app.

Is Anti-Virus Enough for a Kid's Data Protection?

In our view, anti-virus is the first layer of protection for a kid's device: the device is safe and contains a filter to distinguish infected files. However, there also needs to be a tool that can help children and parents once data is already lost. Anti-theft apps are the answer to a teen's stolen or misplaced device. Data is the primary asset on the phone, and a device is stolen with exactly that intention. Once your teen's private and confidential data falls into the wrong hands, the situation is not only risky for your child but may have unavoidable repercussions for the future. Online offenders often use the data for unknown or illegal activities; in some cases, photos are used to impersonate kids and take over their social media accounts. An anti-theft app does not prevent a phone from being stolen or misplaced, but it helps protect the device and its data in several ways.
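At its simplest, the scanning described above compares incoming files against a database of known-bad signatures. The sketch below uses a toy database of SHA-256 hashes; real products combine far richer signatures with heuristics and behavioral analysis:

```python
import hashlib

# Toy signature database: SHA-256 digests of known-bad payloads.
# The payload string here is invented for the example.
KNOWN_BAD = {
    hashlib.sha256(b"evil-payload-v1").hexdigest(),
}

def scan(data: bytes) -> str:
    """Flag a file's bytes as infected if their hash matches a signature."""
    digest = hashlib.sha256(data).hexdigest()
    return "infected" if digest in KNOWN_BAD else "clean"

print(scan(b"evil-payload-v1"))          # infected
print(scan(b"holiday-photo.jpg bytes"))  # clean
```

Pure hash matching is easy to evade (changing one byte changes the hash), which is why modern anti-virus engines layer pattern matching, emulation, and reputation checks on top of it.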
Let's explore a few capabilities offered by Bit Guardian Parental Control:

- Ringer - Just like the phone's ringer, the anti-theft application helps locate a misplaced device and draw attention to it by setting off the ringer. The good news is that it can ring even when the phone is in silent mode.
- Device location - Using the application's GPS-enabled device finder, parents can locate the teen's phone and track the device from the reported location.
- Factory reset - Data can be protected by triggering a remote factory reset, destroying every piece of information on the phone. The data cannot be retrieved afterward, but at least this ensures it does not fall into the wrong hands.

Should Teens Use the Apps Together, or Can Either App Help on Its Own?

Anti-virus and anti-theft apps are not exclusive of each other; they go hand in hand. Parents should confirm the presence of both apps on their kid's phone: one keeps the phone safe from viruses, while the other ensures lost data does not fall into the wrong hands. The best results come from implementing both at an early age, where anti-virus secures the phone from day zero and anti-theft acts as the insurance certificate.
This chapter serves as an introduction to the rest of the book by describing top-down network design. The first section explains how to use a systematic, top-down process when designing computer networks for your customers. Depending on your job, your customers might be other departments within your company, those to whom you are trying to sell products, or clients of your consulting business. After describing the methodology, this chapter focuses on the first step in top-down network design: analyzing your customer's business goals. Business goals include the capability to run network applications to meet corporate business objectives, and the need to work within business constraints, such as budgets, limited networking personnel, and tight timeframes.

This chapter also covers an important business constraint that some people call the eighth layer of the Open Systems Interconnection (OSI) reference model: workplace politics. To ensure the success of your network design project, you should gain an understanding of any corporate politics and policies at your customer's site that could affect your project. The chapter concludes with a checklist to help you determine if you have addressed the business issues in a network design project.

Using a Top-Down Network Design Methodology

According to Albert Einstein:

The world we've made as a result of the level of thinking we have done thus far creates problems that we cannot solve at the same level at which we created them.

To paraphrase Einstein, networking professionals have the ability to create networks that are so complex that when problems arise they can't be solved using the same sort of thinking that was used to create the networks. Add to this the fact that each upgrade, patch, and modification to a network can also be created using complex and sometimes convoluted thinking, and you realize that the result is networks that are hard to understand and troubleshoot.
The networks created with this complexity often don't perform as well as expected, don't scale as the need for growth arises (as it almost always does), and don't match a customer's requirements. A solution to this problem is to use a streamlined, systematic methodology in which the network or upgrade is designed in a top-down fashion. Many network design tools and methodologies in use today resemble the "connect-the-dots" game that some of us played as children. These tools let you place internetworking devices on a palette and connect them with local-area network (LAN) or wide-area network (WAN) media. The problem with this methodology is that it skips the steps of analyzing a customer's requirements and selecting devices and media based on those requirements. Good network design must recognize that a customer's requirements embody many business and technical goals including requirements for availability, scalability, affordability, security, and manageability. Many customers also want to specify a required level of network performance, often called a service level. To meet these needs, difficult network design choices and tradeoffs must be made when designing the logical network before any physical devices or media are selected. When a customer expects a quick response to a network design request, a bottom-up (connect-the-dots) network design methodology can be used, if the customer's applications and goals are well known. However, network designers often think they understand a customer's applications and requirements only to discover, after a network is installed, that they did not capture the customer's most important needs. Unexpected scalability and performance problems appear as the number of network users increases. These problems can be avoided if the network designer uses top-down methods that perform requirements analysis before technology selection. 
Top-down network design is a methodology for designing networks that begins at the upper layers of the OSI reference model before moving to the lower layers. It focuses on applications, sessions, and data transport before the selection of routers, switches, and media that operate at the lower layers. The top-down network design process includes exploring divisional and group structures to find the people for whom the network will provide services and from whom you should get valuable information to make the design succeed.

Top-down network design is also iterative. To avoid getting bogged down in details too quickly, it is important to first get an overall view of a customer's requirements. Later, more detail can be gathered on protocol behavior, scalability requirements, technology preferences, and so on. Top-down network design recognizes that the logical model and the physical design may change as more information is gathered. Because top-down methodology is iterative, some topics are covered more than once in this book. For example, this chapter discusses network applications. Network applications are discussed again in Chapter 4, "Characterizing Network Traffic," which covers network traffic caused by application- and protocol-usage patterns. A top-down approach lets a network designer get "the big picture" first and then spiral downward into detailed technical requirements and specifications.

Using a Structured Network Design Process

Top-down network design is a discipline that grew out of the success of structured software programming and structured systems analysis. The main goal of structured systems analysis is to more accurately represent users' needs, which are unfortunately often ignored or misrepresented. Another goal is to make the project manageable by dividing it into modules that can be more easily maintained and changed. Structured systems analysis has the following characteristics:

The system is designed in a top-down sequence.
During the design project, several techniques and models can be used to characterize the existing system, new user requirements, and a structure for the future system.

A focus is placed on understanding data flow, data types, and processes that access or change the data.

A focus is placed on understanding the location and needs of user communities that access or change data and processes.

A logical model is developed before the physical model. The logical model represents the basic building blocks, divided by function, and the structure of the system. The physical model represents devices and specific technologies and implementations.

With large network design projects, modularity is essential. The design should be split functionally to make the project more manageable. For example, the functions carried out in campus LANs can be analyzed separately from the functions carried out in remote-access networks, virtual private networks (VPNs), and WANs. Cisco Systems recommends a modular approach with its three-layer hierarchical model. This model divides networks into core, distribution, and access layers. Cisco's Secure Architecture for Enterprises (SAFE) and Enterprise Composite Network Model (ECNM), which are discussed in Part II of this book, "Logical Network Design," are also modular approaches to network design.

With a structured approach to network design, each module is designed separately, yet in relation to other modules. All the modules are designed using a top-down approach that focuses on requirements, applications, and a logical structure before the selection of physical devices and products to implement the design.

Systems Development Life Cycles

Systems analysis students are familiar with the concept that typical systems are developed and continue to exist over a period of time, often called a systems development life cycle.
Many systems analysis books use the acronym SDLC to refer to the life cycle, which may sound strange to networking students who know SDLC as Synchronous Data Link Control, a bit-oriented, full-duplex protocol used on synchronous serial links, often found in a legacy Systems Network Architecture (SNA) environment. Nevertheless, it's important to realize that most systems, including network systems, follow a cyclical set of phases, where the system is planned, created, tested, and optimized. Feedback from the users of the system causes the system to then be re-created or modified, tested, and optimized again. New requirements arise as the network opens the door to new uses. As people get used to the new network and take advantage of the services it offers, they soon take it for granted and expect it to do more. In this book, network design is divided into four major phases that are carried out in a cyclical fashion:

Analyze requirements. In this phase, the network analyst interviews users and technical personnel to gain an understanding of the business and technical goals for a new or enhanced system. The task of characterizing the existing network, including the logical and physical topology and network performance, follows. The last step in this phase is to analyze current and future network traffic, including traffic flow and load, protocol behavior, and quality of service (QoS) requirements.

Develop the logical design. This phase deals with a logical topology for the new or enhanced network, network layer addressing, naming, and switching and routing protocols. Logical design also includes security planning, network management design, and the initial investigation into which service providers can meet WAN and remote access requirements.

Develop the physical design. During the physical design phase, specific technologies and products to realize the logical design are selected.
Also, the investigation into service providers, which began during the logical design phase, must be completed during this phase.

Test, optimize, and document the design. The final steps in top-down network design are to write and implement a test plan, build a prototype or pilot, optimize the network design, and document your work with a network design proposal.

These major phases of network design repeat themselves as user feedback and network monitoring suggest enhancements or the need for new applications. Figure 1-1 shows the network design and implementation cycle.

Figure 1-1 Network Design and Implementation Cycle

The Plan Design Implement Operate Optimize (PDIOO) Network Life Cycle

Cisco Systems teaches the Plan Design Implement Operate Optimize (PDIOO) set of phases for the life cycle of a network. It doesn't matter exactly which life cycle you use, as long as you realize that network design should be accomplished in a structured, planned, modular fashion, and that feedback from the users of the operational network should be fed back into new network projects to enhance or redesign the network. Learning the Cisco steps is important if you are studying for a Cisco design certification. For that reason, the steps are listed here:

Plan. Network requirements are identified in this phase. This phase also includes an analysis of areas where the network will be installed and an identification of users who will require network services.

Design. In this phase, the network designers accomplish the bulk of the logical and physical design, according to requirements gathered during the plan phase.

Implement. After the design has been approved, implementation begins. The network is built according to the design specifications. Implementation also serves to verify the design.

Operate. Operation is the final test of the effectiveness of the design.
The network is monitored during this phase for performance problems and any faults, to provide input into the optimize phase of the network life cycle.

Optimize. The optimize phase is based on proactive network management, which identifies and resolves problems before network disruptions arise. The optimize phase may lead to a network redesign if too many problems arise due to design errors, or as network performance degrades over time as actual use and capabilities diverge. Redesign may also be required when requirements change significantly.

Retire. When the network, or a part of the network, is out of date, it may be taken out of production. Although Retire is not incorporated into the name of the life cycle (PDIOO), it is nonetheless an important phase.

Figure 1-2 shows a graphical representation of the Cisco PDIOO network life cycle.

Figure 1-2 PDIOO Network Life Cycle
The potential for election tampering, hackers taking out internet-connected traffic cameras prior to the Inauguration in Washington D.C., and recent massive data breaches are part of our daily news feed and the cybersecurity dangers of our wonderfully connected world. As our reliance on network-enabled devices continues to grow and, particularly in the areas of government and citizen service, we see the astounding benefits of such connectivity, but the omnipresent threats of cyber attack are becoming even more overwhelming. There is a missing component in most cybersecurity efforts — geographic information and, when it comes to protecting our nation’s cyber infrastructure, it is a very important piece. Cyber threats affect more than just the information technology infrastructure of an agency or command. These threats cause disruptions to its entire network that can impact its principal business functions and mission. As such, cybersecurity should be assessed in terms of its direct contribution to the successful execution of an organization’s primary missions. Organizations can no longer ignore cyber threats or delegate security to the information technology department alone. Cyber defense must be integrated into traditional security activities, such as physical and personnel security as part of an overarching effort to protect business operations from both external and internal threats.
Cybersecurity activities must be prioritized and aligned to strategic business objectives. Geographic information system (GIS) technology is the foundation needed to establish shared situational awareness for interdisciplinary activities. Utilizing GIS will help to improve cyber defense and enable a cross-disciplinary approach to providing organizational mission assurance by helping prioritize the availability of IT systems based on mission priorities. By combining traditional cyber indicators with a geospatial platform, organizations can quickly discover and prioritize all manner of cyber threats, both natural and manmade, intentional or accidental, by creating a comprehensive model that integrates all available data. The result is organizationwide agility that combines physical and cyber activities when responding to service interruptions and complex intrusions. It also prioritizes preemptive actions that can prevent disruptions or mitigate their impact. Missions or business activities conducted by personnel and organizations can be prioritized. People use devices (desktop computers, mobile devices) to interact with systems to conduct their missions and business activities. Devices and systems are connected to networks to exchange information and the data needed for those activities. The geographic layer serves as the common integrating framework across all layers. Integration is achieved by geo-locating all nodes, including people, user devices and infrastructure devices, and the network segments that connect them within and between layers. Geospatially enabling a common operational picture (COP) allows users to consider the effect of non-cyber, physical events in relation to cyber devices as well. Traditional geospatial datasets, such as weather, crime patterns and physical security threats can provide value to cyberspace operators when assessing risk to their communications networks. 
Regardless of the cause of the disruption, cyber operators must be able to anticipate the risk of failure for certain, critical devices and then determine the mission impact of those device failures. Connecting cybersecurity activity to a geographic layer provides the foundation from which shared situational awareness can be achieved. A truly comprehensive GIS platform must be able to support user workflows, collaboration and the dynamic situational awareness necessary to meet a variety of mission requirements. The technology that can deliver these capabilities is available on many networks and from devices such as tablets and smartphones, providing personnel with access to information and data to support decisions for awareness, prevention, protection, response and recovery. The location intelligence that runs through a GIS platform can be quickly accessed, understood and shared to support coordinated actions. The power of GIS combines location with cybersecurity activity and other data to better anticipate, detect, respond to and recover from threatening security incidents. The technology is easily integrated into an organization’s existing command and control structure to ensure that leadership has access to complete and accurate data for decision making. In fact, GIS platforms are already widely used in national security agencies, including defense, national intelligence, critical infrastructure protection and emergency management. Integrating the power of location intelligence with cybersecurity data allows organizations to make better decisions before security is compromised, rather than when it is too late. Jeff Peters is the director of the national government sector for Esri. How location intelligence digitally transforms government to foster better collaboration
Lock modes specify different kinds of locks. The choice of which lock mode to apply depends on the resource that needs to be locked. The following three lock types are used for row- and page-level locking:
- Shared (S)
- Exclusive (X)
- Update (U)

The following discussion concerns the pessimistic concurrency model. The optimistic concurrency model is handled using row versioning.

A shared lock reserves a resource (page or row) for reading only. Other processes cannot modify the locked resource while the lock remains. On the other hand, several processes can hold a shared lock for a resource at the same time—that is, several processes can read the resource locked with the shared lock.

An exclusive lock reserves a page or row for the exclusive use of a single transaction. It is used for DML statements (INSERT, UPDATE, and DELETE) that modify the resource. An exclusive lock cannot be set if some other process holds a shared or exclusive lock on the resource—that is, there can be only one exclusive lock for a resource. Once an exclusive lock is set for the page (or row), no other lock can be placed on the same resource. The database system automatically chooses the appropriate lock mode according to the operation type (read or write).

An update lock can be placed only if no other update or exclusive lock exists. On the other hand, it can be placed on objects that already have shared locks. (In this case, the update lock acquires another shared lock on the same object.) If a transaction that modifies the object is committed, the update lock is changed to an exclusive lock if there are no other locks on the object. There can be only one update lock for an object. Update locks prevent certain common types of deadlocks. (Deadlocks are described at the end of this section.)

Table 1 shows the compatibility matrix for shared, exclusive, and update locks.
The matrix is interpreted as follows: suppose transaction T1 holds a lock as specified in the first column of the matrix, and suppose some other transaction, T2, requests a lock as specified in the corresponding column heading. In this case, “yes” indicates that a lock of T2 is possible, whereas “no” indicates a conflict with the existing lock.

Table 1 Compatibility Matrix for Shared, Exclusive, and Update Locks

The Database Engine also supports other lock forms, such as latches and spinlocks. The description of these lock forms can be found in Books Online. At the table level, there are five different types of locks:
- Shared (S)
- Exclusive (X)
- Intent shared (IS)
- Intent exclusive (IX)
- Shared with intent exclusive (SIX)

Shared and exclusive locks correspond to the row-level (or page-level) locks with the same names. Generally, an intent lock shows an intention to lock the next-lower resource in the hierarchy of the database objects. Therefore, intent locks are placed at a level in the object hierarchy above that which the process intends to lock. This is an efficient way to tell whether such locks will be possible, and it prevents other processes from locking the higher level before the desired locks can be attained. Table 2 shows the compatibility matrix for all kinds of table locks. The matrix is interpreted exactly as the matrix in Table 1.

Table 2 Compatibility Matrix for All Kinds of Table Locks
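The rules behind Table 1 can be sketched in a few lines of code. The following Python snippet is an illustrative model of the standard shared/update/exclusive compatibility rules, not SQL Server's actual lock manager:

```python
# Illustrative model of the row-level lock compatibility matrix.
# True means a requested lock is compatible with a held lock.
# This is a sketch of the standard S/U/X rules, not SQL Server internals.

COMPATIBLE = {
    # held lock -> {requested lock: compatible?}
    "S": {"S": True,  "U": True,  "X": False},
    "U": {"S": True,  "U": False, "X": False},
    "X": {"S": False, "U": False, "X": False},
}

def can_grant(held_locks, requested):
    """A requested lock is granted only if it is compatible
    with every lock currently held on the resource."""
    return all(COMPATIBLE[h][requested] for h in held_locks)

# Several readers can share a row at the same time...
print(can_grant(["S", "S"], "S"))   # True
# ...but a writer must wait until all readers release their locks.
print(can_grant(["S"], "X"))        # False
# Only one update lock at a time, though it coexists with shared locks.
print(can_grant(["S", "U"], "U"))   # False
```

Running the sketch also shows why update locks prevent a common deadlock: two transactions can both hold shared locks and then block each other when each tries to upgrade to exclusive, whereas only one of them can ever hold the update lock in the first place.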
Japanese researcher Kento Oki has discovered a bug in PatchGuard that could be exploited by an attacker to load unsigned malicious code into the Windows operating system kernel. PatchGuard, also known as Kernel Patch Protection, is a software protection utility designed to prevent the kernel of 64-bit versions of Windows from being patched, in order to block rootkit infections and the execution of malicious code at the kernel level. The feature was first introduced in 2005 with the x64 editions of Windows XP and Windows Server 2003 Service Pack 1. The news was first reported by The Record, which also pointed out that the vulnerability has yet to be addressed by the IT giant. “In an email last week, Kento told The Record he did not report the bug to Microsoft because the company previously ignored three other PatchGuard bypasses discovered in the past years and knew the company wouldn’t be rushing to fix it.” reported The Record. The issue is considered very dangerous because all 64-bit versions of Windows support the PatchGuard feature. Patching the kernel could allow attackers to run malicious code in kernel mode, which means malware could run with the highest level of privileges and go undetected by common security solutions. Over the years, security experts have devised multiple attacks to bypass PatchGuard, such as the GhostHook hooking technique. Microsoft has always downplayed the severity of Kento-like attacks because they require that the attacker be able to run code with admin privileges, and the IT giant points out that with this level of permission it is already possible to take over any Windows system. In any case, Microsoft did not patch the PatchGuard bypass attacks that were devised by researchers in the last couple of years; the company labeled the issue a security non-issue. Experts pointed out that these hacking techniques could be used to plant rootkits into Windows systems and bypass security measures.
PyTorch and TensorFlow are two popular machine learning libraries used for advanced data analytics and prediction. Facebook’s Artificial Intelligence Research (FAIR) lab developed PyTorch. The Google Brain team developed TensorFlow. We use both frameworks for deep learning. In this blog, we will educate you about the origins of PyTorch and TensorFlow and discuss the use cases for each of them.

Deep Learning Concepts

Highly intelligent computer programs capable of ‘learning’ from data have been around for a couple of decades now. The latest ones use an ingenious technique called deep learning. Deep learning employs an artificial neural network, replicating human brain function, with three or more layers. These neural networks try to learn from enormous sets of data. While a single layer of neural networks can make approximate predictions on its own, it is the multi-layer structure of the neural network that makes deep learning algorithms so accurate. Deep learning is behind a good deal of artificial intelligence (AI) applications and services that perform automation and analytical tasks without human intervention. These applications are everywhere around you, from something as simple as a weather app on your smartphone to more sophisticated state-of-the-art technologies like self-driving cars.

What is PyTorch?

PyTorch is an open-source machine learning library developed by the FAIR lab. It was first introduced in 2016 and is available as free software under a BSD license. It derives its name from a combination of two words you are probably familiar with: Python and Torch. Python is PyTorch’s user interface. Torch, released back in 2002, is one of the first machine learning libraries. The name “Torch” here has a significant reference: PyTorch shares some of its C++ backend with Torch, which lets programmers extend it using C/C++.

Advantages of using PyTorch

One of the key advantages of PyTorch is that it uses Python as the primary programming language.
Python is one of the most popular languages used for machine learning because of its versatility and ease of use. As part of the Python package ecosystem, PyTorch is also fully compatible with other popular Python libraries such as NumPy and SciPy. Because its base is Python, PyTorch is relatively easy to learn compared to other machine learning frameworks, and its syntax is familiar to anyone who already programs in Python. Besides ease of use, PyTorch now sports a hybrid user interface that allows you to work in two modes: eager mode and graph mode. Eager mode is better for R&D projects, whereas graph mode offers great functionality in a C++ runtime environment.

Disadvantages of using PyTorch

PyTorch has long lacked a coherent model-serving story in production; TorchServe, its model-serving component, is currently experimental. PyTorch also does not have an extensive monitoring and visualization interface like TensorFlow’s TensorBoard, so inspecting training runs and data handling is more complex.

What is TensorFlow?

TensorFlow originates from Google’s own machine learning software, which was later refactored and optimized for production. As a result, Google released TensorFlow to the world as an open-source machine learning library in 2015. TensorFlow’s name is again a conjunction of two keywords: Tensor and flow. The most basic data structure in TensorFlow is called a “Tensor”. You can perform operations on these tensors by building stateful data ‘flow’ graphs (like a flowchart). Data science teams see TensorFlow as the go-to production-grade library. Being one of the earliest modern machine learning libraries available, TensorFlow has garnered a huge user base. Its popularity declined a little after PyTorch came out in 2016; however, Google’s 2019 release of TensorFlow 2.0 significantly changed the play. The TensorFlow 2.0 update made the software more accessible and user-friendly.
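The eager-versus-graph distinction can be illustrated without installing either framework. The toy code below is a hypothetical sketch, not PyTorch or TensorFlow API: eager mode computes each operation immediately, while graph mode first builds a dataflow graph of nodes and only computes when the graph is explicitly run.

```python
# Toy illustration of eager vs. deferred ("graph") execution.
# These classes are hypothetical teaching aids, not framework APIs.

# Eager mode: every operation is evaluated as soon as it is called.
def eager_multiply_add(x, y, z):
    return x * y + z

# Graph mode: operations build a dataflow graph; nothing is computed
# until the graph is run with concrete input values.
class Node:
    def __init__(self, op, *inputs):
        self.op, self.inputs = op, inputs

    def run(self, feed):
        if self.op == "input":
            return feed[self.inputs[0]]          # look up a fed-in value
        vals = [n.run(feed) for n in self.inputs]  # evaluate child nodes
        return vals[0] * vals[1] if self.op == "mul" else vals[0] + vals[1]

# Build the graph once...
x, y, z = Node("input", "x"), Node("input", "y"), Node("input", "z")
graph = Node("add", Node("mul", x, y), z)

# ...then execute it with different inputs, as a runtime would.
print(eager_multiply_add(2, 3, 4))            # 10
print(graph.run({"x": 2, "y": 3, "z": 4}))    # 10
print(graph.run({"x": 5, "y": 1, "z": 0}))    # 5
```

The graph version pays an up-front construction cost but can be analyzed, optimized, and re-executed with different inputs, which is why graph mode suits production runtimes while eager mode suits interactive research.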
Advantages of using TensorFlow

You can get started on your project quicker with TensorFlow because of the heaps of data and pre-trained models that are already built in. All TensorFlow developers have full access to this data in Google Colab notebooks; Colab is a collaborative workspace provided by Google. TensorFlow has a history of being easily deployable on most machines via its model-serving component, which makes the software extremely scalable. PyTorch started its open-source serving library, still experimental, only in 2020.

Disadvantages of using TensorFlow

Google’s update from TensorFlow 1.x to TensorFlow 2.0 changed a lot of notable features, so developers familiar with the previous version may have difficulty adjusting. This isn’t a concern for those starting fresh. TensorFlow is also slower than its competitors: while it can handle industrial data sets, it takes more time to do so.

PyTorch vs TensorFlow: Which one to use?

It depends on what you want to do. Both PyTorch and TensorFlow have their merits and demerits, so it is better to compare the two using certain parameters to understand their strengths and weaknesses. Here are some aspects you could use to make a further judgment: TensorFlow is the clear winner for deployment. PyTorch launched its serving library, TorchServe, only recently, whereas TensorFlow has been offering services like TensorFlow Lite and TensorFlow.js for years. PyTorch’s overall functionality and ease of use make it ideal for researchers and students. TensorFlow is extremely scalable and easily deployable and is, therefore, a favorite for production. PyTorch allows developers to enable training across multiple GPUs with just a single line of code in parallel mode; to implement this in TensorFlow, you need to write a lengthier program. Easy debugging is another major factor that makes PyTorch the perfect platform for new deep neural network developers.
We can debug PyTorch using regular Python debuggers that most users are already familiar with, such as the PyCharm debugger and pdb. For TensorFlow, however, developers must learn the library’s own debugger.
3
Securing the flow of Big Data In the digital age, information is the new currency. And in order to get information, enterprises are mining data – and lots of it – for the knowledge that it can yield. On the scale of Internet commerce or social networks, the amount of data can be pretty large - think of the hundreds of millions of smartphones and end-user devices. On the scale of consumer, medical, scientific, or research data, it can be gigantic, as sensors and instruments can collect vast amounts of raw data, whether from a single source (such as instrumentation of a GE aircraft engine during a flight) or from the projected 26 billion devices that will make up the Internet of Things. The Gold Rush we currently see for collecting and analyzing Big Data, which in turn is being fed increasingly by the Internet of Things, is creating greater challenges for the networks and security of the data centers in three key areas: First, there is Aggregation. Increasingly, rather than processing and reducing the raw data at the data source to a more manageable volume, raw data is being transferred and stored centrally – because it now can be – so that it can be analyzed in different ways over time. Today, enterprises are transferring terabytes of data over long distances every day. The sheer quantity of data is forcing core network and data center upgrades, such as 100GbE switching fabric, to deal with individual data transfers at 10Gbps or even higher. This also creates challenges for perimeter security, such as firewalls, as many vendor solutions are not designed to handle such large inflows and sessions. For example, a firewall that boasts 10 GbE ports or 40 Gbps aggregate throughput may not actually have internal processing paths all the way through to handle an individual 10Gbps flow. LAN congestion from normal enterprise campus traffic may further saturate network appliance CPU or memory resources, causing large flows to stall or even drop. Next comes Processing. 
Big data flows are not symmetric – the raw data that goes in does not necessarily go out in the same form and volume. Instead, the data kept in storage arrays is typically analyzed by an intermediary set of servers, then further reduced and delivered – often by web server front-ends – as a reduced set of insights before exiting the data center. This means higher bandwidth with an increasing proportion of lateral, or east-west, traffic within the data center, instead of north-south traffic that is going out to the Internet or elsewhere. Many studies show that east-west traffic now accounts for up to 70% of the data center traffic, and this trend will continue to increase with the growing amount of big data analytics. East-west traffic needs to be segmented and inspected, not just for blocking lateral movement of advanced persistent threats and insider attacks, but to secure the data itself, some of which can be sensitive if disclosed or leaked. Network security architectures need to evolve from a perimeter- or gateway-oriented model to a multi-tiered, hybrid architecture, as more east-west traffic becomes virtualized and abstracted with the adoption of server/network virtualization and cloud computing. Last, there is Access. As data is archived for long periods, who is authorized to access which data, and for what purposes? Often there is not just a single data set, but rather multiple repositories of data that may be combined and analyzed together. Each set of data may contain certain sensitive or confidential information, and may be subject to specific regulations or internal controls. Further, there is often not just one group of analysts or researchers, but over time many constituents seeking to gain different insights. A large pharmaceutical company provided a good example where their Big Data research efforts were open to not just internal employees, but also contractors, interns, and visiting scholars.
For each of them, a separate analytics sandbox needed to be created, authorizing and auditing specific entitlements to identified data sets that could be accessed and combined. In such a context, IT organizations need to fundamentally rethink network security instead of taking incremental steps to meet evolving data center security needs. In many cases, the data center infrastructure itself is presently being consolidated and transformed due to not just Big Data, but ongoing cloud computing and SaaS initiatives as well. As part of this transformation, IT should consider an architecture that is:

• High-performance – supports larger volumes of data with higher network throughput, high-speed ports (e.g. 40G/100G fabric), and high port density, while remaining scalable and elastic to accommodate ever-growing data sets

• Secure – augments perimeter security with increased internal segmentation to secure east-west movement of data and to monitor for advanced and insider threats

• Consolidated – integrates multiple security functions, from core functions like firewalling/VPN, anti-malware, and intrusion prevention to advanced threat protection, strong authentication, and access control

Finally, customers may consider the opportunities to leverage Big Data itself for better security. With more control and monitoring points being deployed throughout the network and the data center, plus SIEM (security information and event management) and log management tools being able to aggregate increasing amounts of security logs and event data, more security analytics and insights are possible to better protect not only Big Data but also the data center as a whole.
The widely known Stuxnet incident made sure that the mere mention of SCADA vulnerabilities is enough to make security experts pause. After all, sabotaging industrial control systems (of which SCADA is just one type) can lead to unprecedented and serious consequences: loss of lives and critical infrastructure, catastrophic pollution, and so on. It's no wonder, then, that each new exploitable SCADA vulnerability is given due attention, especially when it is discovered by engineers from a prominent technology and software provider for the energy sector like Cimation, whose clients include giants like Shell and Chevron. At the Black Hat security conference in Las Vegas, Eric Forner and Brian Meixell, two of the company's engineers, held a practical demonstration, on a simulation rig, of the exploits they have come up with to interfere with the normal functioning of valves regulating the pressure and flow within systems like oil wells and pipelines. They took remote control of the programmable logic controller (PLC), were able to make the pumps go on and off, and could even send data to the Human Machine Interface (HMI) reporting that nothing was out of the ordinary, or the opposite of what was actually happening; the HMI would then pass that data to the human operator who oversees and controls the functioning of the system. The attack they demonstrated is possible only if the targeted PLC is connected to the Internet and has an old Ethernet module plugged into it. These modules, with their ancient Linux installations, are crucial for compromising the PLC, and, unfortunately, these two conditions are met by tens of thousands of PLCs in use all around the world. According to The Register, the two engineers also said that it's possible, even though less likely, for attackers to compromise PLCs that are not kept on the company network.
Hackers and bad actors continue to get more creative and persistent in their attempts to access your data. All it takes is one vulnerability in your organization to let them in and create a data breach. In fact, the number of data breaches is growing steadily. In 2021, by the end of September, 1,291 data breaches had occurred, surpassing the total number of breaches in 2020 by 17 percent. The financial impact of these breaches is steep and growing. Over the last two years, the average cost of a data breach increased almost 10 percent—from $3.86 million in 2020 to $4.24 million in 2021. Protecting your data from unauthorized access is critical to maintaining its confidentiality, integrity, and privacy. It means safeguarding your customer information, employee records, data collection, and transactional data from fraudulent activities like identity theft, phishing, and hacking. It involves training your employees on cyber security policies and practices to ensure data privacy and protection. By actively protecting your data, you reduce the risk of a data breach and the financial impacts when one occurs. But data protection is no small undertaking. To help you make sense of it, this guide outlines what you need to know to ensure you do data protection right. It explains:
- Why data protection is important
- Why data protection fails
- How to provide effective data protection
- Tips and resources to guide you along the way
Keep reading to learn how to put your data protection plan to work.

What is data protection

Each day, billions of people around the world exchange their personal and financial data over the internet. The bulk of that data is based on purchases, information requests, and use of digital services. The plethora of shared data has increased the number of attack surfaces and, consequently, the demand to secure it from data leaks, theft, and corruption.
Protecting this information is critical to prevent hackers from stealing or compromising it during a data breach. Governments and industry regulators define data protection as encompassing:
- Data safety: Basic data protection measures that require all organizations to create data backups and enforce data retention processes.
- Data security: Data flow security that includes encryption protocols, strict authentication processes, and threat monitoring solutions for faster incident response times.
- Data privacy: Continuous monitoring of third-party access to data and documentation of all data that leaves their ecosystem.
As data privacy continues to create challenges, consumers and organizations have pushed for regulations to keep their information safe from hackers. In response, industry regulators and government agencies have issued stringent data governance and protection standards worldwide. One of these standards is the General Data Protection Regulation (GDPR)—a European Union (EU) law on data privacy and protection. In the UK, the government issued its own privacy law called the Data Protection Act 2018. And the US has several data privacy standards, including the California Consumer Privacy Act (CCPA) of 2018, the Gramm-Leach-Bliley Act, and the Health Insurance Portability and Accountability Act (HIPAA).

Why data protection is important

Data protection is important for the following reasons:
- Sustainable compliance: Strong data protection means you achieve sustainable compliance and avoid legal trouble and major fines. In 2018, British Airways learned this lesson the hard way. The company failed to protect over 400,000 personal records—a GDPR violation—costing it a £20 million ($27 million) fine and more after victims filed private lawsuits.
- Good data lifecycle management: Having the right infrastructure and monitoring technology automates data flows and streamlines communications within your ecosystems.
You can safely store data at rest, creating end-to-end data protection coverage for optimized security. - Better disaster recovery (DR) capabilities: No organization or database is immune to cybercrime. Strong data protection measures enable your organization to quickly respond to security threats, mishaps, and attacks. While your security team pinpoints the issues and restores backups, your legal team works to stay compliant. As the average cost of a data breach exceeds $4.24 million this year, organizations can't afford not to protect their data. By following data protection laws, you reduce your company's risk of a data breach, avoid high-priced fines, and maintain your reputation. Essential data protection terminology As you prepare for better data protection in your organization, you must first understand these terms: - Breach. An event when a threat actor accesses data, network connections, or devices without authorization. Also called a security breach or data leak. - Compliance. Implementation of technological and practical security measures to meet a third party's regulatory or contractual requirements. Examples include GDPR, HIPAA, and SOC 2. - Cyberattack. Unauthorized access to a computer system or network with the intent of destroying or controlling technology systems by changing, deleting, locking, or stealing the data in them. - Cybercrime. Theft of, or damage to, information caused by threat actors using technology or technical devices. Often the result of social engineering attacks, such as phishing, identity theft, and hacking. - Cyber security. A strategic combination of technology, networks, hardware, software, systems, and training to protect information from unauthorized access. - Cyber security awareness. Education and training programs that teach employees how to protect themselves and their organization from cybercrime. Part of an organization's overall security policy. - Data center. 
A physical location where an organization stores its data. - Data protection. The practice of ensuring data safety, security, and privacy by using data protection software and following government regulations and organizational policies and procedures. - Data protection officer. A key role in overseeing proper data handling, retrieval, and storage. - Data protection standards. Industry and governmental laws and regulations for organizations to safeguard personal and sensitive data to maintain its safety, security, and privacy. - Data protection awareness. A culture created by continuously training employees on the importance of data protection, privacy, and security. - Hacker. A person who gains unauthorized access to systems, networks, or data by using technical skills and technology. - Malicious actor. An entity with the potential to break through an organization's security. Also referred to as a threat actor. - Malware. A harmful computer program hackers use to access and destroy sensitive information using methods like trojans, viruses, and worms. More formally known as malicious software. - Risk. The potential for exposure or loss that can result from a cyberattack or data breach. - Security. The combination of people, policies, and tools to protect an organization's assets and property. - Security posture. An organization's cyber security readiness, as demonstrated by how well its employees and technology protect its technical infrastructure, network, information, and equipment from a cyberattack. - Simulation training. Cyber security training that mimics real-life attacks as they occur in an employee's workflow. - Threat. The potential for a hacker, insider, or outsider to access, damage, or steal an organization's information, intellectual property, or data. Also referred to as a cyber threat. - Virus. Malicious code (malware) that damages or steals data from computers and other devices as it spreads. - Vulnerability. 
A flaw in the software code, system configuration, or security practices that hackers look for to gain unauthorized access to a system, network, or data. - Zero trust. A layered security approach in which internal and external users must be authenticated, authorized, and validated to gain access to applications or data. Why data protection fails Your cyber security strategy is a complex mix of tools, policies, procedures, and training. Any vulnerabilities or weaknesses in that strategy can mean a failure in data protection, opening your organization up to a data breach. Take a look at the most common reasons data protection strategies fail. No data protection officer Data protection officers (DPOs) plan, set up, and enforce frameworks to protect data and safeguard sensitive information according to organizational requirements. In doing so, they help maintain business continuity while managing disaster recovery—a key aspect of today's data protection laws. Despite data protection regulations that require a DPO, many organizations neglect to fill the role. Only when it's too late do they realize they needed one. Without a DPO, your organization is at greater risk of experiencing a data breach and its fallout, including: - Compromised data integrity - Loss of reputation and trust - Legal action - Heavy fines from regulatory agencies In fact, any of the following failures in data protection carries the same risks. Inconsistent data protection standards Data governance, privacy, and protection are a global priority for consumers and governments. As hackers and threat actors become increasingly clever in their tactics, governments around the world have established strict regulatory compliance requirements for data protection, such as GDPR, CCPA, and HIPAA. Despite government and regulatory agency efforts to enforce data protection standards, many organizations don't follow them or don't follow them consistently. 
Their lack of compliance puts the safety, security, and privacy of their data at risk of being compromised in a cyberattack. Ineffective data protection software The world produces a lot of data—74 zettabytes (ZB) projected for 2022 and 175 ZB estimated by 2025. Hackers and threat actors are eager to get their hands on this data, constantly looking for weaknesses in data protection so they can claim it. Unfortunately, organizations might choose not to invest in the right data protection solutions because of a lack of resources or the added expense. As a result, they end up with: - Unsecured data - Unprotected data privacy - No way to back up, recover, and restore their data Without the right mix of data protection software in place, organizations increase their chances of data theft, loss, and misuse, especially of sensitive data. Insufficient data center protection practices Data centers hold vast amounts of information and are often the backend of critical business services. For these reasons, they're a prime target for external attacks, including distributed denial of service (DDoS), ransomware, and brute-force attacks. Insufficient data center protection happens when organizations skimp on protecting the following areas: - Physical environment - Physical and remote access - Data and network - Hardware and software Taking shortcuts on protecting your data center is a big risk for any organization. Yet many do, only to have their entire data center come down in a flash from just one attack. Lack of cyber security awareness Your employees are the glue that holds your data protection strategy together. Yet, employees cause 95 percent of all data breaches. After all, they're human, and humans make mistakes. The problem is a lack of cyber security awareness training. 
Without effective cyber security awareness training in place, employees aren't aware of rules and protocols for: - Following data handling, protection, and security practices - Complying with safe security practices - Reacting to social engineering schemes like phishing - Reporting cyber security threats and data breaches As technology evolves, hackers continuously adjust their tactics, intensifying the damage and lasting impact they create. Despite cyber security protection strategies, hackers and threat actors will continue to look for the weakest link—human error. How to achieve successful data protection The key to successful data protection is to create a well-rounded strategy that covers every aspect of your data: where it's stored, how it's used, and how it's shared. To start on the path to successful data protection, begin with the following five steps. 1. Appoint a data protection officer These days, with the increasing rates of cyberattacks and data breaches—and no slowdown in sight—organizations need a dedicated data protection officer. Hire for the role, outsource it, or promote an existing employee into it. Regardless of how you go about filling the DPO role, choose a highly qualified person with a strong background in cyber security, compliance audits, and leadership. By appointing a DPO, you have someone to: - Oversee the big picture of data beyond the physical and technological aspects of security. - Ensure your organization complies with data privacy and protection laws. - Make sure your organization passes all audits. - Educate your employees on data handling protocols and compliance requirements. The DPO gives you a single person focused on your organizational data and a strategy to protect it. 2. Follow the principles of data protection The principles of data protection help secure the personal data your organization uses, stores, and shares. 
Specifically, they protect the names, addresses, phone numbers, email addresses, and credit card information of your employees, customers, and third parties. The principles vary across governments and industries but address data fairness, transparency, accuracy, storage, integrity, and confidentiality. Each principle plays a key role in protecting data and data privacy. Follow the data protection principles for your industry and the regions where you do business. By following them, you achieve: - Sustainable compliance - Good data lifecycle management - Improved disaster recovery capabilities 3. Choose the right mix of data protection software Dedicated data protection software solutions play a pivotal role in your security stack. Solutions are available for all sizes of organizations—from small-to-medium businesses to enterprises. They also cover on-premises, hybrid, and cloud environments. Data protection doesn't come from just one solution but from a combination of them to: - Ensure proper data backup, recovery, and restoration in case of an attack. - Secure your data by using encryption, authentication, and access control. - Safeguard data privacy through policy enforcement and data governance. By choosing the right mix of data protection software that covers these three areas, you'll be better equipped to follow data privacy and protection laws and keep your data safe. 4. Protect your data center Data center protection goes beyond just securing the servers and networks it houses. It means investing in the right tools, solutions, and resources to: - Secure the physical environment by finding an optimal location, raising the floor, and preparing for natural disasters, particularly damage from fire, water, or pests. - Restrict and monitor physical and remote access by maintaining vigilance, layering access, and securing remote access. - Safeguard your data and network by taking on a zero-trust posture and reviewing security policies. 
- Update the hardware and maintain the software that runs your data center. - Establish a data backup plan and run regular backups. - Segment your data center network to limit the extent of a data breach if one occurs. Take extensive measures to monitor and protect your data center around the clock, both physically and virtually, to keep it secure. 5. Create cyber security awareness One of the most critical parts of every cyber security and data protection strategy is to create a culture of awareness across your entire organization. That culture starts with a cyber security awareness training program. Building cyber security awareness is most effective when you: - Train all employees based on their role, location, and cultural differences. - Prioritize the key areas that require training based on risk, such as types of phishing. - Deliver text-based content in shorter bites right in the workflow. - Continuously train employees all year long. - Measure effectiveness based on program metrics, not click rates. - Identify and reduce the number of high-risk employees. - Evaluate the return on investment of your training solution based on data analytics. By providing continuous training, you can adjust your program as needed to account for new types of cyber threats as they arise, keeping your employees informed, aware, and prepared to react to them. Safeguard your data with data protection training When you provide effective data protection training, you give your employees a better understanding of safe data handling practices for data entry, processing, storage, and sharing. You also make them more aware of data protection and privacy laws, so they understand the criticality of protecting your data. When providing data protection training to your employees, include the following topics. Government and industry compliance requirements Governments worldwide have pushed for tight regulations to protect the privacy of citizens. 
These regulations have resulted in well-known standards—each with its own criteria that organizations must follow to avoid fines and penalties. If your company must abide by government and industry standards related to your business, include them in your data protection training program. Go through the key regulations, the audits your company must pass, and your employees' roles in upholding those standards. Data center security strategy Your data center security strategy requires a multilayered approach. The more layers it has, the better it will protect the confidential information it holds. These complex layers can be difficult for employees to understand, so it's critical to include them as part of your data protection training. In particular, include the following general areas, but outline as many essential details as needed: - Both physical and remote access to the data center - Data handling practices - Security for your network, data, hardware, and software Document each policy so your employees can refer to them and help keep your data center secure. Safety protocols for personal data Hackers use brute-force attacks and social engineering campaigns to try to gain access to your systems, networks, and sensitive organization and employee data. To prevent these threats from reaching your personal data, you should have safety protocols in place. These protocols are only helpful if your employees follow them. Therefore, make sure your data protection training program includes your organization's protocols for protecting personal data. For example, include password requirements, use of multi-factor authentication (MFA), and single sign-on (SSO). Also, address protocols about credential sharing and security codes. Supply chain policies As data breaches to the supply chain increase and intensify, ensure you have policies in place to secure your supply chain. 
This guidance helps employees be better prepared to face an attack in an intelligent, strategic, and secure way. So, include supply chain policies as part of your data protection training. Address your policies for selecting and verifying suppliers, conducting risk-level assessments for third parties, and securing your software supply chain from end to end. Also, include risk management practices specific to your supply chain. Cyber security risk assessment Regular cyber security risk assessments help identify corporate assets that can be affected by a security breach and how well your access controls can protect them. As part of your training program, make sure employees understand the importance of cyber security risk assessments. Explain the findings of the reports to help them understand where your organization has vulnerabilities and how they can be more effective in protecting them. Breach reporting protocols All data privacy regulations require organizations to report data breaches immediately. Once a breach is reported internally, your organization must notify authorities and the victims whose personal information has been compromised. In your data protection training, make sure employees know the procedures to follow when a breach occurs and after one happens. The protocols may vary by role, team, and department, so make sure everyone knows what to do and who to contact with questions. Phishing simulation training As a first step in preventing phishing attacks, include phishing simulation training. With effective training, some employees can detect and protect themselves against phishing, but the key is to transform your entire organization's overall security culture. Phishing simulations provide interactive, hands-on training to help employees learn about and react to phishing threats. You can deploy real-life phishing simulations right in your employees' workflow. 
When combined with real-time engagement statistics and insights, your security team can determine the right course of action for employees to take next. Security awareness training for all employees To keep up with the ever-changing, mischievous ways of cyber criminals, train employees on hacking threats and trends so they can be aware of them and respond confidently to prevent them. This "awareness" starts with cyber security awareness training for all employees across the organization, regardless of their role. Provide security awareness training at regular intervals by using bite-sized, customizable content that you can adapt to employees by job role, team, department, or geographic location. As with phishing simulations, use data to gauge the success of your training program, so you know which employees need more or specialized training and which ones can advance to the next topic. Tips for effective data protection training Sure, you have all the latest technology to secure and monitor your data. But the real protection starts with training all employees in your organization to achieve effective data protection. Follow these tips to ensure your data protection training program engages your employees in effecting behavioral change toward data protection and cyber security awareness. 1. Deliver bite-sized, text-based training Deliver Continuous Awareness "Bites" (CAB) in the form of short, text-based training, so employees can learn at their own pace and on their own time. This approach gives them repeated exposure to diverse situations, backed by data-driven training. It also creates multiple engagement opportunities for optimal retention. By using highly adaptable and customizable text-based bites, your employees get the essential training they need to support your organization's data protection policies. 2. 
Embed training in your employees’ workflow Deploy your bite-sized, text-based training by using a just-in-time learning approach right in your employees’ inbox as part of their regular workflow. Attach your training to events to make them relevant and memorable and to create greater engagement. Then, when your employees find themselves in a vulnerable position, they’re motivated to learn how to avoid repeating an error. 3. Run continuous training all year long Change your employees’ behavior toward cyber security and data protection by deploying a training program that runs year-round. Use an autonomous solution that can run every day, all year long. The training program must automatically adapt for each employee and continue sending each bite until they complete the learning. 4. Customize training based on your employees’ roles Customize your training so it reflects the end user’s perspective with immediate relevance. This way, they’ll be motivated to take time to learn. The training is most effective when you can adapt content based on each employee’s role, experience, geographic location, and preferred language. 5. Measure training effectiveness based on real-time data Regularly monitor your data protection training program’s progress to determine its effectiveness and optimize it as needed. When you measure your data protection training, you must be able to clearly identify your high-risk employees, gauge the mean-time between failures (MTBF), and determine the resilience of your teams, departments, and overall organization. 6. Use a machine learning-based platform to change employee behavior Advanced machine learning in data protection training uses your organization’s training data to analyze employee performance statistics. It tailors continuous learning to each employee’s weak spots and follows a just-in-time learning approach. 
Ensure your data protection training program leverages data science and machine learning to identify and minimize high-risk groups in the organization, including new employees, employees with access to sensitive data, and serial clickers. 7. Provide compliance training Your data protection training program is critical to your organization's ability to comply with the leading data protection regulations. Make sure your data protection training program keeps current on data protection compliance and regulations as they evolve. By having your employees follow them, you create greater security for your business and your customers against potential cyber threats. You also reduce your risk of paying large fines when a data breach occurs. Resources for data protection As you plan your data protection training program, follow the guidance in our list of resources. Kick off a meaningful security awareness program Organizations want to train for everything, but they struggle to train for anything effectively. When creating an effective data protection awareness program, focus on three key areas: the most important threats, employee needs, and providing continuous training. Learn what each area entails and how to move from theoretical learning to modifying employee behavior in 3 Tips for Kicking-off a Meaningful Security Awareness Program. Follow tried and true cyber security awareness tips Cyber security awareness is just as critical to your organization as the security measures you take to protect your home. Your organization simply can't afford to be without it. By establishing cyber security awareness policies and practices, you position your employees and organization to avoid cyberattacks and keep your business operating at full speed. To increase cyber security awareness in your organization, follow the 13 can't-miss cyber security awareness tips. 
Train global and diverse workforces Top-tier manufacturing company SodaStream is best known as the maker of the consumer home carbonation product of the same name. When the company experienced a steady increase in phishing attacks, they realized a need for cyber security training across their workforce—from manufacturing to management. Learn how the company trained their employees in How to Train a Global and Diverse Workforce to Reduce the Risk of Cyberattack. Comply with SOC 2 requirements SOC 2 compliance ensures organizations have proper procedures in place to safeguard private information and quickly mitigate cases when data leaks happen. It has become the seal of approval required by organizations to assure customers that their personal information is secure. To ensure your organization passes SOC 2 compliance, you must complete seven steps. Learn what these steps are and download a corresponding checklist in The Only SOC 2 Compliance Checklist You Need. Launch your data protection program In this guide, you learned how to deliver an effective data protection training program. As you launch your program, include these seven essential practices: - Deliver continuous data protection and cyber security training for your employees to create awareness and change. - Provide employees with a hands-on learning approach that’s easy to put into practice. - Identify low-risk to high-risk employees so you can target specific interventions based on their risk level. - Optimize employee learning experiences based on predictive analytics collected by your training program. - Close the security gap between your employees and organization by providing real-time feedback. - Tackle employees’ attitudes and beliefs about threat risks and attacks head-on. - Adopt a scientific training method that brings together learning expertise, data science, and automation. 
Without the right data protection and cyber security awareness training program in place, cyber threats and attacks will persist. Follow these seven essentials for your cyber security awareness employee training program to reduce malicious attacks caused by employee error. Get data protection training from CybeReady Achieve success with your training program by choosing a platform based on learning expertise, data science, and automation. CybeReady makes data protection and security awareness training easy and effective for organizations. Learn how when you request a demo!
Perhaps the most striking point about last week's huge DDoS attack, which took down more than 80 big websites and online services, is that the criminals behind the attack accomplished it not by particularly sophisticated or cutting-edge means, but by creating a veritable army of consumer connected devices — what we call the Internet of Things (IoT). In this post we explain the critical concepts and how this incident is connected with every one of us. On October 21, lots of Americans woke up to find some of their most popular websites were unavailable. No watching Netflix, no transacting business through PayPal, no online gaming with Sony PlayStation. And they couldn't even tweet about the problem — Twitter was down as well. In all, 85 major sites were either showing signs of stress or simply not responding at all. As it turned out, the underlying problem was a series of attacks — three in all — against the American Internet infrastructure. The first wave affected the East Coast. The second one affected users in California and the Midwest, as well as Europe. The third wave was mitigated by the efforts of Dyn, the DNS service company that was the main target of all three attacks. Music services, media, and many other resources were affected. Amazon came in for special attention: a separate attack against it in Western Europe brought the site down for a while. DNS and DDoS So, how is it possible to disrupt so many sites with just three attacks? To understand this, you need to know what DNS is. The Domain Name System, or DNS, is the system that hooks up your browser with the website you're looking for. Essentially, each site has a digital address, a place where it lives, as well as a more friendly URL. 
For example, blog.kaspersky.com lives at the IP address 220.127.116.11. A DNS server works as an address book — it tells your browser at what digital location a site is stored. If a DNS server does not respond to a request, your browser won't know how to load the page. That's why DNS providers (especially major ones) form an important part of critical Internet infrastructure. That brings us to DDoS. A distributed-denial-of-service (DDoS) attack floods the servers that run a website or online service with requests until they collapse and the sites they serve stop working. For a DDoS attack, criminals need to send an enormous number of requests, and that's why they need a lot of devices to do it. To mount one, they usually use armies of hacked computers, smartphones, gadgets, and other connected things. Working together (but without their owners' knowledge or consent) these devices form botnets. Knocking out Dyn So, you see how it all happened: Somebody used a giant botnet against Dyn. It included tens of millions of devices — IP cameras, routers, printers and other smart gadgets from the Internet of Things. They flooded Dyn's site with requests — a claimed 1.2 terabits per second. The estimated damage is about $110 million. However, the criminals responsible did not ask for ransom or make any other demands. In fact, they did nothing but attack, and they left no fingerprints. That said, hacker groups New World Hackers and RedCult have claimed responsibility for the incident. In addition, RedCult promised to follow up with more attacks in the future. Why should the average user care about this stuff? Even if the Dyn incident did not affect you personally, that does not mean you did not take part in it. To create a botnet, criminals need a lot of devices with Internet connections. How many connected devices do you own? A phone, perhaps a smart TV, DVR, and webcam? Maybe a connected thermostat or refrigerator? 
Hacked gadgets serve two masters at the same time: For their owners, they work as usual, but they also attack websites at a criminal's command. Millions of such devices took down Dyn. This gigantic botnet was created with the help of Mirai malware. The malware's action is rather simple: It scans for IoT devices and tries a password on whatever it finds. Usually people do not change their gadgets' default settings and passwords, so the devices are easy to hack — that's how they get conscripted into the zombified armies of Mirai and similar malware. And that means that your connected TV could be part of a botnet, and you'd never know it. A timely reminder: These 60 dumb passwords can hijack over 500,000 IoT devices into the Mirai botnet https://t.co/RgjgRIJFy8 — Graham Cluley (@gcluley) October 24, 2016 In September of this year somebody used Mirai to take down the blog of IT security journalist Brian Krebs, overwhelming the server with requests from 380,000 zombified devices at up to 665 gigabits per second. The provider tried to hold the line but eventually gave up. The blog started working again only after Google intervened to protect it. Soon after that attack, a user going by the pseudonym Anna-senpai published the Mirai source code on an underground forum. Criminals of all stripes grabbed it at once. Since then, the number of Mirai bots has increased constantly; the Dyn attack occurred less than a month later. Implicating the IoT DDoS is a very popular type of attack. And using smart devices in such attacks is appealing for criminals — as we've already mentioned, the Internet of Things is buggy and vulnerable. That is not likely to change anytime soon. Developers of smart gadgets do little to secure their devices and don't explain to users that they should change the passwords on cameras, routers, printers, and other devices. In fact, not all of them even allow users to do so. That makes IoT devices perfect targets. 
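The default-password weakness Mirai exploits can be sketched in a few lines of Python. Everything here is illustrative: the credential list is a tiny made-up sample (the real malware shipped with dozens of pairs), and the "device" is simulated. A check like this should only ever be run against devices you own.

```python
# Hypothetical sketch: does a device still accept factory-default credentials?
# This is the same weakness Mirai-style malware scans the Internet for.

DEFAULT_CREDENTIALS = [        # a tiny illustrative sample, not Mirai's real list
    ("admin", "admin"),
    ("root", "root"),
    ("admin", "1234"),
]

def find_default_credentials(try_login):
    """Return the first default (user, password) pair the device accepts, else None."""
    for user, password in DEFAULT_CREDENTIALS:
        if try_login(user, password):
            return (user, password)
    return None

# Simulated camera whose owner never changed the factory password:
def simulated_camera_login(user, password):
    return (user, password) == ("admin", "1234")

print(find_default_credentials(simulated_camera_login))  # ('admin', '1234')
```

If the function returns a pair, the device is exactly the kind of easy conscript the article describes; changing that password removes it from the pool.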
Today somewhere between 7 and 19 billion devices are connected to the World Wide Web. According to conservative estimates, that figure will reach 30–50 billion in the next five years. Almost certainly, the majority of these devices will not be well protected. In addition, gadgets compromised by Mirai are still active — and new ones join its army of bots every day. What about the longer term? Criminals often use botnets to attack core industrial infrastructure — electrical substations, water utilities, and yes, DNS providers. Security researcher Bruce Schneier observes that somebody is "learning how to take down the Internet" with the help of powerful and continuous DDoS attacks. Botnets are getting bigger, and when those attack-tests are finished, it's not unreasonable to believe a full-scale attack will start. Imagine dozens of simultaneous attacks as powerful as the Dyn incident and you'll understand what damage can be done. Entire countries could lose their Internet. How not to become part of a botnet One person cannot stop botnets from crashing the Internet — but together we can do a lot by not joining a botnet. You can start by making your devices more secure so that Mirai and similar malware can't take control of them. If everyone did that, botnet armies would shrink into insignificance. To stop your printer, router, or refrigerator from plunging the world into Internet darkness, take these simple precautions. 1. Make sure you don't leave default passwords on your devices. Use reliable combinations that cannot be brute-forced easily. 2. Update the firmware on all of your gadgets — especially the older ones — if possible. 3. Be selective in choosing smart devices. Ask yourself: Does this really need an Internet connection? If the answer is "Yes!" then take the time to read about the device options before buying. If you discover that it has hard-coded passwords, choose a different model.
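For precaution 1, a password that resists brute force can be generated with Python's standard `secrets` module, which draws from a cryptographically secure random source. This is one reasonable sketch, not a recommendation from the article itself; the length and alphabet are assumptions you can adjust.

```python
import secrets
import string

def generate_device_password(length=16):
    """Generate a random password from letters and digits using a CSPRNG."""
    alphabet = string.ascii_letters + string.digits
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_device_password())  # random each run, e.g. a 16-character mixed string
```

A 16-character password over 62 symbols gives roughly 95 bits of entropy, far beyond what a Mirai-style dictionary of default credentials could ever guess.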
The Family Educational Rights and Privacy Act of 1974 (FERPA) is a United States federal law that determines how educational information can be accessed. The law gives parents access to their child's education records and more control over how their data can be disclosed. In most cases, the school is required to obtain consent from the parents before disclosing their child's information. FERPA only covers educational institutions that receive funds from the U.S. Department of Education.

The Family Policy Compliance Office (FPCO) investigates complaints of alleged violations of FERPA. If it confirms that a violation has occurred, it will give the institution a reasonable amount of time to make the necessary corrections. Unlike the GDPR, where fines for non-compliance can be as much as €20 million, it's unlikely that failing to comply with FERPA will result in a fine or some form of criminal prosecution. That said, if the non-compliant entity fails to make the necessary corrections, it may have its funding revoked or be subject to an alternative form of disciplinary action, depending on the severity of the risk.

Below are some of the key steps that need to be taken to comply with FERPA.

What Information Is Covered by FERPA?

All information in a student's record will fall into one of two categories: personally identifiable information or directory information.

Personally Identifiable Information (PII)

PII is any data, or set of data, that can be used to identify a student. This might include a student's name, date of birth, place of birth, student ID, social security number, and so on. PII cannot be disclosed without consent from the student's parents or the student, if they are over 18 years old. In addition to obtaining consent, the school must inform the parents about the reasons for disclosure, and who the data will be disclosed to.
Directory Information

Directory information is the type of information that one could typically find in a public directory, such as a phonebook. Given that this information is already publicly available, schools are not required to obtain consent for its disclosure, as it would not violate the privacy of the student. That said, some directory information might still be classified as PII if it were joined with other information to reveal the identity of the student.

Ensure That Parents and "Eligible" Students Know What Their Rights Are Under FERPA

Under FERPA, parents have the right to view and request changes to their child's records, assuming the child is under the age of 18 and not entering post-secondary education. In some special cases, the parents are able to retain these rights after the student turns 18; however, this is usually due to tax reasons.

Schools will need to produce a document that outlines the exact rights that parents and eligible students have when it comes to accessing their data. Parents and eligible students must be allowed to view the policy, should be reminded of their rights annually, and should be notified of any changes. Before disclosing PII, the school must obtain consent from the parents or eligible student, and they must be informed about how to refuse the disclosure of both PII and directory information. Additionally, they will need to be informed about how to report FERPA violations to the supervisory authorities.

It should be noted that there are certain circumstances where schools are legally allowed to disclose records containing PII without obtaining consent from the parents or eligible student. A school may disclose PII to the following recipients without obtaining consent:

- School officials who have a legitimate educational interest in the information.
- Other schools where the student is planning to enrol.
- Representatives of the Comptroller General of the United States, the Attorney General of the United States, the Secretary of the United States Department of Education, or other state or local authorities for purposes of audit or evaluation.
- State or local officials or authorities within a juvenile justice system, as long as the disclosure is made pursuant to a state law.
- Organizations that are conducting studies for educational agencies or institutions in connection with the development or administration of predictive tests or student aid programs, or studies that are intended to improve educational instruction.
- Accrediting organizations for purposes of conducting accreditation procedures.
- The parents of a dependent student as defined by the IRS.
- An organization dealing with a health or safety emergency.
- An organization that is providing financial aid to the student, assuming they have applied for it, and that the disclosed information is relevant to their application.
- The police, assuming a court order has been issued.

For a more detailed explanation of the various exceptions, you can visit the Electronic Privacy Information Center website.

Make Sure That Any Third Parties You Deal with Are Able to Comply with FERPA

Schools will need to carefully screen any vendors or associates to ensure that they are able to satisfy the FERPA compliance requirements. They will need to ask questions about how vendors plan to collect and store student data, and about the access controls they have in place to prevent unauthorised access. Any vendors who offer to analyse student data for free should be avoided, as it is likely that they are seeking to make money from the data in some way. Schools will need to establish legally binding agreements with third parties to ensure that they are aware of their responsibilities when it comes to protecting student information and complying with FERPA.
Implement Policies & Procedures in Line with FERPA Compliance

Schools will need to establish clear policies and procedures to make it easier for staff members to comply. These might include acceptable use policies, procedures for disposing of old/redundant records, and an Incident Response Plan (IRP) to help limit the amount of damage caused by a data breach.

Keep Your Staff Informed of Their Responsibilities Under FERPA

Schools should carry out regular training to ensure that all staff members are aware of their responsibilities when it comes to protecting the privacy of student information, especially PII. Training sessions should be held at least once a year and should include information about the policies and procedures mentioned above. Staff will also need to be reminded about the many exceptions (also listed above), which can be easy to forget.

Many FERPA violations boil down to a lack of training. For example, if a member of staff witnesses a fight on the playground, they are free to speak about the incident with whomever they choose. However, if they did not witness the incident directly, but instead learned of it via a formal document, they will not be allowed to disclose the details of the document without authorisation. As such, staff members must be trained to ensure that they know what information they can share, and with whom.

Staff members should have at least a basic understanding of data security best practices. For example, accidentally sending data to the wrong recipient will likely result in a violation of FERPA, yet this is a common mistake that people make. It's also worth bearing in mind that parents (and students) also make mistakes. For example, a parent might send a request for access to the wrong department. If the recipient doesn't know what the request is about, they might choose to ignore it, which could result in a violation of FERPA.
Encrypt FERPA-Covered Data at Rest and in Transit

The SANS Institute estimates that on average only 54% of higher educational institutions encrypt PII in transit, and a mere 48% encrypt PII at rest. While the use of encryption isn't strictly necessary to comply with FERPA, it is still one of the simplest, cheapest and most effective ways to safeguard confidential data. If a device containing sensitive data were to be lost, stolen or hacked in some way, the encrypted files would be unreadable to anyone who doesn't have the decryption key. Keeping track of encryption keys across an entire district can be a challenge, and each time any on-boarding/off-boarding takes place, the keys need to be reviewed and reshuffled to ensure that ex-employees are not able to access the encrypted information.

Implement a Comprehensive Data Loss Prevention Strategy for FERPA Breaches

Data Discovery and Classification

Regardless of which data privacy laws we are required to comply with, one of the first things we need to know is exactly what data we have, how sensitive the data is, and where it is located. It's likely that most educational institutions will store large amounts of unstructured data, and while it may be possible to manually sift through this data and classify it accordingly, it would be more efficient to use a solution which can discover and classify the data automatically.

User Behaviour Analytics (UBA) with Real-Time Alerting

UBA is about knowing who, what, where and when changes are being made to your sensitive data. UBA solutions use Machine Learning (ML) to learn the typical patterns of behaviour for each user, and send an alert, in real time, when a user interacts with sensitive data in a manner that is not typical for that user. For FERPA, organizations can use UBA to analyze user interactions with student data to ensure that unauthorized access or suspicious behavior isn't taking place.
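The discovery-and-classification step above can be illustrated with a toy scan. This is a pattern-match sketch, not how Lepide or any particular product works; the two regular expressions are illustrative (the student-ID format is an invented example), and real classifiers use many more rules plus contextual analysis.

```python
import re

# Toy data discovery: flag documents containing PII-shaped strings.
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),   # SSN-shaped: 123-45-6789
    "student_id": re.compile(r"\bS\d{7}\b"),        # hypothetical ID format
}

def classify(text: str) -> set:
    """Return the set of PII categories detected in a piece of text."""
    return {name for name, pattern in PATTERNS.items() if pattern.search(text)}

documents = {
    "newsletter.txt": "Spring concert is on May 3rd.",
    "roster.csv": "Jane Doe, S1234567, SSN 987-65-4321",
}
for name, text in documents.items():
    print(name, classify(text) or "no PII found")
```

Running a scan like this across file shares gives a first-pass inventory of where PII lives, which is the prerequisite for every later step (access governance, encryption, alerting).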
If such behavior is taking place, being able to receive real-time alerts will ensure that you can react quickly to prevent a potential FERPA breach. Given that schools are using increasingly more cloud-based services, you will need a UBA solution that can aggregate and correlate event data from multiple cloud platforms.

Detect Unencrypted PII Leaving the Network

Data Loss Prevention refers to a number of related strategies and solutions, with UBA being one of them. However, there are some DLP solutions, such as an Intrusion Prevention System (IPS) or a Next-Generation Firewall (NGFW), that can automatically detect and block unencrypted PII leaving the network. Administrators will receive real-time alerts about suspicious network traffic, which will enable them to conduct an investigation into the issue.

In schools, where there are many staff coming and going, and where students have a tendency to engage in disruptive behaviour, we mustn't ignore the importance of implementing strong physical security measures. Such measures include ID badges, locks, alarms, and CCTV cameras. Access to the server rooms must be controlled, and any network-enabled devices, such as printers, will need to be secured. Staff members should use automatic screen locking, and, ideally, the public/guest Wi-Fi network should be isolated from the network and devices that are used internally for processing student data.

How Lepide Helps You Become FERPA Compliant

The first step in achieving FERPA compliance is identifying where all of your PII and directory information resides in your data stores. This cannot be a one-off exercise either; classification must be an ongoing process as new data is generated. The Lepide Data Security Platform enables you to discover and classify your sensitive data by risk, type and relevant compliance requirements. The next step is to ensure that you minimize the risk of breaches involving this sensitive data.
Lepide can help you determine which of your users have access to FERPA-covered data and can even suggest which of these users do not require privileged access through their excessive permissions report. This kind of visibility can help you reduce your potential attack surface by implementing a policy of least privilege. Once you have minimized risk through governing access, you can then use the platform to proactively monitor the behavior of users accessing, modifying, moving, copying or doing anything with your sensitive data. Lepide can learn what normal behavior looks like and proactively alert administrators when users’ behavior deviates from this norm. If you would like to see the solution in action, schedule a demo with one of our engineers today.
During the last five years, the term "blockchain" has been one of the most prominent IT buzzwords. This was largely due to the emergence and rise of blockbuster cryptocurrencies such as Bitcoin and Ethereum. Nowadays, at the dawn of 2020, there is much less hype regarding the potential of blockchains and cryptocurrencies to completely disrupt the global banking and finance ecosystem. Cryptocurrencies are gradually finding their place in the digital finance landscape, while the trading of Bitcoins and Ethers is no longer the most trending topic.

Nevertheless, blockchain technology has already left a significant footprint on the IT ecosystem, as it is already being considered in a variety of applications across industrial sectors such as healthcare, real estate and industry. In most cases, these applications exploit blockchain technology as a distributed shared database (i.e., a distributed ledger of transactions) that provides secure data storage and anti-tampering capabilities.

There are certain conditions that drive organizations to make use of blockchain technology as a preferred alternative to centralized databases. Specifically, enterprises tend to consider using blockchains when the following conditions are met:

In such cases, state-of-the-art centralized databases fall short, as they cannot provide the trust and reliability required. In particular, the absence of a trusted third party makes it impossible to ensure the consistency and integrity of the shared data in a robust way. On the other hand, a blockchain infrastructure can provide decentralized trust, as new transactions must be accepted by most of the participants as part of distributed consensus mechanisms. Based on this rationale, various innovative enterprises (including many high-tech startups) have created blockchain systems.
Prominent examples can be found in the following areas:

Despite the benefits of blockchain technology in the above-listed applications, the use of blockchains for enterprise applications is still in its infancy. One of the main reasons for this is that blockchain performance lags behind conventional centralized databases. The latter are optimized, robust and able to accommodate many thousands of transactions per second. On the contrary, Bitcoin transactions are concluded on a timescale of minutes, given that the final validation of a new block of transactions takes place following computationally expensive consensus mechanisms. This makes the Bitcoin blockchain and other public blockchains inappropriate for enterprise applications.

In order to alleviate these performance limitations, the blockchain community has introduced a modified version of blockchain infrastructures, namely permissioned blockchains. The latter are a sort of private blockchain that requires participants to be authenticated prior to joining, which limits participation and improves performance. Most importantly, permissioned blockchains do not require participants to solve computationally complex (i.e., Proof-of-Work (PoW)) problems prior to creating new blocks in the distributed ledger. Rather, they can use the blockchain as a shared ledger, which increases its performance in terms of the number of transactions that can be accommodated per second. Benchmarks on state-of-the-art permissioned blockchains report performance of many hundreds or thousands of transactions per second, which represents a significant improvement over public blockchains. This makes permissioned blockchains more suitable for enterprise applications, yet their performance is still far from that provided by centralized databases.
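Both properties discussed above — anti-tampering through hash linking, and the computational cost of Proof-of-Work — can be seen in a miniature sketch. The code below is a deliberately tiny, stdlib-only illustration (a real blockchain adds networking, consensus among peers, signatures, and far higher PoW difficulty):

```python
import hashlib
import json

def block_hash(block):
    # Hash the block's contents deterministically.
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def mine(prev_hash, data, difficulty=2):
    """Find a nonce so the block hash starts with `difficulty` zeros (toy PoW)."""
    nonce = 0
    while True:
        block = {"prev": prev_hash, "data": data, "nonce": nonce}
        h = block_hash(block)
        if h.startswith("0" * difficulty):
            return block, h
        nonce += 1

# Build a three-block toy chain from a zero genesis hash.
chain, prev = [], "0" * 64
for tx in ["alice->bob:5", "bob->carol:2", "carol->dave:1"]:
    block, prev = mine(prev, tx)
    chain.append(block)

def valid(chain):
    """Recompute every link; editing an earlier block breaks later `prev` fields."""
    prev = "0" * 64
    for block in chain:
        if block["prev"] != prev or not block_hash(block).startswith("00"):
            return False
        prev = block_hash(block)
    return True

print(valid(chain))                    # True
chain[0]["data"] = "alice->bob:500"    # tamper with history
print(valid(chain))                    # False: every later link now mismatches
```

Raising `difficulty` makes mining exponentially slower while leaving verification cheap, which is exactly the asymmetry that slows public blockchains to minutes per block and that permissioned blockchains avoid by dropping PoW.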
Furthermore, permissioned blockchains are usually criticized for missing the decentralization benefits of blockchain technology, as they must be operated by an organization or a consortium of organizations that acts as a trusted third party. Despite this criticism, private blockchains seem to be more appropriate for industrial applications and are gradually gaining momentum.

By and large, following a period of hype, blockchain technology is finding its position in the rapidly evolving IT ecosystem. Several pilot implementations and products have already demonstrated the benefits of blockchain infrastructures for certain applications. Moreover, they have proven that blockchain applications can be operated at scale. Nevertheless, there is still a lack of large-scale applications beyond the popular cryptocurrencies, which raises concerns about the scalability of the technology in enterprise contexts.

Our prediction is that blockchain will certainly be used in commercial applications in cases where decentralized trust provides merit to the participants. Moreover, it's likely to support data marketplace applications, where data can be traded as soon as data owners provide their consent to potential buyers. However, we also believe that one should remain skeptical and conservative about the disruptive power of blockchain technology and its ability to completely replace existing centralized models and platforms for on-line trust. One way or another, it's important to keep an eye on this technology and its adoption rate in the years to come.
According to Juniper Research, 206 million vehicles will have embedded connectivity by 2025 — with 30 million vehicles utilizing 5G connectivity. The connected car now contains communication units, in-vehicle voice assistants, geolocation sensors and cloud platforms that connect vehicles to mobility services. To ensure that these hyper-connected vehicles remain secure, a standard known as ISO/SAE 21434 was developed. This standard is designed to guide automotive product developers and OEMs in following effective cybersecurity strategies and measures for connected vehicles. The status of ISO/SAE 21434 is currently 'under development', but it's trending towards acceptance, which means it will be a part of compliance requirements in the near future.

An ISO/SAE 21434 Summary

ISO/SAE 21434 is a standard co-developed by the International Organization for Standardization (ISO) and the Society of Automotive Engineers (SAE). ISO/SAE 21434, "Road vehicles — Cybersecurity engineering", focuses on cybersecurity risks in the design and development of car electronics. The standard covers cybersecurity governance and structure, secure engineering throughout the life cycle of the vehicle, and post-production security processes.

What is ISO in Cybersecurity?

ISO is a worldwide standards body composed of national standards organizations; its technical committees develop standards in fields including cybersecurity engineering. Members include governmental and non-governmental organizations that sit on international regulatory committees. ISO works closely with the International Electrotechnical Commission (IEC) on everything that involves electrotechnical standardization.

How Cybersecurity Automotive Standards Started

The precursor to ISO/SAE 21434 is ISO 26262, "Road vehicles – Functional safety". That standard does not cover software development or car sub-systems, nor does it cover how to deal with cybersecurity incidents. ISO/SAE 21434 covers every aspect of cybersecurity — from initial design to end-of-life decommissioning of a vehicle.
The supply chain is also included, to cover each step in automotive production. All phases of a connected vehicle's lifecycle covering electrical and electronic systems, including their components and interfaces, are covered in ISO/SAE 21434, including:

- Design and engineering
- Operation by customer
- Maintenance and service

This lifecycle approach to cybersecurity management makes ISO/SAE 21434 one of the most comprehensive approaches to connected vehicle cybersecurity.

Impact of Automotive Cybersecurity ISO Standards for OEMs and Developers

Although the standard is still in development, any manufacturer, developer, or OEM should consider proactively integrating ISO/SAE 21434 into their current production process. The primary concern with the new standard revolves around cybersecurity. The standard focuses on providing better safety to automotive consumers by regulating the way manufacturers test their products.

ISO/SAE 21434 requires that manufacturers and developers perform a risk assessment. Before you can identify risk, you need to know what causes it. An assessment will identify any component, API, or software function that could be vulnerable to attack. With the assessment done, you then identify vulnerabilities. Black-box fuzzing scans the system to find potential vulnerabilities in the same way an attacker would scan your system. Using the right fuzzing tools, you can ensure that development is done with security as a priority.

The impact on automotive developers and manufacturers is that they gain the benefit of producing applications and components that are tested before being launched, which benefits drivers and their safety. Fuzzing applications and finding vulnerabilities before they cause harm to drivers safeguards them and your organization's reputation.

Is ISO 21434 Released?

As of August 31, 2021, ISO/SAE 21434 has been released.
This release is referred to as ISO/SAE 21434:2021 Road Vehicles – Cybersecurity Engineering and replaces the previous drafts from February 2020. There are no serious changes from the previous version. The standard mandates:

- Scanning and creating risk assessments
- Recognizing cybersecurity vulnerabilities
- Ensuring safeguards are added to development to find and correct any vulnerabilities
- Continuously testing applications, software, and hardware to ensure risks have been mitigated

Why is ISO/SAE 21434 necessary for the automotive industry?

The automotive industry saw a 605% increase in cybersecurity incidents in connected cars between 2016 and 2019. The increase is surprisingly high, but threat actors targeting automotive computers are a relatively new phenomenon. Not only have more exploits been introduced in recent years, but the consequences of some successful attacks threaten the lives of drivers. Now that the industry has a framework on which to base its cybersecurity, testing for vulnerabilities during the vehicle's lifecycle will be normalized. Standards also work together with other frameworks: in the case of ISO/SAE 21434, NIST SP 800-30 and ISO/IEC 31010 can be used to establish a foundation of risk assessment using tried and tested methodologies.

Improving your cybersecurity with testing

Good cybersecurity practices involve being proactive, and automotive developers and manufacturers can be proactive by integrating testing into their development lifecycle. Fuzzing an automotive computer is somewhat similar to fuzzing a standard computer. The fuzzer launches tests against the automotive computer's functionality, attempting to trigger a vulnerability and exploit it. It's done in much the same way an attacker launches an exploit, except that testing performed as the product is developed can be used to improve cybersecurity proactively, rather than reactively patching a system through recalls.
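The fuzzing loop described above can be sketched in a few lines. The example below is an illustrative toy, not beSTORM or any real automotive tool: it mutates bytes of a known-good frame and records any input that crashes a hypothetical parser (the frame format and parser are invented for the example). A handled rejection is expected behavior; an unhandled exception is the kind of finding a fuzzer reports.

```python
import random

def parse_speed_frame(frame: bytes) -> int:
    """Hypothetical target: parse a 4-byte frame [0x02, len, value, checksum]."""
    if frame[0] != 0x02:
        raise ValueError("bad header")
    length = frame[1]
    value = frame[2:2 + length]                  # oversized length slices short...
    if sum(value) % 256 != frame[2 + length]:    # ...then indexing here crashes
        raise ValueError("bad checksum")
    return int.from_bytes(value, "big")

def mutate(seed: bytes, rng: random.Random) -> bytes:
    """Flip one random byte of a known-good input."""
    data = bytearray(seed)
    data[rng.randrange(len(data))] = rng.randrange(256)
    return bytes(data)

def fuzz(seed: bytes, rounds: int = 500, rng_seed: int = 0):
    rng = random.Random(rng_seed)    # fixed seed: reproducible campaign
    crashes = []
    for _ in range(rounds):
        case = mutate(seed, rng)
        try:
            parse_speed_frame(case)
        except ValueError:
            pass                     # handled rejection: not a bug
        except IndexError:
            crashes.append(case)     # unhandled crash: a real finding
    return crashes

good = bytes([0x02, 0x01, 0x2A, 0x2A])  # header, len=1, value=42, checksum
print(len(fuzz(good)) > 0)               # True: mutated lengths trigger IndexError
```

Real fuzzers add coverage feedback, smarter mutation strategies, and crash triage, but the find-inputs-that-break-the-parser core is the same.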
Imagine a driver whose connected car is targeted by an attacker fuzzing the system for a buffer overflow. Specially crafted data is sent to the engine, which runs on feedback from various components in the car. A buffer overflow has the potential to shut down the engine. It would be frightening for a driver to experience an engine shutdown on the freeway, and this type of scenario is exactly what ISO/SAE 21434 tries to stop. By fuzzing an automotive computer during the development lifecycle, the manufacturer can avoid putting drivers in dangerous situations caused by a lack of cybersecurity testing.

The Ramifications of Not Implementing ISO/SAE 21434 Standards

Since ISO/SAE 21434 focuses primarily on the security of electronic automotive device connectivity, the biggest penalty for a company would be an actual security breach. Any company whose vehicles were cyberattacked could see harm come to its customers and the general public. That company would instantly lose credibility with the public and face potential compliance fines depending on the country, the type of successful cyberattack, and the jurisdiction, since ISO/SAE 21434 is a global standard.

The Future of ISO/SAE 21434

The automotive industry is at an important juncture in its history. The connected car is offering drivers an exciting new era in car ownership, but this expanded capability introduces cybersecurity risks that could threaten the safety of drivers. The ISO/SAE 21434 standard was introduced by automotive stakeholders to address the security issues that connectivity brings. The standard provides a framework for hardened security to build safer vehicles using better fuzzing and testing methodologies.

Need to get ISO/SAE 21434 compliant? Learn more about Black Box Fuzzing with beSTORM and how it can be used as an automotive security testing tool.
Ruben Howard says that sometimes, educating students in a classroom isn't enough. Some students, he says, need more hands-on teaching methods to learn, alongside a taste of real-world experience.

"The goal of the collaboration is to expose students majoring in logistics management to the various conveyors, sortation equipment, warehouse control systems, robotics and the automatic retrieval systems used in the industry," says Howard, Dean of Transportation, Distribution & Logistics at Olive-Harvey College. "That's the basic overview. When students are learning theory in class, a lot of them may not be able to get a hands-on component. What does it actually look and feel like?"

The living lab, which is one of Wynright's tech centers, provides students with a real distribution center look and feel. Howard says the living lab also helps students gain a better understanding of how certain equipment makes a warehouse efficient, how distribution centers are being revolutionized, and how companies manage order fulfillment.

"The living lab allows students to take the theory that they learned inside the classroom and apply it to real world applications," he says. "It's one thing to learn it inside the classroom, but in our field, it's just a theoretical knowledge base. Students may not get an opportunity to see it and feel it in a real world business setting."

Howard says one of the best parts of working with Wynright is that students can observe and learn how distribution centers operate through equipment simulations without disrupting the company's workflow. "When you go on a tour of [another distribution] facility, they can't start and stop the conveyor belt to show students what technology is used," he says. "They can explain it, but they can't stop it, or use it as an example. What makes Wynright unique is the technology center allows them to do simulation and see a particular product of theory in place. That's the advantage."
Cryptography Basics for the Aspiring Hacker

As hackers, we are often faced with the hurdle of cryptography and encryption. Every cyber security engineer worth their pocket protector understands that encryption makes the hacker/attacker's task much more difficult. In some cases, though, it may be useful to the hacker to hide actions and messages. Many applications and protocols use encryption to maintain confidentiality and integrity of data. To be able to crack passwords and encrypted protocols such as SSL and wireless, you need to have at least a basic familiarity with the concepts and terminology of cryptography and encryption.

To many new hackers, all the concepts and terminology of cryptography can be a bit overwhelming and opaque. To start, cryptography is the science and art of hiding messages so that they are confidential, then "unhiding" them so that only the intended recipient can read them. Basically, we can say that cryptography is the science of secret messaging.

With this brief overview for the newcomer, I hope to lift the fog that shrouds this subject and shed a tiny bit of light on cryptography. I intend this simply to be a quick and cursory overview of cryptography for the novice hacker, not a treatise on the algorithms and mathematics of encryption. I'll try to familiarize you with the basic terminology and concepts so that when you read about hashing, wireless cracking, or password cracking and the encryption technologies are mentioned, you have some grasp of what is being addressed. Don't get me wrong, I don't intend to make you a cryptographer here (that would take years), but simply to help familiarize the beginner with the terms and concepts of cryptography so as to help you become a credible hacker. I will attempt to use as much plain English to describe these technologies as possible, but like everything in IT, there is a very specialized language for cryptography and encryption.
Terms like cipher, plaintext, ciphertext, keyspace, block size, and collisions can make studying cryptography a bit confusing and overwhelming to the beginner. I will use the term "collision," as there really is no other word in plain English that can replace it. Let's get started by breaking encryption into several categories.

Types of Cryptography

There are several ways to categorize encryption, but for our purposes here, I have broken them down into four main areas (I'm sure cryptographers will disagree with this classification system, but so be it).

A Word About Key Size

In the world of cryptography, size does matter! In general, the larger the key, the more secure the encryption. This means that AES with a 256-bit key is stronger than AES with a 128-bit key and likely will be more difficult to crack. Within the same encryption algorithm, the larger the key, the stronger the encryption. That does not necessarily mean that a larger key implies stronger encryption when comparing different algorithms; between algorithms, the strength of the encryption depends on both the particulars of the algorithm AND the key size.

Symmetric Cryptography

Symmetric cryptography is where we have the same key at the sender and receiver. It is the most common form of cryptography. You have a password or "key" that encrypts a message and I have the same password to decrypt the message. Anyone else can't read our message or data. Symmetric cryptography is very fast, so it is well-suited for bulk storage or streaming applications. The drawback to symmetric cryptography is what is called the key exchange. If both ends need the same key, they need to use a third channel to exchange the key, and therein lies the weakness. If two people who want to encrypt their communication are 12,000 miles apart, how do they exchange the key? This key exchange is then fraught with all the problems of the confidentiality of the medium they choose, whether it be telephone, mail, email, face-to-face, etc.
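To make "same key at both ends" concrete, here is a toy symmetric cipher: it XORs the message with a keystream derived from the key by chained SHA-256. This construction is for illustration only and must not be used for real secrecy; real systems use a vetted algorithm such as AES. The point it demonstrates is the defining symmetric property: applying the same key a second time decrypts.

```python
import hashlib

def keystream(key: bytes, length: int) -> bytes:
    """Derive a pseudorandom keystream from the key by chained SHA-256 (toy only)."""
    out, block = b"", key
    while len(out) < length:
        block = hashlib.sha256(block).digest()
        out += block
    return out[:length]

def xor_cipher(key: bytes, data: bytes) -> bytes:
    """XOR data with the keystream; running it twice with the same key decrypts."""
    ks = keystream(key, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))

key = b"shared secret"                       # the same key must exist at both ends
ciphertext = xor_cipher(key, b"attack at dawn")
print(ciphertext != b"attack at dawn")       # True: unreadable without the key
print(xor_cipher(key, ciphertext))           # b'attack at dawn'
```

Note that both parties must already hold `key` before any message is sent, which is exactly the key exchange problem discussed here.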
The key exchange can be intercepted, rendering the confidentiality of the encryption moot. Some of the common symmetric algorithms that you should be familiar with are:

DES - One of the original and oldest encryption schemes, developed by IBM. It was found to be flawed and breakable, and it was used in the original LANMAN hashing system of early (pre-2000) Windows systems.

3DES - Developed in response to the flaws in DES. 3DES applies the DES algorithm three times (hence the name "triple DES"), making it somewhat more secure than DES.

AES - The Advanced Encryption Standard is not an encryption algorithm per se but a standard developed by the National Institute of Standards and Technology (NIST). Presently considered the strongest encryption, it uses a 128-, 192-, or 256-bit key and has been implemented with the Rijndael algorithm since 2001. It's used in WPA2, SSL/TLS, and many other protocols where confidentiality and speed are important.

RC4 - A streaming cipher (it encrypts each bit or byte rather than a block of information) developed by Ronald Rivest of RSA fame. Used in VoIP and WEP.

Blowfish - The first of Bruce Schneier's encryption algorithms. It uses a variable key length and is very secure. It is not patented, so anyone can use it without a license.

Twofish - A stronger version of Blowfish using a 128- or 256-bit key; it was a strong contender for AES. Used in Cryptcat and OpenPGP, among other places. It is also in the public domain, without a patent.

Asymmetric cryptography uses different keys on the two ends of the communication channel. It is very slow, about 1,000 times slower than symmetric cryptography, so we don't want to use it for bulk encryption or streaming communication. It does, however, solve the key exchange problem: since we don't need the same key on both ends of a communication, we don't have the issue of key exchange.
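The key-exchange advantage just described is easiest to see with a toy sketch. The numbers below are deliberately tiny and offer no security whatsoever; real deployments use standardized 2048-bit-plus prime groups.

```python
import secrets

# Toy Diffie-Hellman-style exchange. P and G are public; only the tiny
# size is an illustrative assumption -- real groups are far larger.
P = 0xFFFFFFFB  # a small public prime (2**32 - 5)
G = 5           # public generator

def make_keypair():
    private = secrets.randbelow(P - 2) + 2   # secret exponent, never sent
    public = pow(G, private, P)              # g^a mod p, safe to publish
    return private, public

# Alice and Bob each transmit only their public value...
a_priv, a_pub = make_keypair()
b_priv, b_pub = make_keypair()

# ...yet both derive the same shared secret: (g^b)^a = (g^a)^b mod p.
alice_secret = pow(b_pub, a_priv, P)
bob_secret = pow(a_pub, b_priv, P)
assert alice_secret == bob_secret
```

An eavesdropper sees only P, G, and the two public values; recovering the shared secret from those is the discrete logarithm problem, which is what makes the scheme work.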
Asymmetric cryptography is used primarily when two entities unknown to each other want to exchange a small bit of information, such as a key or identifying information like a certificate. It is not used for bulk or streaming encryption due to its speed limitations. Some of the common asymmetric encryption schemes you should be familiar with are:

Diffie-Hellman - Many people in the field of cryptography regard the Diffie-Hellman key exchange as the greatest development in cryptography (I would have to agree). Without going deep into the mathematics, Diffie and Hellman developed a way for two parties to generate a shared key without having to exchange it, thereby solving the key exchange problem that plagues symmetric encryption.

RSA - Rivest, Shamir, and Adleman's scheme of asymmetric encryption, which relies on the difficulty of factoring the product of two very large prime numbers as the relationship between the two keys.

PKI - Public key infrastructure, the widely used asymmetric system for exchanging confidential information using a private key and a public key.

ECC - Elliptic curve cryptography is becoming increasingly popular in mobile computing, as it is efficient, requiring less computing power and energy consumption for the same level of security. ECC relies upon the shared relationship of two points on the same elliptic curve.

PGP - Pretty Good Privacy uses asymmetric encryption to assure the privacy and integrity of email messages.

Hashes are one-way encryption. A message or password is encrypted in a way that cannot be reversed or unencrypted. You might wonder, "What good would it do us to have something encrypted and then not be able to decrypt it?" Good question! When a message is hashed, it produces a "hash" that becomes a unique but indecipherable signature for the underlying message. Each and every message produces a unique hash. Usually, these hashes are a fixed length (an MD5 hash is always 32 characters).
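The fixed-length property just mentioned is easy to see with Python's built-in hashlib module:

```python
import hashlib

# No matter how long the input, the digest length is fixed:
# MD5 is always 32 hex characters, SHA-1 always 40.
for text in (b"secret", b"a much, much longer message than the first one"):
    md5 = hashlib.md5(text).hexdigest()
    sha1 = hashlib.sha1(text).hexdigest()
    print(len(md5), len(sha1))   # 32 40

# Password checking compares hashes, never plaintexts:
stored = hashlib.md5(b"P@ssw0rd").hexdigest()    # saved at registration
attempt = hashlib.md5(b"P@ssw0rd").hexdigest()   # computed at login
print(attempt == stored)  # True
```

The same trick underlies file integrity checks: a site publishes the hash of a download, and you recompute it locally to verify that nothing changed in transit.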
That way, an attacker cannot glean any information about the underlying message from the length of the hash. And because of this, we don't need to know the original message; we simply need to check whether some text produces the same hash to verify its integrity (that it is unchanged). This is why hashes can be used to store passwords. The passwords are stored as hashes, and when someone tries to log in, the system hashes the submitted password and checks whether the hash generated matches the hash that has been stored. In addition, hashes are useful for integrity checking, for instance, with file downloads or system files.

In the world of encryption and hashing, a "collision" is where two different input texts produce the same hash. In other words, the hash is not unique. This can be an issue when we assume that all hashes are unique, such as in certificate exchanges in SSL. An MD5 collision of this kind was exploited in the Flame malware (widely attributed to the NSA and related to Stuxnet) to forge what appeared to be a legitimate Microsoft certificate. Hash algorithms that produce collisions, as you might guess, are flawed and insecure.

These are the hashes you should be familiar with:

MD4 - An early hash by Ron Rivest that has largely fallen out of use due to collisions.

MD5 - The most widely used hashing system. It's 128-bit and produces a 32-character message digest.

SHA1 - Developed by the NSA, it is more secure than MD5 but not as widely used. It has a 160-bit digest, usually rendered as 40 hexadecimal characters. It was often used for certificate exchanges in SSL but, because of recently discovered flaws, is being deprecated for that purpose.

Wireless cryptography has been a favorite of my readers, as so many here are trying to crack wireless access points. As you might guess, wireless cryptography is symmetric (for speed), and as with all symmetric cryptography, key exchange is critical.

WEP - The original encryption scheme for wireless, quickly discovered to be flawed.
It used RC4, but because of its small (24-bit) initialization vector, the IV repeated roughly every 5,000 packets, enabling easy cracking on a busy network using statistical attacks.

WPA - A quick fix for the flaws of WEP, adding a larger key and TKIP to make it somewhat more difficult to crack.

WPA2-PSK - The first of the more secure wireless encryption schemes. It uses a pre-shared key (PSK) and AES, and it salts the hashes with the AP name, or SSID. The hash is exchanged at authentication in a four-way handshake between the client and the AP.

WPA2-Enterprise - The most secure wireless encryption. It uses a 128-bit key, AES, and a remote authentication server (RADIUS).

I hope you keep coming back, my rookie hackers, as we continue to explore the wonderful world of information security and hacking!
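One closing example before we move on: the WPA2-PSK "salting with the SSID" described above is, concretely, PBKDF2 with HMAC-SHA1 over 4096 rounds, which is how the passphrase becomes the 256-bit Pairwise Master Key. The passphrase and SSID below are made up for illustration.

```python
import hashlib

def wpa2_pmk(passphrase: str, ssid: str) -> bytes:
    # WPA2-PSK derives the 256-bit Pairwise Master Key from the
    # passphrase, salted with the SSID, via 4096 rounds of
    # PBKDF2-HMAC-SHA1 (per IEEE 802.11i).
    return hashlib.pbkdf2_hmac(
        "sha1", passphrase.encode(), ssid.encode(), 4096, 32
    )

pmk = wpa2_pmk("correct horse battery", "CoffeeShopWiFi")
print(len(pmk))  # 32 bytes = 256 bits
# Same passphrase, different SSID -> different key (the "salt" at work):
print(wpa2_pmk("correct horse battery", "HomeWiFi") != pmk)  # True
```

This salting is exactly why precomputed rainbow tables for WPA2 only work against common SSIDs: the table has to be rebuilt per network name.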
There are many long explanations of heap vs. stack memory. This is a short one. The Stack is the temporary memory where variables are stored while a function is executing. When the function finishes, that memory is cleared automatically. The Heap is memory that the programmer can use for the application in a more manual way: you have to allocate memory, use it, and then free it afterwards, all by hand. This mostly pertains to C and C++. There are some differences in more modern programming languages, but in the world of information security the distinction is still pertinent because of the lasting prevalence of buffer overflows in C-based languages. CREATED: DECEMBER 2016
There are several types of testing in the IT market, each meant to address different aspects of security, operations and compliance. Penetration testing is a practice that can often span many of these aspects in meaningful ways, by providing security and system awareness across almost any facet of your organization's technical operations. Here, we'll start with an intro to the concept of penetration testing. In the near future, we will dig into the details of penetration testing for compliance, but here we will introduce some of the basics of what penetration testing is and why it is important.

Why is Penetration Testing Important for Compliance and Cybersecurity?

One of the primary concerns for both compliance regulations and cybersecurity hygiene is understanding the potential vulnerabilities that could lead to security breaches. Generally, vulnerabilities fall under three general categories:

- Technical: The bread and butter of security, technical infrastructure vulnerabilities stem from digital technology itself. Vulnerabilities can pop up nearly anywhere: poor API security, misconfigured network security, poor application authentication, insufficient Identity and Access Management (IAM), or any combination of the above. Any place where a hacker can take advantage of technical weaknesses to break into a system falls into this category.
- Physical: While we live in a largely digital world, our digital tools and environments are built on physical systems like data centers, local computers, routers and so on. Hackers with the right access can outflank technical security simply by going to the source and breaking into a system there. This can include technological breaches like unauthorized access to server rooms or workstations, or even social engineering practices like dumpster diving for passwords or other information.
- Administrative: Alongside physical systems, our IT infrastructure is run by, operated on and used by people.
Accordingly, these people can prove to be a weak spot in digital defenses. Phishing attacks focus primarily on fooling people into giving up credentials. Likewise, poor security practices that continue without correction or training leave systems vulnerable through ignorance of good cyber hygiene. Penetration testing is the process of finding weaknesses in any and all of these categories and exploiting them to demonstrate their existence. Unlike more theoretical or automated assessments of vulnerabilities as they exist in a system, a penetration test leverages all the potential attack surfaces and modern security threats available to expose vulnerabilities and suggest remediation. This doesn't mean that the organization undergoing the pen test is being hacked; instead, the penetration testers will go only so far as to prove how deep into a system they were able to get, and then report their results. That being said, the party providing the testing can vary. There are five different types of penetration testing:

- External Testing: As the name suggests, a person or organization performs tests on external, public-facing systems to determine vulnerabilities. This can include a security firm testing from a remote office or a white-hat hacker launching attacks in order to locate and report security gaps.
- Internal Testing: Much like external testing, internal tests will usually (but not always) involve a third-party tester, here with access to internal systems. Unlike external testing, an internal test can help test for insider threats or other gaps.
- Open-Box Testing: A testing scenario where the hacker has some level of information about the systems they are testing.
- Closed-Box Testing: A testing scenario where the hacker has little or no information about the system they are testing.
- Covert Testing: A testing approach where the security firm or hacker tests systems without anyone in the organization knowing.
Each approach to pen testing can provide some knowledge as to what your vulnerabilities are. A hacker performing a closed-box test, for example, could model the experience of a hacker in the wild attempting to probe system weaknesses. Likewise, a covert test would give the testers a more authentic view of the day-to-day activities of employees and your IT and security team. Depending on your industry and regulations, penetration testing could be a smaller or larger part of your organization's cyber hygiene. Many regulations require penetration testing at some stage of authorization or certification, but many organizations elect to undergo penetration testing on their own just to best understand their security weaknesses.

What are the 5 Stages of Penetration Testing?

When you work with a company providing penetration testing, they will often follow a standard 5-stage process:

- Planning and Reconnaissance: At this stage, you work with the company to plan the test. Even in tests where hackers or your IT team don't know much about the systems in question, business, technical and compliance leadership must have a plan in place to measure success and failure for the test. These conditions would necessarily include organization-specific goals, compliance requirements and other factors.
- Scanning: Here, the hackers/testers will begin to ascertain the contours of your system. They will probe system resources, including application code, network systems and other areas, to determine how your organization reacts to different types of threats, potential or otherwise.
- Access: A full-out attack, at least in terms of gaining access to your system, usually through approaches that many of us are familiar with: SQL injections, cross-site scripting, and even social approaches like phishing or installing backdoors through third-party software.
- Maintenance: Once access is gained, the testers will attempt to maintain their access and expand control throughout the system. Persistent presence is one of the most damaging aspects of a breach, because an unknown threat can steal information and infect other systems, often undetected, for months.
- Analysis: After the predetermined time frame, the testers provide a complete analysis of their findings, their successes and failures, and places where remediation is called for.

These tests, depending on their scope, can become quite complex. This is particularly true when considering the overlap of potential attack surfaces. A 100% technical assault might determine weaknesses in an IT infrastructure but miss a poorly secured email system that doesn't alert employees and thus opens the door for phishing attacks.

Automated and Expert Penetration Testing with Continuum GRC

Penetration testing isn't just a security test: it is a complete understanding of the gaps in your security and compliance strategies and infrastructure. While some practices and tests are integral to compliance, penetration tests are the cornerstone of many frameworks and strong cybersecurity audits. More likely than not, if you operate IT services in any capacity, you will want to undergo penetration testing. Learn how compliance, automation and penetration testing can become the cornerstone of your security strategy. Continuum GRC is proactive cyber security®. Call 1-888-896-6207 to discuss your organization's cybersecurity needs and find out how we can help your organization protect its systems and ensure compliance.
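As a concrete taste of the "Scanning" stage described above, here is a minimal TCP connect-scan sketch. It is illustrative only; real testers use tools like Nmap, and the host and port range below are placeholders you would replace with in-scope targets you are authorized to test.

```python
import socket

def tcp_connect_scan(host: str, ports) -> list:
    """Minimal TCP 'connect' scan: a port is reported open if the
    three-way handshake completes. Only run this against hosts you
    are authorized to test."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(0.5)
            # connect_ex returns 0 on success instead of raising
            if s.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports

# Placeholder target: scan a few ports on the local machine.
print(tcp_connect_scan("127.0.0.1", range(8000, 8005)))
```

Even this toy shows why scanning is noisy: each full handshake can be logged by the target, which is exactly the sort of signal a covert test checks whether your team notices.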
Every single day, it seems, we hear about another great innovation and another step in the right direction for our "data centers of the future." CyrusOne has standardized on indirect evaporative cooling (IEC) systems to take advantage of free air cooling without the risk of introducing contamination into the IT spaces. Apple and eBay have installed fuel cells, and NetApp, Qualcomm, and many others operate natural gas engines, all to reduce reliance on the grid and to take advantage of our abundant natural gas resources. Microsoft is developing microsites that will operate in remote locations, powered with bio-gas fuels derived from local wastewater treatment plants and solid waste management sites. And many of the data centers that rely on onsite generators as their primary and secondary sources of power require no backup power systems like UPS, batteries, and diesel generators.

At the same time, information technology continues to push the limits at an accelerating rate in order to improve both performance and efficiency (i.e., Moore's Law continues to drive us to new extremes). Multi-core central and graphics processors and high-performance computers are more energy efficient and more energy intensive at the same time. New standards like ASHRAE's TC 9.9 guidelines for cooling encourage us to take a closer look at real operating conditions, to remove the factors of safety that we have historically employed in our data center designs, and to consider water-cooled systems to provide for power densities that air cannot effectively cool. And, as commendable and workable as these improvements really are, they leave us with much less room for error as we face the inevitable failures of power and cooling in our much-needed data centers.
While developing the design concepts for TDC's self-powered data center facilities in Delaware ("Zinc Whiskers," Mission Critical, November/December 2012), I really struggled to find a controls system that would reliably integrate a critical power plant with a high-performance data center. Of course, we will need a system that is proven to be highly responsive, accurate, and absolutely reliable to perform the function of the "brain and nervous system" of an independently powered high-density data center like this one. Issues related to effective energy management and systems availability are also key to the success of the project.

AUTOMATION FOR EFFECTIVE AND RELIABLE PERFORMANCE

Automation is destined to play a significant role in the future of data center energy management and in the efficient operation of high-reliability facilities, as is already the case in power plant and industrial environments. Automation improves energy management and reliability at the same time, allowing systems to easily fail into their safest condition under emergency conditions (e.g., dampers open and fans on full for airflow), and ensuring instantaneous change and the most efficient operating conditions possible during normal operations. Programmable logic controllers (PLCs) and their counterpart human machine interfaces (HMIs) are often much better suited to large, critical, modern data center environments than are the traditional building management systems (BMS), originally developed for commercial high-rise buildings, that use direct digital controls (DDC) technologies. Traditional DDC systems may be a little lower in cost, but they are really best suited for "comfort cooling" in spaces intended for human occupancy, and they always operate with proprietary protocols and controls structures. PLCs are more robust and offer a higher level of "critical" redundancy.
They are well suited for the Tier II and III operations of chilled water plants, high-demand HVAC, and power generation. They are moderate in cost, easy to configure with open protocols, and flexible and highly adaptable to modular, scalable, and interoperable systems. Distributed control systems (DCS) may be best suited for Tier III+ and IV environments, especially where equipment is expected to respond to intermittent changes in demand and operating conditions. DCS systems are extremely fault-tolerant and scalable and are commonplace in highly mission critical applications like nuclear power plants, oil and gas refining, semiconductor fabrication, and federal government SCIF facilities. They are more expensive and require more highly skilled designers and operators. In order to achieve the kind of automated performance offered by PLC and DCS technologies, I recommend that you call upon a technology-agnostic services provider capable of developing sophisticated controls systems that go well beyond the data center infrastructure management (DCIM) asset and information management systems that we are developing today. You might discover, as have I, the kinds of solutions that have come from more HVAC controls-intensive industries such as large central plants and campus systems that operate multiple plants and facilities. Troy Miller, vice president of Energy Solutions at Dallas-based Glenmount Global Solutions (GGS), has been doing this for years in power and industrial plants and can effectively deliver automated controls systems with failsafe performance. According to Miller, GGS has provided electrical and control systems consulting and design services, followed by full turnkey implementation of BMS/BCS HVAC and utilities control and monitoring systems, for some of the largest data centers in the US, including the implementation of an industrial DCS providing over 4,000 "hard" I/O of fully redundant, automated operation for a Tier IV critical facility.
RELIABILITY IS STILL KING

A few years ago, Lawrence Berkeley National Labs identified building controls and control systems as the #1 cause of data center HVAC systems problems, and the #1 potential threat to data center availability, as cited by the National Building Controls Information Program study titled "Building Energy Use and Control Problems: Defining the Connection." I have personally witnessed countless data center problems and even critical facilities outages directly related to poorly conceived and maintained controls systems. Controls programming and inflexible protocols are too often found to be a "single point of failure" in our critical facilities, and even our best commissioning agents sometimes misjudge the precision needed for the programming and long-term operations of the brain and nervous system of our data centers. In this era of change, we cannot afford to have controls systems be the weak link in our critical facilities. A data center construction project executive recently commented, "I agree wholeheartedly that (controls) is one of the systems that is usually designed, coordinated, and then implemented inadequately. I lose more sleep over controls than anything else." Clearly there is a need for someone to take full ownership of an end-to-end solution for our data center controls systems, to ensure that operators and builders alike are confident in their operation. I believe that two issues lie at the root of these circumstances. First, many of the data center design (MEP/AE) firms have evolved out of the commercial world and are accustomed to designing data centers based upon capacity and reliability requirements and, until recently, to designing for a constant load. Only recently have we realized that our monitoring and controls need to offer the same redundancy and accuracy as our capital equipment does.
And, as we further develop an appetite for dynamic systems that respond to more sophisticated operating strategies in order to achieve better energy efficiencies and lower power usage effectiveness (PUE), we need to take much greater care with these issues. Secondly, today's data center operators are pushing the thermal limits of fragile electronics by increasing server supply air temperatures, allowing more variable humidity, and delivering more precise volumes of air and water, all in order to save energy. And that means that the allowable time for recovery of critical cooling systems after an equipment failure is reduced to a mere few seconds before temperatures rise to well above the 105°F failure temperatures of our servers. The only reasonable method of effectively managing these outages is an "automated" response that quickly puts the facility into a fail-safe condition. And operators tell me that automated controls are also the most reliable and safe approach for operating their critical facilities during normal modes of operation. All of these circumstances lead me to believe that the simplistic BMS that were developed to manage electrical and environmental controls for commercial office buildings have a limited future in the critical facilities space. As we continue to push the limits to become more efficient, more productive, and more independent, we will utilize these more robust, accurate, and reliable systems to keep us up and running. We will need to find someone to take more "ownership" for the correct delivery of our systems, and we will find them in services organizations that have served a broad spectrum of more controls-intensive facilities. Glenmount Global Solutions (www.glenmountglobal.com) is just such a systems integrator and full-services controls engineer.
According to Miller, “we consult, develop operating strategies, design, specify, fabricate, implement, test and commission, train operators and maintain our systems to assure that nothing is lost in the process of delivering 100% uptime along with effective energy management.” GGS seems to be a “go to” provider of Tier III/IV and other complex data center controls and support systems. For example, they recently implemented, on a turnkey basis, a comprehensive facilities monitoring and control system (FMCS), a 5-MW cogeneration turbine BOP, and an energy monitoring application for the world headquarters campus and new Tier IV corporate data center of a Fortune 50 company in Texas. With its fully redundant BMS, BCS, network, and supervisory control and monitoring applications, it is one of the first LEED® Platinum certified facilities of its kind. GGS also provides controls systems upgrades and retrofits, including integrated switchgear and power systems solutions that enable the client to avoid extensive downtime and capital project expenses due to power systems technology obsolescence. Projects are implemented as “live” upgrades of the power distribution, management, and monitoring SCADA systems while the data center is fully operational.

ENERGY OPTIMIZATION: THE LOWEST POSSIBLE MECHANICAL PUE

It has become evident to me how important it is to bring a qualified controls engineer into the picture early in the design process. Data center designers have historically selected systems and equipment based upon maximum capacity and reliability criteria that result in a gross over-estimation of the number and sizes of virtually all systems components. The ongoing “modular movement” is effectively the first step in right-sizing and improving project efficiencies.
Now, I think, the next inevitable step toward best practices in data center cooling design is to develop a thorough operating strategy first, and only then to design a real-time, dynamically controlled facility that will deliver power and cooling in the most effective and most reliable manner. Next-generation design will include the development of operating strategies that anticipate all possible modes of operation and select equipment with performance curves and efficiencies that best suit those strategies. The first generation of such a solution for airside cooling, known as dynamic smart cooling, was introduced by HP Labs around 2004. It was heralded to operate as a real-time computational fluid dynamics model, much more capable than any of today's DCIM systems. However, the concept was ahead of its time, and the data center world failed to embrace its value. Since then, we have learned a lot and now do a respectable job of managing our airside systems with contained aisles, free air cooling, and VFD controls on CRAC and fan motors. Solutions providers like SynapSense (www.synapsense.com) and Vigilent (www.vigilent.com) successfully provide DCIM and energy-savings controls systems that include wireless monitors and efficient variable-speed fans controlled by predetermined setpoints from pressure and temperature sensors. We sometimes add variable-frequency drives (VFDs) to our chillers and waterside components as well, to improve the efficiency of our chilled water plants. However, we don't yet do a good job of balancing the operations of the waterside and the airside of our HVAC systems. More holistic solutions have been developed in other industries, and we now have an opportunity to step up to a new generation of energy efficiency in our data center cooling.
I expect that they will be integrated into DCIM programs to provide an automated energy management system that is more reliable than the "play it safe" manual controls approach still so prevalent in our IT spaces. With the flexibility of PLC controls, it will be easy to incorporate high-value packages to improve our energy management. My favorite among those is Optimum Energy (OE), a package that optimizes HVAC performance through an innovative system design and a series of optimization algorithms and approaches. Optimum Energy (www.optimumenergyco.com) utilizes algorithms developed by Ph.D. engineers specializing in central plant operations with chillers, boilers, generators, and the like, and it owns exclusive rights to them. The algorithms consider and combine the performance curves of each piece of equipment in a system. So the "performance vs. efficiency" equations are defined for the chiller, the primary water pump, the cooling tower fans, and other waterside equipment, along with the same for each CRAC fan, air handler, damper, and similar components. The system communicates with all of the controllers in the data center and allows the facility to automatically settle into an optimized operating condition. It also determines the most efficient incremental change in equipment operations in response to a change in load or environmental conditions. This is theoretically the most energy-efficient operation possible and will really give you the best PUE possible in an air-and-water system. Glenmount Global and Optimum Energy are already working together to provide the systems and controls expertise required to deliver much-improved total cost of ownership solutions to world-class data centers across the United States. In the next issue of "Zinc Whiskers," I hope to describe the operations of this holistic solution in detail and to share some of the quantitative results that they have achieved.
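For reference, the PUE metric that comes up throughout this discussion is simple arithmetic: total facility power divided by the power delivered to the IT load. A minimal sketch, with made-up example loads:

```python
def pue(total_facility_kw: float, it_load_kw: float) -> float:
    """Power Usage Effectiveness: total facility power over IT load.
    1.0 is the theoretical floor, where every watt reaches the IT gear."""
    if it_load_kw <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kw / it_load_kw

# A 1 MW IT load drawing 1.5 MW at the utility meter:
print(pue(1500.0, 1000.0))  # 1.5
# Shave 200 kW of cooling/fan overhead and the PUE drops:
print(pue(1300.0, 1000.0))  # 1.3
```

The optimization packages described above attack exactly the numerator of this ratio: every kilowatt of chiller, pump, and fan overhead they eliminate moves PUE closer to 1.0.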
CRITICAL FACILITIES ROUNDTABLE CFRT will meet in May of 2013 in Silicon Valley to hear presentations by power generation equipment manufacturers, consultants, and operators to demonstrate how data centers are powered by on-site generators, and to consider the merits and challenges of alternative energies for the data center. CFRT is a non-profit organization based in the Silicon Valley that is dedicated to the open sharing of information and solutions amongst our members made up of critical facilities owners and operators. Please visit the website at www.cfroundtable.org or contact us at 415-748-0515 for more information.
The Meltdown and Spectre vulnerabilities that were revealed after the New Year have been the talk of the cyber community, and for good reason. The problem, if exploited, would be capable of breaking down basic security partitions by targeting a design feature in chips manufactured by Intel, AMD, and ARM. First, don't panic. The vulnerability may have existed for 20 years, and researchers have (so far) found no example of it being used. However, now that the information is "out in the wild," hackers have been alerted. Reactions to this news have ranged from ARM's and AMD's downplaying the danger to their chipsets to CERT's recommendation to "replace CPU hardware," so the story is very much unfolding.

Spectre and Speculative Execution

One of the many processes that happens deep within a personal computer is the management, at the machine's central processing unit (CPU), of highly secure, privileged memory alongside non-privileged, untrusted processes. This is the deepest level of a computer. An important thing that happens at this level is something akin to guesswork. It's called "speculative execution," a process that speeds things along by "guessing" what work will be needed before it is called into action. Speculative execution, which is the theoretical basis of the Spectre vulnerability, is the digital cousin of messenger RNA in a cell: it is supposed to happen in a protected place on a computer's hardware, but there are moments where information is exposed, including passwords and other credentials, encryption keys, and sensitive data on the machine.

How is it Different from Meltdown?

Meltdown, limited to Intel chips, operates in a similar manner via "out-of-order execution," which in plain English allows a hacker to access protected parts of a computer's memory.
While this has the potential to open up access to keystrokes, passwords, and other valuable personal information, perhaps the larger concern is on cloud-based platforms such as Amazon Web Services and other shared hosting services. Exploiting the Meltdown vulnerability would theoretically allow a hacker to access any of the information residing on the shared resources of a server. Patches are now available, but questions remain regarding their efficacy, and in many instances they have already been shown to slow down machines, by between 2 percent (more likely) and 30 percent, mainly because they were not designed to have a programmed workaround at such a deep level: the kernel level that is responsible for making a computer start up and initiate the rest of its operating system.
https://adamlevin.com/2018/01/05/meltdown-spectre-vulnerability-serious-sorta/
A Ruleset is a collection of rules that govern the way one business document is transformed into another. There are two types of Ruleset: Transformation and Data Analysis. Transformation Rulesets describe, in a hierarchical format, how source data is used to populate corresponding (but not necessarily analogous) target structures. The syntax of the sources and targets may be the same or different (for example, flat file to flat file, or EDI to XML). Transformation Rulesets are composed of a set of rules. A rule can express the relationship between larger structures (for example, EDI segments to database rows, or an XML element to a flat file record). A rule can also perform operations (move, add, substring) on lower-level data elements (attributes, fields, columns, or cells). Rules can be set up to execute (or not execute) based on conditions. Data-type checking takes place on the inputs and outputs of all rules to ensure the transformation produces a valid target document. For example, the transformation engine will make sure the numeric data in a source field can be moved to a numeric field in a target cell. Transformation Rulesets are compiled and executed by the Transformation Engine as tasks in a Business Process.

Data Analysis Ruleset

Data Analysis Rulesets analyze source data to determine the next step in the data integration process, or to perform a specific action. Typically, this type of Ruleset acts as a data router: as data is analyzed, individual rules look for pieces of data and make decisions about what work to perform (such as calling a Business Process or a Web Service). A Data Analysis Ruleset includes rules that group data based on user-specified criteria (trading partner, message ID, etc.), and activate the Business Processes that will handle the sorted data. The passing of Ruleset variable data is called "context point processing."
Data Analysis Rulesets require special types of rules so that the Integration Engine can distinguish what data it needs to group on versus what values it needs to match on to complete an outbound process flow. It is easy to confuse regular Ruleset rules with Data Analysis Data Group Rules because much more time is spent within the Studio developing Transformation Rulesets. One thing to remember is that every DARS needs the following:
- A Data Group Composite Rule
  - The Source parameter should be the record/field you want to group on and should have 'For Each' once the value is assigned
- A Data Group Start Rule
  - This rule will be the parent for an area where you move source data to a local variable for route matching
- A Data Group End Rule
  - This rule will be the parent for an area where you create a context point that the Integration Engine uses to tie the data grouping steps to the Route's Transformation Ruleset.
  - It will also have an Application Interface called so that the data can be routed against Outbound EDI Routes or Application Routes
You must use a Ruleset if you want to transform or analyze data. For a transformation example, suppose you receive 850 purchase orders from a trading partner and return 810 invoices, using spreadsheets internally:
- The incoming Ruleset would use the 850 EDI Schema as the source and the purchase order Spreadsheet Schema as the target. You create rules to move data from EDI elements into spreadsheet cells.
- The outgoing Ruleset would use the invoice Spreadsheet Schema as the source and the 810 EDI Schema as the target. You would create rules to move data from spreadsheet cells into EDI elements.
For an analysis example, suppose you need to send invoices to various trading partners. A Data Analysis Ruleset would include rules that group data into documents, sort which documents belong to the same trading partners, and activate the Business Processes that will transform the data and package the documents for each trading partner.

How the Object Works

The transformation engine relies on the rules in a Ruleset to transform or analyze documents.
It uses the format defined in a Schema to create nodes that can be the source(s) or target(s) of each rule. Each Rule is defined by an Action; the Actions can be one of the Cleo-supplied Actions, or Actions based on user-created objects. You must create a Rule for each piece of data you want to appear in the target document.
- Define source and target Schemas (Data Analysis Rulesets do not require a target Schema). It is best to understand how a Schema works before approaching the Ruleset, as you must have a source Schema and (if transforming data) a target Schema created and defined before creating a Ruleset.
- In the Ruleset editor, create and define individual rules to control the transformation or analysis.
- Set any necessary conditions to control whether rules execute.
- If using a Flat File Schema, specify its Connectors in the Ruleset Runtime tab.
- Call the Ruleset as a task in a Business Process.
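As a rough conceptual sketch only (this is not Cleo's actual API; every field name and helper here is invented for illustration), a transformation rule can be pictured as a small function that moves one source field into a target field, with a type conversion and an optional execution condition:

```python
# Hypothetical, simplified analogy of a Transformation Ruleset: each
# rule moves (and optionally converts) one source field into a target
# field, with an optional condition controlling whether it executes.

def make_rule(source, target, convert=str, condition=None):
    def rule(src_doc, tgt_doc):
        if condition is None or condition(src_doc):
            # the conversion step plays the role of data-type checking
            tgt_doc[target] = convert(src_doc[source])
    return rule

ruleset = [
    make_rule("PO1_qty", "quantity", convert=int),       # numeric -> numeric
    make_rule("N1_name", "buyer_name"),                  # text -> text
    make_rule("PO1_price", "unit_price", convert=float,
              condition=lambda d: "PO1_price" in d),     # conditional rule
]

source_doc = {"PO1_qty": "12", "N1_name": "Acme Corp", "PO1_price": "9.95"}
target_doc = {}
for rule in ruleset:
    rule(source_doc, target_doc)

print(target_doc)
# {'quantity': 12, 'buyer_name': 'Acme Corp', 'unit_price': 9.95}
```

A Ruleset, in this analogy, is simply the ordered list of such rules applied to each document.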
https://support.cleo.com/hc/en-us/articles/360044937334
The Internet Engineering Task Force (IETF) defined RFC 1918 to specify the private address spaces, sometimes also called "non-routable" IP addresses. These addresses are 10.0.0.0/8, 172.16.0.0/12, and 192.168.0.0/16. You are free to use any of these addresses in your network, but the devices using them do not have internetwork connectivity. This means those devices are not able to reach other networks, such as the Internet. By using NAT, you are able to reach other networks too. NAT is implemented by a router in your network and can be used for RFC 1918 addresses as well as for public IP addresses. When a router configured for Network Address Translation receives a packet, it rewrites the address field with one of its own configured IP addresses and forwards the packet to the next-hop device. The NAT router stores the local-to-global address mapping in its NAT table. After the packet leaves the NAT router, all the other devices along its path to the destination see the packet as having originated from the router. When talking about NAT, you may hear some key terms:
- Inside local address – most likely an RFC 1918 private address, but it can also be assigned by your service provider.
- Inside global address – the public IP address that represents a host inside the LAN after it exits the NAT router. For example, if the IP address of a host in your LAN is 192.168.0.1 and the public IP address assigned by your service provider for your NAT router is 203.0.113.5, then 203.0.113.5 is the inside global address.
- Outside global address – the public IP address of a host on the Internet.
- Outside local address – the local IP address assigned to a host outside the network. Usually it is identical to the outside global address.
Hosts inside networks using NAT are protected from the Internet.
Internet hosts are unable to reach the internal hosts of a NAT network because the IPs used are not routable on the Internet, unless you configure your router to forward the connections to the internal hosts. NAT is also used in small networks to share a single Internet connection with the help of a router. There are two main types of NAT: static and dynamic. Static NAT is used for one-to-one mapping: each inside local address is mapped to an inside global address, and all mappings remain constant. Static NAT is usually used for servers, network devices, or hosts that must have an address that is accessible from the Internet. Dynamic NAT uses a pool of IP addresses on a first-come, first-served basis. When a host with a private address tries to connect to the Internet, the router assigns an IP address from its pool that is not already in use by another host. NAT Overload, sometimes referred to as the third type of NAT and also called Port Address Translation (PAT), is used when the number of inside local addresses is greater than the number of available inside global addresses. A router configured for PAT maps multiple private IP addresses to fewer public IP addresses, or even a single one. Connections are tracked using the port numbers, which are assigned by the NAT router when the client initiates a TCP/IP session. Hosts inside a LAN using private IP addresses can be accessed from the Internet with a process called Port Forwarding. Port Forwarding takes connections arriving at certain destination ports and forwards them to a specified device on the network, either to the same port or to some other port. For example, a host on the Internet makes a request to 203.0.113.5 on port 80. When the packet arrives at the router, the router forwards it to the device with the IP address 192.168.0.1 on port 80. Of course, you can also forward the request to some other port, such as 8088.
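As a sketch of the port-forwarding example just described (the addresses here are illustrative documentation addresses, not a real deployment), the corresponding Cisco IOS static PAT entries might look like this:

```
! Forward TCP port 80 on the router's public address to the internal host
Router(config)#ip nat inside source static tcp 192.168.0.1 80 203.0.113.5 80
! Or forward public port 80 to a different internal port, such as 8088
Router(config)#ip nat inside source static tcp 192.168.0.1 8088 203.0.113.5 80
```

As with any NAT configuration, the inside and outside interfaces still need to be marked with ip nat inside and ip nat outside.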
We will discuss how to configure NAT on a Cisco router in our next article, but before we move on, let me give you an example NAT configuration. We need the host with the IP address 192.168.0.2 to access the Internet. For this, we need a static NAT entry mapping the private IP address 192.168.0.2 to the routable IP address 203.0.113.6. When you configure NAT, you must also define the inside interface (the interface connected to your internal network) and the outside interface (the interface connected to your service provider).

Router(config)#ip nat inside source static 192.168.0.2 203.0.113.6
Router(config)#interface FastEthernet 0/1
Router(config-if)#ip nat inside
Router(config-if)#interface Serial 0/0/0
Router(config-if)#ip nat outside

Understanding how NAT works and what the different types of NAT are is crucial for your CCNA exam as well as for real-life use. Many companies these days use this technique to conserve IPv4 address space or to isolate parts of the LAN from the Internet. We will discuss more about NAT in future lessons.
https://www.certificationkits.com/cisco-certification/ccna-articles/cisco-ccna-network-address-translation-nat/cisco-ccna-nat-concepts/
Numerical data is aligned to the right side of the column and is essentially used for statistical calculations. Text cannot be used for computations or calculations; if column C1 contains text, the label of the column will be changed to C1-T. Minitab recognizes 3/5/00 as a date and 5:30 as a time, but dates and times are internally stored as numbers. A date/time column is identified by a -D after the column name.

Minitab can change data types. Minitab can do the following transformations:
- Numeric to text
- Text to numeric
- Date/Time to text
- Date/Time to numeric
- Numeric to Date/Time
- Text to Date/Time
To make the change in Minitab, select MANIP > CHANGE DATA TYPE, then select the option that you want to change and fill in the dialog box; the changes can then be seen.

How to save data in Minitab: in Minitab, you can save data in two different formats. You can save the worksheet or you can save the whole project. Saving the worksheet individually is usually preferable to saving the whole project.

How to save the data in a worksheet:
- Select File at the top and select Save Current Worksheet As...
- Use the arrow beside the Save In field to choose a location
- In the File Name field, type the name of the worksheet
While saving, Minitab will automatically add the extension .MTW for the worksheet.
https://www.greycampus.com/opencampus/minitab/data-types-in-minitab
The robots are coming. It has become conventional wisdom that artificial intelligence (AI) and machine learning (ML) will increasingly determine our lives going into the future. By 2020, according to an estimate from Capterra, about 85% of customer-business interactions will take place with AI, without a human involved, and according to data from Adobe, 47% of organizations with advanced digital practices have a defined AI strategy. But for IT and security executives and professionals who must protect against cybercrime, AI poses both a promise and a threat. The industry is looking toward the promise of AI tools to stay a step ahead of the cybercriminals. Experian's 2018 annual Data Breach Preparedness Study found just 31% of respondents were confident in their organization's ability to recognize and minimize spear-phishing incidents, and just 21% were confident in their organization's ability to deal with ransomware. Malware and cyberattacks evolve over time. ML uses data from previous cyberattacks, leveraging what it knows and understands about past attacks and menaces to identify and respond to newer, similar risks. The thinking also goes that AI and ML will help save time for overburdened IT departments. The threat comes from the bad guys also employing AI to create more sophisticated attacks, enhancing traditional hacking techniques like phishing scams or malware attacks. For example, cybercriminals could use AI and ML to make fake e-mails look authentic and deploy them faster than ever before. Criminals could apply AI to develop mutating malware that changes its structure to avoid detection. AI could scrub social media for personal data to use in phishing cons. Data poisoning is another danger, in which attackers find out how an algorithm is set up, then introduce false data that misleads it about which content or traffic is legitimate and which is not. A lesser threat comes from within the industry, as companies rush to market with so-called AI cybersecurity tools.
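The data-poisoning risk just described can be illustrated with a toy, fully hypothetical sketch: flipping a few training labels drags a trivial classifier's decision boundary, letting a malicious sample slip through.

```python
# Toy illustration (all data hypothetical): poisoned training labels
# shift the decision boundary of a trivial threshold classifier that
# separates "clean" files from "malware" by a single suspicion score.

def centroid(values):
    return sum(values) / len(values)

def train(samples):
    """Return a decision threshold halfway between the class centroids."""
    clean = [x for x, label in samples if label == "clean"]
    malware = [x for x, label in samples if label == "malware"]
    return (centroid(clean) + centroid(malware)) / 2

def predict(threshold, x):
    return "malware" if x > threshold else "clean"

# Honest training set: low scores are clean, high scores are malware.
data = [(0.1, "clean"), (0.2, "clean"), (0.3, "clean"),
        (0.8, "malware"), (0.9, "malware"), (1.0, "malware")]
honest = train(data)  # threshold = (0.2 + 0.9) / 2 = 0.55

# Poisoned set: an attacker relabels the most obvious malware as clean,
# dragging the "clean" centroid upward and raising the threshold.
poisoned = train([(x, "clean" if x >= 0.9 else label) for x, label in data])
# threshold = (0.5 + 0.8) / 2 = 0.65

print(predict(honest, 0.6))    # malware
print(predict(poisoned, 0.6))  # clean -- the same sample now slips through
```

Real attacks target far more complex models, but the mechanism is the same: corrupt the training data and the learned boundary moves.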
There is a difference between AI and machine learning. ML algorithms train on large data sets to "learn" what to look for on networks and how to respond to various scenarios. Generally, ML needs new training data to calculate and reach new conclusions, while a true AI system does not. Some products are based on "supervised learning," which requires the data sets that algorithms are trained on to be chosen and labeled, by tagging malware code and clean code, for example. Some vendors are using training information that hasn't been thoroughly scrubbed of erroneous data points, which means the algorithm won't catch all attacks. Hackers could switch tags so that some malware is designated as clean code, or simply figure out the code the ML is using to flag malware and delete it from their own, so the algorithm doesn't detect it. Given the fast-changing landscape, here are some tips to realize the enormous potential of AI and ML and still protect your organization.

Resist the hype. AI and ML are the hot buzzwords and technologies of the moment, but there's also a great deal of confusion. According to ESG Research, just 30% of cybersecurity professionals feel they are very knowledgeable about AI and ML and their application to cybersecurity analytics. When purchasing an AI or ML tool, do your research and understand what you're buying, so that it's an effective and appropriate solution for your organization.

Keep a human involved in the process. There used to be an old IT truism: bad data in, bad data out. The "intelligence" in AI is based on data inferences and correlations, which need to be checked and monitored so the model is addressing risks appropriately and evolving as you need. ML systems shouldn't be totally autonomous. They should be set up with a human in the loop, and the ML should know to ask for help when presented with an unfamiliar situation.

Have a strong data breach plan.
According to Experian’s Data Breach Preparedness Study, 88% of organizations have a data breach response plan in place, but less than half (49%) think it is effective or highly effective. If you have a plan, it shouldn’t just sit on a shelf. Make sure that it is robust, with buy-in from all the key departments of your company, and drill on it early and often. If you need to get started on a plan or refine it, Experian’s updated Data Breach Response Guide can serve as a resource. AI and ML are the wave of the future. But the cyber threats are real now, and so are the limitations of the technology as a foolproof protection tool. Be aware, both of what’s ahead from the cybercriminals and how you’re applying AI solutions, so you’re not lulled into a false sense of security. About the Author: Michael Bruemmer is Vice President of Experian Data Breach Resolution, which helps businesses mitigate consumer risk following data breach incidents.
https://www.cisoforum.com/why-ai-raises-your-risk-of-cybercrime-and-what-to-do-about-it/
RDECOM Scientists Finding Ways to Safeguard Quantum Information (Phys.org)

RDECOM Research Laboratory scientists at the Army's corporate research laboratory (ARL) have found a novel way to safeguard quantum information during transmission. The project is led by Drs. Daniel Jones, Brian Kirby, and Michael Brodsky from the laboratory's Computational and Information Sciences Directorate. "We believe that this research has a potential to revolutionize cybersecurity and to enable secure secret sharing and authentication for the warfighter of the future," Brodsky said. "In addition, it will have an impact on developing better sensors for position navigation and timing as well as quantum computers that might result in the synthesis of novel special materials with on demand properties." "We started with developing an understanding of how physical properties of real telecom fibers, such as inherent residual birefringence and polarization dependent loss, or PDL, affect the quality of quantum communications," Jones said. "We exploited a novel mathematical approach, which has led to the development of a simple and elegant geometrical model of the PDL effects on polarization entanglement," Kirby added.
https://www.insidequantumtechnology.com/news-archive/rdecom-scientists-finding-ways-safeguard-quantum-information/
How you choose metrics determines how machine learning algorithms are measured and compared. It also brings to the fore the relative importance of different characteristics in the results and your ultimate choice of which algorithm to use. Evaluation is the first and most important step in choosing any machine learning algorithm. One model may pass muster using a particular metric, say accuracy, but fare poorly when evaluated against other metrics such as logarithmic loss. Though classification accuracy happens to be the most popular metric, it may not be enough to truly judge our model. In this post, we will cover the different types of evaluation metrics available.

Classification Accuracy

Since classification is the most common type of machine learning problem, classification accuracy is the most common evaluation metric for classification problems; it is also the most misused. Classification accuracy is the number of correct predictions made as a ratio of all predictions made. However, it is really only suitable when there is an equal number of observations in each class. For example, consider that there are 98% samples of class A and 2% samples of class B in our training set. Then our model can easily get 98% training accuracy by simply predicting that every training sample belongs to class A. When the same model is tested on a test set with 60% samples of class A and 40% samples of class B, the test accuracy drops to 60%. Classification accuracy is convenient but can give us a false sense of achieving high performance.

Area Under Curve

Area Under Curve (AUC) is one of the most widely used metrics for evaluation. It is used for binary classification problems. The AUC of a classifier is equal to the probability that the classifier will rank a randomly chosen positive example higher than a randomly chosen negative example. Before defining AUC, let us understand two basic terms:
- True Positive Rate (Sensitivity): True Positive Rate is defined as TP / (FN + TP).
True Positive Rate corresponds to the proportion of positive data points that are correctly classified as positive, with respect to all positive data points.
- False Positive Rate (1 - Specificity): False Positive Rate is defined as FP / (FP + TN). False Positive Rate corresponds to the proportion of negative data points that are mistakenly classified as positive, with respect to all negative data points.
False Positive Rate and True Positive Rate both have values in the range [0, 1]. FPR and TPR are both computed at a series of threshold values such as (0.00, 0.02, 0.04, ..., 1.00), and a graph is drawn. AUC is the area under the curve of False Positive Rate vs. True Positive Rate at these different points in [0, 1].

Logarithmic Loss

Logarithmic Loss, or Log Loss, works by penalizing false classifications. It is a performance metric where the classifier must assign a probability to each class for all the samples, and it works well for multi-class classification. The scalar probability between 0 and 1 can be seen as a measure of confidence for a prediction by an algorithm. It can be written as:

Log Loss = -(1/N) * Σ_i Σ_j [ y_ij * log(p_ij) ]

where y_ij indicates whether sample i belongs to class j or not, and p_ij indicates the probability of sample i belonging to class j. Log Loss has no upper bound and exists on the range [0, ∞). A Log Loss nearer to 0 indicates higher accuracy, whereas a Log Loss far from 0 indicates lower accuracy.

Confusion Matrix

The confusion matrix, as the name suggests, gives us a matrix as output and describes the complete performance of the model. Let's assume we have a binary classification problem, with samples belonging to two classes: YES or NO. We also have our own classifier which predicts a class for a given input sample. Testing our model on 165 samples, the predictions fall into four important categories:
True Positives: the cases in which we predicted YES and the actual output was also YES.
True Negatives: the cases in which we predicted NO and the actual output was NO.
False Positives: the cases in which we predicted YES and the actual output was NO.
False Negatives: the cases in which we predicted NO and the actual output was YES.
Accuracy for the matrix can be calculated by summing the values on the main diagonal (true positives and true negatives) and dividing by the total number of samples, i.e. Accuracy = (TP + TN) / (TP + TN + FP + FN). The confusion matrix forms the basis for the other types of metrics.

F1 Score

F1 Score is the harmonic mean of precision and recall. The range for F1 Score is [0, 1]. It tells you how precise your classifier is (how many instances it classifies correctly), as well as how robust it is (whether it misses a significant number of instances). High precision with lower recall gives you extremely accurate positive predictions, but the classifier then misses a large number of instances that are difficult to classify. The greater the F1 Score, the better the performance of our model. Mathematically, it can be expressed as:

F1 = 2 * (Precision * Recall) / (Precision + Recall)

F1 Score tries to find the balance between precision and recall, where:
- Precision: the number of correct positive results divided by the number of positive results predicted by the classifier.
- Recall: the number of correct positive results divided by the number of all relevant samples (all samples that should have been identified as positive).

Cognitive View uses machine learning models to analyze unstructured customer communications data to gain comprehensive insights into your firm's obligations and a real-time view of compliance adherence. Based on ML model performance evaluation best practices, mechanisms have been built into the platform to measure the performance and transparency of all the AI models. You can see a case study on choosing a medicine at a low cost, which applies all the metrics discussed in this article.
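As a quick sanity check on the definitions above, the metrics can be computed directly from confusion-matrix counts; the counts below are hypothetical, chosen to total the 165-sample example:

```python
# Hypothetical confusion-matrix counts (totaling 165 samples).
TP, TN, FP, FN = 50, 100, 10, 5

accuracy  = (TP + TN) / (TP + TN + FP + FN)   # fraction of correct predictions
precision = TP / (TP + FP)                    # correctness of positive calls
recall    = TP / (TP + FN)                    # true positive rate (sensitivity)
fpr       = FP / (FP + TN)                    # false positive rate
f1        = 2 * precision * recall / (precision + recall)  # harmonic mean

print(round(accuracy, 3), round(precision, 3), round(recall, 3),
      round(fpr, 3), round(f1, 3))
# 0.909 0.833 0.909 0.091 0.87
```

Note how accuracy looks strong even though one in six positive calls (FP = 10 of 60) is wrong, which is exactly why precision, recall, and F1 are reported alongside it.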
https://blog.cognitiveview.com/metrics-for-machine-learning-model-evaluation/
Images generated by algorithms are reportedly used for malicious purposes, such as misinformation or harassment. The truth is, this technology can also be used to hide your identity when signing up for some iffy websites. Data engineer George Paw created a fake person generator "out of boredom and curiosity" and explained to CyberNews how his web app is different from others. The graphics card maker Nvidia developed a technology known as GAN (generative adversarial networks). Among other things, it lets you create fake human profiles. As with any other innovation, it was quickly adopted by bad actors as well. The Financial Times reported that GAN-generated faces were used in campaigns linked to China pushing pro-Beijing points, and to Russia (which used the technology to create fictional editors). In February 2019, The Verge reported on the website ThisPersonDoesNotExist.com. Philip Wang, a software engineer at Uber, used Nvidia's technology to create an endless stream of fake portraits. Just hit the refresh button, and each time you'll see another fake stranger staring at you from your desktop. It seems that it really was, as The Verge put it, just a polite introduction to the technology. CyberNews came across yet another fakes generator. During the COVID-19 pandemic, data engineer George Paw, currently working in Australia, created the fake person generator Fakes.io. It's different from similar projects, such as ThisPersonDoesNotExist.com, mainly in that it also generates fake person profiles for the user, which include the fake's full name, gender, date of birth, height, weight, and favorite color. "Fakes.io generates brand new, never seen photos on demand. There are currently millions of image combinations that can be generated, and no two photos will look the same," George Paw said. CyberNews sat down with him to discuss his product and how it came to be.
A child of "boredom and curiosity"

Like many others, locked at home by the pandemic, George Paw had nothing much to do. "I created Fakes.io mainly out of boredom and curiosity," he said. During the COVID-19 pandemic, he was asked to go on annual leave for three weeks. Yet, because of all the restrictions, he couldn't travel or even leave his house. "So I came across a GitHub project called StyleGAN, created by the graphics card maker Nvidia. StyleGAN was able to generate extremely human-like profile pictures, but these humans have never existed. It was completely generated by a type of Artificial Intelligence framework called Generative Adversarial Network," he explained. Pictures, he added, looked great but lacked flair. "I decided to attach names to these faces. Then birthdays, heights, weights, even their favourite colours. Suddenly these nameless faces have some semblance of personalities," Paw said. That's how he decided to build a web app that users could use to generate new profile pictures on demand. "I had in mind that this could be used for software testing purposes (for example, to test signup frameworks), or for users who want to maintain anonymity when signing up for iffy websites where the signup users do not trust the website's privacy policies," he said.

How is his fake person generator different from other similar projects?

Paw admitted that ThisPersonDoesNotExist.com partially inspired the creation of Fakes.io. "It shared some common underlying AI framework & libraries to generate the hyper-realistic images. The main difference is that thispersondoesnotexist.com displays a certain number of images from a cache (approximately 100,000 images). Fakes.io generates brand new, never seen photos on demand. There are currently millions of image combinations that can be generated, and no two photos will look the same," he explained. Most machine learning models require an existing dataset to work on.
The Fakes.io model was trained on over 200,000 pre-existing images to generate the best results. "Once the model has been created, it never uses the pre-existing images again, and the machine learning model will generate a brand new image every time," Paw said.

Can you tell apart the fakes?

George Paw primarily created Fakes.io for software development and machine learning showcase purposes. But it's accessible to anyone: you can start hitting the refresh button to generate brand new fake persons and use them, for example, to sign up to some websites where you want to remain anonymous. "We do not condone, support, or encourage illegal activity of any kind and will fully cooperate with law enforcement organizations if required," emphasized Paw. If you have a closer look, you might be able to spot a fake. Recently, The New York Times published a detailed analysis of how you can do just that, in case you want to dive deeper into the topic. "While the generated pictures are hyper-realistic, the AI is far from perfect. Sometimes the AI will generate an image that has a distorted background which looks very strange. In some photos, you can see artifacts (things that do not look natural); for example, the ear is slightly warped, or the teeth have been misaligned," Paw said.
https://cybernews.com/editorial/dont-trust-this-stranger-theyre-a-fake/
Security researchers recently discovered a website that controls a very large botnet used to infect vulnerable systems. The website, loads.cc, is based in Eastern Europe, most probably Russia, and operates in a quite interesting manner: the operators charge clients for infected PCs. In other words, anyone can use the botnet, the size of which is estimated at a few million machines, and infect PCs with whatever malware they choose for a small fee. The website itself does not appear to host or distribute malware, but researchers recommend not surfing to it, because it likely logs the IP addresses of visitors. Loads.cc allows less technically proficient cyber-criminals to "cash in." At the time the website was discovered, the price of one infection was 20 cents. The operators of the site provide information on the availability and size of the botnet in real time. A client can arrange beforehand how many PCs he wants infected, let's say 1,000 for $200. The payment can also be based on other attributes, such as country or IP address. Upon completion of the task, the client is given a report saying which IPs the loads were successfully delivered to. Then he can do whatever he pleases: distribute spam, steal information, etc. This method is different from that of other similar schemes, such as those by the creators of the Gozi trojan and 76service; the latter allows you to use an already infected PC, making the whole process more expensive, whereas Loads.cc lets you pay to infect computers. This could possibly lead to some PCs becoming "superinfected," a term used to describe the state of being infected with several bots at the same time. "Superinfected" systems would make for a battleground to determine which bot has control over the PC.
Source: https://www.2-viruses.com/article-loadscc-by-hackers-for-hackers
In 2021, the rift between "online" and "real life" hardly exists. This article will cover Cybersecurity 101 and walk you through some of the most important cybersecurity tips for businesses and individuals. With over half of the world's population on the internet, internet safety should be a top priority. Problems that one faces on the internet can very easily become problems in real life. Following this rule, you should care about cybersecurity just as much as every other type of security. When you understand the importance of cybersecurity, you can protect yourself and your company from attacks.

Most of the problems you'll deal with in the world of cybersecurity relate back to malware. Malware refers to software that's designed to damage your computer or steal your information (malicious software). The key thing to take away here is that you're not always being infiltrated by a personal hacker; most of the time, it's software designed by hackers to do the work for them. Malware comes in several different types. It almost always disguises itself as something else, however. Remember that malware cannot get onto your computer unless you let it on.

Do not click on any links that you find suspicious. Cybercriminals will often send out emails claiming to be from legitimate businesses and include malware in links inside the emails. When you click on these links, you invite the malware into your computer. This process is known as phishing, and while it might seem simple, it is surprisingly effective. Let's take a look at some of the most important types of malware.

Ransomware is a type of malware that encrypts your data and won't let you access it unless you pay the cybercriminals a ransom. There's very little you can do to get around this, since once files are encrypted, it's very difficult to get them back on your own. A particularly malicious form of ransomware is one called "Jigsaw", which deletes your files one by one, hour by hour.
If you don't pay by the twenty-four-hour mark, it will delete all of your data. Another form of ransomware is a locker. Lockers, rather than encrypting your files, simply lock you out of your own computer. If you want to get your access back, you have to pay. Make sure that you take preventative measures so that you don't wind up the victim of a ransomware attack.

One of the most disturbing types of cybersecurity attacks you can suffer is a scareware attack. Scareware isn't quite as technologically advanced; instead, it plays on the human emotions of anxiety and fear. Scareware schemes, ironically enough, usually disguise themselves as the police or even cybersecurity programs. They use their fake authority to claim that you have a problem you need to fix, using language that will shock and scare you into paying the cybercriminals. One that poses as a cybersecurity program will claim that your device is littered with viruses when it really isn't; it will offer to fix those viruses for a price and then make off with your cash. One that poses as the police might claim they've found evidence of suspicious activity on your profile and need you to pay some sort of fine.

Good old-fashioned hacking usually comes down to stealing your password. There are several ways that hackers can do this, and several ways in which hacking has become easier for cybercriminals. Hackers aren't strangers to getting their hands dirty. Cybercriminals have been known to dig through the trash to find valuable pieces of information, such as passwords written down on pieces of paper, that they can use. It might seem unlikely, but just recently a cybersecurity lawsuit blamed a trash company. To prevent this type of attack, never write down any of your passwords. Hackers might also make use of brute-force attacks. These attacks are old-school, but they work wonders.
They use software that runs through all of the most common passwords to try to crack a given account. This might seem ineffective, but you can easily find lists of the most popular passwords online; if you use any of these passwords, it's best to change them. All a criminal needs to do is guess correctly to break in.

Advancements in technology over the past 10 years have made it easier than ever for hackers to steal data. While the cloud might make conducting business easy by linking up every member of your company, it also sets you all up to be knocked down like a row of dominoes: if one falls, the rest will fall as well. AI has also provided a playground for hackers; who knows what tools they will develop to discover people's passwords?

Your best option is to prevent cybersecurity attacks as much as you can. The key way to do this is to care about how your data and information are distributed. Keep secure information on certain secure accounts that very few people have the password to, and do not link these accounts up with the cloud. Everyone is linked these days, but for the most part, criminals still need a way in. Teach your company about malware and why they should never click on suspicious links, and teach every member good password protection practices as well. Hire the services of a cybersecurity company to make sure you get all of the support you need and stay updated on the most recent threats.

Now that you have been taken through this Cybersecurity 101, you have a basic understanding of common cybersecurity risks. Prevention is the best (and in some cases, only) cure. For more information, contact us today about a security risk assessment for your business.
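The dictionary attacks described above have a defensive flip side: rejecting passwords that appear on lists of the most common choices. A minimal sketch, assuming a tiny illustrative word list (not a real leaked set) and an arbitrary 8-character minimum:

```python
# Reject passwords that a dictionary attack would crack instantly.
# COMMON_PASSWORDS is a tiny illustrative sample, not a real leaked list.
COMMON_PASSWORDS = {"123456", "password", "qwerty", "letmein", "111111"}

def is_weak(password: str) -> bool:
    """True if the password is on the common list or too short."""
    return password.lower() in COMMON_PASSWORDS or len(password) < 8

print(is_weak("letmein"))                       # found on the common list
print(is_weak("correct horse battery staple"))  # long and not on the list
```

Real checkers compare against millions of leaked entries rather than five, but the logic is the same membership test.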
Source: https://www.bridgeheadit.com/cybersecurity-101-the-key-things-to-understand/
What Synthetic Data Means to Information Privacy

Improving the capacity to share data without impacting personal privacy has become an expanding trend in data analytics. Synthetic data is an emerging tool being considered as an option for privacy protection in data science.

What is synthetic data? According to the McGraw-Hill Dictionary of Scientific and Technical Terms, synthetic data is "any production data applicable to a given situation that is not obtained by direct measurement." For privacy purposes, synthetic data is not based on any real-world individuals or events; rather, it is generated by a computer program to simulate the real information. In the field of data management, synthetic data and production data are terms used interchangeably. Production data is defined as "information that is persistently stored and used by professionals to conduct business processes." It is realistic information, generated by AI to be statistically equivalent to the actual data, so that businesses can use it for research or other studies. Since this type of data does not include the actual data, it protects the personal data of those in the data set.

Imagine a data set in which each actual value is changed to a synthetic one:

- Actual 55.7 – Synthetic 54.6
- Actual 58.4 – Synthetic 59.5
- Actual 60.1 – Synthetic 59.9
- Actual 53.7 – Synthetic 53.9

Totals: Actual 227.9 – Synthetic 227.9
Average: Actual 56.975 – Synthetic 56.975

With this simplified example, you can see how data over the entire set can be simulated to give the same aggregate results while hiding the original values. Generally speaking, any data generated by a computer simulation is considered synthetic data. The generated data can be used in physical modeling, medical research, or even community health studies. It allows model analysis with accurate data sets that do not necessarily point to any individual's data.
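The toy numbers above can be reproduced with a short sketch. The offsets below are invented purely for illustration (real synthesis uses statistical models, not hand-picked perturbations); the trick is that they cancel out, so the total and mean survive while every individual value changes:

```python
# Perturb each actual value with offsets that sum to zero, so the
# totals and the average of the synthetic set match the original.
actual = [55.7, 58.4, 60.1, 53.7]
offsets = [-1.1, 1.1, -0.2, 0.2]   # hand-picked for illustration; they sum to 0

synthetic = [round(a + o, 1) for a, o in zip(actual, offsets)]

print(synthetic)                                         # [54.6, 59.5, 59.9, 53.9]
print(round(sum(actual), 1), round(sum(synthetic), 1))   # 227.9 227.9
```

Each synthetic value hides its original, yet any analysis that depends only on the aggregate statistics gives identical answers.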
To provide privacy protection, synthetic data is created through a complex process of data anonymization: you start with a data set, it is anonymized, and then the anonymized data is converted to synthetic data. In this breakdown, synthetic data is a subset of the anonymized data set. Various fields and business types use synthetic data as a filter: it acts as a filter layer that helps protect the privacy and confidentiality of the data subjects, who might otherwise be compromised. Many data sets used in research include synthesized data that protects fields revealing personal identity, including name, home address, IP address, credit data, and social security number; in other words, the data that points to a particular individual.

Today, the data collection that surrounds individuals' daily lives allows for a myriad of ways to match data sets and pinpoint a specific subject. A 2016 study showed that artificial intelligence can monitor driver braking patterns and, within 15 minutes, identify the driver with 87% accuracy. So much of our daily data, even an insignificant action like the way you brake while driving, is unique to the individual. This is why there is such a need for synthetic data.

Privacy Protection with Synthetic Data

How does synthetic data impact privacy protection? Synthetic data improves privacy protection because it is artificially generated, not real-world information. Whereas data that is merely anonymized can be re-identified when compared with other similar data sets, synthetic data points to no one specifically. In this sense, synthetic data offers superior privacy protection. Companies often use synthetic data for processing when there are concerns that releasing the original data may violate privacy regulations. Processing consumer data requires strict compliance.
Regulations like the General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA) can levy huge fines and penalties for releasing private consumer data. In instances where consumer privacy is an issue, synthetic data is used. It is a form of anonymization that gives companies more agility to use, process, analyze, or share the data in a safe and compliant manner. Synthetic data is used explicitly for the preservation of privacy: "Synthetic data is described as artificially generated data that contains properties of the original data without disclosing the actual original data."

Everything Has Limitations

Companies have been turning to synthetic data as a viable option to balance data privacy with the need for quality data. Synthetic data has often been described as the answer for providing complete data values while ensuring privacy protection. It is a bit more complex, though, than calling it perfect. It may seem that synthetic data is the 'answer' to the need for quality research data without compromising privacy, but nothing is that easy. Synthetic data has its limits due to fundamental mathematical constraints: producing a single dataset that both perfectly preserves privacy and perfectly preserves all the statistical properties of the original is impossible. Calling it a perfect solution is like calling perpetual motion a solution to energy supply, something already proven scientifically false; giving it the status of a Star Trek replicator is deceptive. Without further study, to say that it is a perfect solution is a misrepresentation. Companies should be aware that there are consequences for adopting newer solutions that still require study. Those with extensive knowledge in the field are noting some shortcomings, and these limitations could lead to breaches of customer privacy and penalties for violations of current privacy laws.
To be sure, the claim that synthetic datasets can be statistically identical to the original data and at the same time perfectly preserve privacy is impossible to satisfy. Synthetic data does have its benefits in providing highly accurate statistics for study and research, and it can be made differentially private; for the average person, it comes close. But it isn't perfect: just as AI and ML can use smart algorithms to simulate data, they can also be used with other data sets to unravel and match specific records. The truth is that it provides a much higher level of data privacy than other means currently available. Any application that promises 100% protection will always be proven false; that level of security and accuracy, from a scientific perspective, will not be achieved by any technology, present or future, whether or not it lies beyond Star Trek replicators and teleportation devices. To claim synthetic data is a perfect solution is not credible, but understanding its shortcomings and using such data sets with care can give much greater protection than any other anonymization solution currently available.

Privacy and AI – The Future

Many companies use synthetic data to train artificial intelligence (AI) and machine learning (ML) applications. Real-world data can be expensive to collect, but an equivalent amount of synthetic data is more easily acquired. One central area in which privacy is protected while AI is developed for a specific purpose is autonomous driving. The software that enables safe autonomous driving uses volumes of data to learn and react to driving conditions. For these applications, synthetic data allows AI and ML models to react to a wide variety of situations that even real-world data may not cover. Synthetic data is a viable option. Corporations also use it to evaluate vendors.
When choosing a vendor that may need to handle consumer or private data, the risks can be assessed without releasing the actual data. Any situation in which data is exchanged increases the chance of a data breach, which could cause significant damage to the reputation of a business as well as fines, legal costs, and loss of revenue.
Source: https://caseguard.com/articles/what-synthetic-data-means-to-privacy/
Passwords are ubiquitous in the information age. We use them every day to sign into our web accounts and devices. As such, passwords help secure our digital lives. Well… sort of. Notwithstanding their widespread use, passwords are inherently insecure because they are only information: they are not tied to any physical object. Attackers can therefore steal a password from a vulnerable database (or from the Post-It note on your desktop), or they can purchase a tool that allows them to brute-force their way into your account.

It is in response to the shortcomings of password security that we have discussed in recent articles how to add an extra layer of protection to your web accounts. After covering the difference between two-factor authentication (2FA) and two-step verification (2SV), we explored how to protect a Google account with 2SV, including via the use of the Google Authenticator app. I think, therefore, that your Apple ID deserves an added layer of protection. Don't you? In this guide, I will show you how you can protect your Apple ID against brute-force attacks.

1. Use a web browser to sign into your Apple ID. On your Apple ID homepage, you will see some basic information about your account, including your email and birthday. You will also see a "Security" section that allows you to change your password, change your security questions, add a rescue email, and activate "Two-Step Verification." Click the hyperlinked text "Get Started" beneath that last feature.

2. Apple will prompt you to answer two of your three security questions. Submit the proper answers and click "Continue."

3. A dialog box explaining how two-step verification works and how it changes your Apple experience will pop up. Click the "Continue" button.

4. Another dialog box will appear and prompt you to enter your mobile number. Do so and press "Continue."

5. Apple will send a verification code via SMS to your mobile device.
Enter the code into the web browser and click "Verify."

6. Once you have verified your mobile number, Apple will ask whether you would like to enable verification codes on any device on which you have enabled Find My iPhone, iPod, or iPad. If you would like to activate two-step verification on any of those devices, select the desired devices and go through the setup process. Otherwise click "Continue."

7. Apple will present you with a recovery key that you can use to access your account in the event you lose your device or temporarily cannot receive codes on it. Make sure you write down or print that key and store it somewhere safe before clicking "Continue."

8. Enter your recovery key and click "Confirm."

9. To complete the process, Apple will alert you to several conditions of enabling two-step verification on your account, including what you will need in the future to unlock your account. When you have finished reading the conditions, check the box next to the text that reads, "I understand the conditions" and click "Enable Two-Step Verification."

10. And just like that, you're done! On your Apple ID homepage, you will now see under the "Security" section that Two-Step Verification is labeled "On." You will also see your trusted numbers/devices clearly displayed.

Whenever you attempt to sign into your account from now on, you'll come across a screen similar to this one. Click on a trusted device on which you would like to receive a verification code. This will lead you to a new screen. In the meantime, you should have received a verification code on the device you selected. Enter that code into your web browser. Apple will automatically verify the code (if correct) and direct you to your Apple ID account home page. From there, you can enjoy all the benefits your Apple ID affords.
Source: https://grahamcluley.com/protect-apple-account/
Know about Linux system security

Passwords are one of the most important Linux system security features today. Most server administrators and users rely on passwords to keep others from accessing their systems. On Linux (RHEL/Debian), these passwords are saved in the passwd and shadow files in the /etc directory, and in both files the password data is stored in encrypted (hashed) form. Many distributions use a one-way function, historically DES (the Data Encryption Standard), to hash the passwords saved in /etc/passwd and /etc/shadow. When you log in with a username and password, the password you enter is hashed again and compared with the saved hash; if they match, you are allowed access, otherwise the system declines the login.

Understanding the /etc/passwd file: This file contains the information required at login time. It is a text file containing a list of the system's user accounts. Each line holds one entry, with fields separated by a colon (:) so you can read it easily:

- Username: the name used when the user logs in.
- Password: an x character indicates that the password hash is stored in the /etc/shadow file.
- User ID (UID): each user must be assigned a unique user ID (UID). UID 0 (zero) is reserved for root.
- Group ID (GID): the primary group ID (defined in the /etc/group file).
- User ID info: extra information about the user, such as the user's full name, phone number, etc.
- Home directory: the path of the user's home directory.
- Command/shell: the path of a command or shell (e.g. /bin/bash).

Understanding the /etc/shadow file: This file stores each account's password in hashed form, along with additional password-aging properties. It contains the following fields, each separated by a colon (:):

- Username: the user's login name.
- Password: the user's hashed password.
- Last password change: when the password was last changed.
- Minimum: the minimum number of days required between password changes.
- Maximum: the maximum number of days the password remains valid.
- Warn: the number of days before expiry that the user is warned the password must be changed.
- Inactive: the number of days after the password expires before the account is disabled.
- Expire: the date, expressed in days since January 1, 1970, on which the account is disabled.
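The seven-field layout of /etc/passwd described above can be explored with a short script. A minimal sketch: it parses a passwd-style line into named fields; the sample entry is hypothetical, not taken from a real system.

```python
# Parse one colon-separated /etc/passwd entry into its seven fields.
from typing import NamedTuple

class PasswdEntry(NamedTuple):
    username: str
    password: str   # "x" means the hash actually lives in /etc/shadow
    uid: int
    gid: int        # primary group, defined in /etc/group
    gecos: str      # extra user info: full name, phone number, ...
    home: str       # home directory path
    shell: str      # login command/shell, e.g. /bin/bash

def parse_passwd_line(line: str) -> PasswdEntry:
    username, password, uid, gid, gecos, home, shell = line.strip().split(":")
    return PasswdEntry(username, password, int(uid), int(gid), gecos, home, shell)

# Hypothetical sample entry for illustration.
entry = parse_passwd_line("alice:x:1000:1000:Alice Example:/home/alice:/bin/bash")
print(entry.uid, entry.shell)   # 1000 /bin/bash
```

Running the same loop over each line of a real /etc/passwd (readable by all users, unlike /etc/shadow) lists every account on the system.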
Source: https://www.cyberpratibha.com/linux-system-security/
The new IBM quantum computing-safe tape drive prototype is based on a state-of-the-art IBM TS1160 tape drive. Ten months ago we assembled a team from IBM Research in Switzerland and IBM tape developers based in Tucson, Arizona, to try to build something that has never been built before, to address a risk that may not materialize for another decade or more. As you can tell, we love a good challenge.

The risk comes from quantum advantage, the point when a quantum computer can perform some particular computation significantly faster than a classical computer. The challenge we faced: develop a quantum computing-safe tape drive, because at the current rate of progress in quantum computing, data protected by the asymmetric encryption methods used today is expected to become insecure.

Preparing Cybersecurity for a Quantum World

Quantum computing is an emerging form of computing that takes advantage of quantum mechanical phenomena to solve certain types of problems that are effectively impossible to solve on classical computers. Quantum advantage will occur when quantum computers surpass today's classical computers, at which point they are expected to enable dramatic advances in areas such as chemistry, bioinformatics, and artificial intelligence; at the same time, they will impact information security. State-of-the-art storage technologies, such as magnetic tape drives, use a combination of symmetric and asymmetric encryption to ensure that the data they store remains secure. However, the security of today's asymmetric encryption techniques will very likely be broken by advances in quantum computing. At the current rate of progress, it is expected that asymmetric encryption may become insecure within the next 10-30 years.
While this seems rather far in the future, tape systems are often used to archive data for many years, which is why it is important to begin implementing quantum computing-safe solutions now, giving clients sufficient time to migrate to the new technology before their data becomes vulnerable.

Making Tape Quantum Computing Safe

In order to prepare for the impact that quantum computers are expected to have on data security, IBM Research has been developing cryptographic algorithms that are resistant to the potential security concerns posed by quantum computers. These algorithms are based on lattice cryptography, which is in turn related to a set of mathematical problems that have been studied since the 1980s and have not succumbed to any algorithmic attacks, either classical or quantum. In collaboration with several academic and commercial partners, including ENS Lyon, Ruhr-Universität Bochum, Centrum Wiskunde & Informatica, and Radboud University, IBM researchers have developed two quantum-resistant cryptographic primitives based on this work: Kyber, a secure key encapsulation mechanism, and Dilithium, a secure digital signature algorithm. Together these algorithms make up the "Cryptographic Suite for Algebraic Lattices", or "CRYSTALS". Both algorithms are candidates in the second round of the National Institute of Standards and Technology (NIST) Post-Quantum Cryptography standardization process and will be presented today at the Second PQC Standardization Conference at the University of California, Santa Barbara, Aug 22-24, 2019.

The new IBM quantum computing-safe tape drive prototype is based on a state-of-the-art IBM TS1160 tape drive and uses both Kyber and Dilithium in combination with symmetric AES-256 encryption to enable the world's first quantum computing-safe tape drive.
The new algorithms are implemented as part of the tape drive's firmware and could be provided to customers as a firmware upgrade for existing tape drives and/or included in the firmware of future generations of tape drives.

Magnetic tape has a long history of leadership in storage security and is an essential technology for protecting and preserving data. For example, IBM tape drives were the first storage technology to provide built-in encryption, starting with the TS1120 Enterprise Tape Drive. In addition, tape provides an extra layer of security via an air gap between the data stored on a cartridge and the outside world: data stored on a cartridge cannot be read or modified unless the cartridge is mounted in a tape drive. The security and reliability provided by tape systems, combined with their low total cost of ownership, have made tape the technology of choice for archiving data in the cloud as well as in commercial and scientific data centers.

With the development of quantum computing-safe tape encryption technology, IBM Tape continues its legacy of leadership in security and encryption and reaffirms its long-term commitment to this critical part of modern storage infrastructure.

The authors also wish to acknowledge the expertise and support of Paul Greco and Glen Jaquette from IBM Systems and Tamas Visegrady and Silvio Dragone, IBM Research.
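The combination the article describes, a key encapsulation mechanism wrapping a symmetric data key, can be illustrated schematically. The sketch below is NOT real cryptography: the hash-based toy KEM and XOR stream cipher are insecure stand-ins for Kyber and AES-256, invented here purely to show the shape of the envelope pattern (encapsulate a shared secret, encrypt the bulk data with it, decapsulate on read-back).

```python
# Toy KEM + symmetric envelope. Insecure stand-ins for Kyber / AES-256:
# the pattern is real, the primitives are not.
import hashlib
import secrets

def toy_kem_keygen() -> tuple[bytes, bytes]:
    sk = secrets.token_bytes(32)
    pk = hashlib.sha256(sk).digest()        # stand-in "public key"
    return pk, sk

def toy_encaps(pk: bytes) -> tuple[bytes, bytes]:
    r = secrets.token_bytes(32)             # ciphertext stand-in
    shared = hashlib.sha256(pk + r).digest()
    return r, shared

def toy_decaps(sk: bytes, ct: bytes) -> bytes:
    pk = hashlib.sha256(sk).digest()        # re-derive the public key
    return hashlib.sha256(pk + ct).digest()

def xor_cipher(key: bytes, data: bytes) -> bytes:
    """Stand-in for AES-256: XOR with a hash-derived keystream."""
    stream = hashlib.sha256(key).digest()
    return bytes(b ^ stream[i % 32] for i, b in enumerate(data))

pk, sk = toy_kem_keygen()
ct, shared = toy_encaps(pk)                 # wrap a fresh data key for this write
blob = xor_cipher(shared, b"tape block contents")
recovered = xor_cipher(toy_decaps(sk, ct), blob)
print(recovered == b"tape block contents")  # True
```

In the real drive the encapsulation and AES-256 run inside the firmware, and Dilithium signatures additionally authenticate the firmware and key material.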
Source: https://www.ibm.com/blogs/research/2019/08/crystals/
In 2010, 25 percent of new worms were specifically designed to spread through USB storage devices connected to computers, according to PandaLabs. These threats can copy themselves to any device capable of storing information, such as cell phones, external hard drives, DVDs, flash memories, and MP3/MP4 players. This distribution technique is highly effective.

With survey responses from more than 10,470 companies across 20 countries, it was revealed that approximately 48 percent of SMBs (with up to 1,000 computers) admit to having been infected by some type of malware over the last year. As further proof, 27 percent confirmed that the source of the infection was a USB device connected to a computer. So far, these infections are still outnumbered by those that spread via email, but it is a growing trend.

"There are now so many devices on the market that can be connected via USB to a computer: digital cameras, cell phones, MP3 or MP4 players," says Luis Corrons, Technical Director of PandaLabs. "This is clearly very convenient for users, but since all these devices have memory cards or internal memory, it is feasible that your cell phone could be carrying a virus without your knowledge."

How does it work? An increasing amount of malware, like the dangerous Conficker worm, spreads via removable devices and drives such as memory sticks, MP3 players, and digital cameras. The basic technique is as follows: Windows uses the Autorun.inf file on these drives or devices to know which action to take whenever they are connected to a computer. This file, stored in the root directory of the device, offers the option to automatically run part of the device's content when it connects to a computer. By modifying Autorun.inf with specific commands, cyber-crooks can make malware stored on the USB drive run automatically when the device connects to a computer, immediately infecting the machine in question.
To prevent this, Panda Security has developed Panda USB Vaccine, a free product which offers a double layer of preventive protection, disabling the AutoRun feature on computers as well as on USB drives and other devices.
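The Autorun.inf mechanism described above can also be checked for defensively. A minimal sketch, assuming you already have the text of a drive's Autorun.inf in hand; the function name and the sample file content are invented for illustration (Autorun.inf uses the standard Windows INI format, so Python's `configparser` can read it):

```python
# Flag an Autorun.inf whose [AutoRun] section tries to launch a program,
# the pattern worms abuse. Sample content below is invented for illustration.
import configparser

def autorun_runs_program(autorun_text: str) -> bool:
    """True if the [AutoRun] section contains an open= or shellexecute= command."""
    cp = configparser.ConfigParser()
    cp.read_string(autorun_text)
    if not cp.has_section("AutoRun"):
        return False
    keys = {k.lower() for k in cp["AutoRun"]}       # configparser lowercases keys
    return bool(keys & {"open", "shellexecute"})

sample = "[AutoRun]\nopen=updater.exe\nicon=updater.exe,0\n"
print(autorun_runs_program(sample))                 # True: launches a program
print(autorun_runs_program("[AutoRun]\nicon=disk.ico\n"))  # False: icon only
```

A benign Autorun.inf that only sets a drive icon or label passes the check; one that launches an executable on insert is exactly what tools like Panda USB Vaccine neutralize.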
Source: https://www.helpnetsecurity.com/2010/08/26/25-of-new-worms-are-designed-to-spread-through-usb-devices/
The recent NCA report shows how easy it is for young people to slip down the path of cybercrime. The barrier to entry in the cybercrime market is at an all-time low, with the tools needed to create new ransomware attacks available for free online. There are videos, tutorials, and blogs all detailing how to make money, usually in the form of anonymous cryptocurrencies such as bitcoin.

The problem is that cybercrime is seen as low risk and high reward, and with the faceless anonymity of a computer terminal it is all too easy for a moral compass to point in the wrong direction. The same teenagers who wouldn't dream of burglary or kidnapping will happily rob and extort victims online. The difference between right and wrong in 2017 can be a single digit in a command; this is a very fine line that is easily crossed.

If you talk to a lot of security professionals, myself included, you will find they were drawn into a career in security after dabbling with ways to break into systems, cheat in computer games, or bypass some restriction. Many of us were fortunate that at the time there was far less connectivity and a lot less awareness, so the impact was limited. However, young people today can easily be the cause of a multimillion-pound data breach with serious legal and financial consequences.

I believe that ultimately this is an education problem, as the industry has simultaneously declared a skills shortage for security professionals and a rise in cybercrime. If the education system were set up to guide young people into information security and explain the pitfalls of computer misuse, we would all be in a better position. As a nation, we are very much behind the curve, having only recently introduced programming and computing as core parts of the curriculum; even now, students are often way ahead of schools and teachers in this respect. The difference between a hacker and a security professional is often who signs the pay check.
The skills and motivations are often essentially the same. There is a lot of young cyber talent out there that we should be harnessing, not fighting against. There will always be some who turn to crime; however, as we have seen with the rise of hacktivism, there is often a huge desire to do more than just commit a crime for personal benefit.

James Maude, Lead Cyber Security Researcher

James Maude is the Lead Cyber Security Researcher at BeyondTrust's Manchester, U.K., office. James has broad experience in security research, conducting in-depth analysis of malware and cyber threats to identify attack vectors and trends in the evolving security landscape. His background in forensic computing and active involvement in the security research community make him an expert voice on cybersecurity. He regularly presents at international events and hosts webinars to discuss threats and defense strategies.
Source: https://www.beyondtrust.com/blog/entry/why-we-should-be-harnessing-young-cyber-talent-not-fighting-it