Print Your Own: The State of Additive Manufacturing

3D printing, or additive manufacturing (AM), looked set to revolutionise how products are created. Today, from hobbyist machines to advanced personalised drugs, AM continues to evolve rapidly, ushering in a new age where the limits of traditional manufacturing are removed.

According to the current report from Sculpteo: “Many industries are beginning to consolidate their use of 3D printing as the role of 3D printing becomes more defined within their manufacturing processes. Users are becoming more mature in their knowledge and application of additive manufacturing: 80% of respondents have used 3D printing for more than two years (+7% vs 2019), and 31% even use it daily.”

There is no doubt that AM is on a trajectory that will change many businesses in the manufacturing industry. AM was once firmly confined to the short-run or prototyping sector; today, major manufacturers continue to integrate AM into their production processes. “The number of companies that have shifted to using AM for full-scale production runs of hundreds of thousands of parts has doubled from 7% in 2019 to 14% in 2020, proving AM has evolved from the prototyping phase,” finds the third annual study from Essentium.

During the Covid-19 pandemic, AM proved it can step in to make supplies at scale – or at least the moulds to make the products – to keep assembly lines moving. Blake Teipel, PhD, CEO and Co-founder, Essentium, Inc., commented to Silicon UK: “One of the most significant breakthroughs we anticipate will be AM’s role in keeping supply chains flowing and factory floors moving in the event of a disruption. 3D printing is an excellent alternative for manufacturers to fill gaps in the supply chain caused by circumstances beyond their control.”

Teipel continued: “We’re at the beginning of radical change.
Additive is ready for prime time, and manufacturers are already moving into actual production to save manufacturing costs while building stronger supply chains that can withstand the worst type of unforeseen events – such as the pandemic.”

Manufacturing is rapidly evolving. AM offers a flexible approach that can deliver massive cost savings and efficiency improvements. “By redesigning existing components into lightweight models, they tend to become more complex in form, leading to the use of manufacturing technologies that can produce complex designs easily: namely 3D printing and additive manufacturing in plastics and metals,” says Grant Cameron, MD, Concurrent Design Group. “Aerospace and healthcare have found the best return on investment for metal printing. As designers become more aware of the benefits of metal 3D printing as a result of being able to design and produce more complex parts, including the use of lattices and other weight-saving geometry, its use will grow rapidly in other industries.”

Inside knowledge

To gain an insight into how additive manufacturing has evolved and what the future could look like, Silicon UK spoke with Rajeev Kulkarni, Vice President, Strategy, 3D Systems.

What is the current state of 3D printing in industry and the home?

“Additive manufacturing (AM) is a manufacturing technology that sits within the manufacturer’s toolbox. It presents a tremendous opportunity to introduce a high level of efficiency into the industrial manufacturing process, with the ability to print metals and plastics, bioprint tissue, jet edible foods, and be applied in concrete-based construction. Beyond improved efficiency, AM also delivers benefits such as material savings, reduced time-to-market, and lowered operational costs. AM will never replace all traditional processes, although designers and manufacturers should integrate it into their workflows where it makes design and economic sense.
“For all the success manufacturers have realised from AM, it’s a relatively ‘young’ manufacturing technology and is still establishing its base. A great deal of R&D effort across the industry is focused on materials, size of prints, accuracy, reliability, and repeatability. While all these are being mastered, the need for standards for materials, processes, and software is still in its infancy. AM’s evolution will continue through these efforts, adding value to many aspects of product design and manufacturing.

“On the flip side, the opportunities envisioned for in-home 3D printing have not materialised. The 3D printers originally developed for in-home use have largely shifted to STEM classrooms, maker spaces, and ‘garage entrepreneurs.’ The challenge for home 3D printing is rooted in the lack of useful applications for the technology and the limitations of simplistic design tools and of the skills needed to create 3D printable data. Several developments are happening on the culinary printing front, although those are focused on high-end kitchens and chefs, not the average ‘at-home chef.’”

Are businesses continuing to see the benefits of using 3D printing across their enterprises? Is AM forming part of their strategic planning?

“Manufacturing is transforming as companies introduce the power of additive manufacturing into workflows built upon traditional technologies. We’ve seen evidence in a 2019 E&Y study that confirms the adoption of additive is ramping, with nearly 75% of companies embracing the technology. That same study indicates that 46% of manufacturing organisations will apply AM for end-use parts by 2022. We believe this increase in adoption results from the influence the technology has on the product development process – from design to production – and from its ability to enable new business models and improve efficiencies.
“The main design value propositions being pursued are weight reduction, assembly count consolidation, increased operational efficiencies with conformal designs, topologically optimised designs, multi-material parts, and adoption of new materials and alloys. The business benefits that manifest due to these design benefits are reduced tooling costs, decentralised manufacturing, mass customisation, reduction in spare parts inventory, cost-effective low-volume production, and reduced time to market.

“And the blend of design and business value makes the planning within an organisation very strategic. It is one that is being simultaneously pushed top-down and bottom-up.”

Which sectors or industries are expanding their use of 3D printing?

“3D Systems, and largely the industry, is focused on developing additive manufacturing solutions to address applications within healthcare, aerospace, automotive and durable goods. These, by far, are the highest growth sectors for AM. The need for custom solutions, improved efficiencies, and new business models are the main drivers that encourage organisations within these sectors to explore and adapt this tool for manufacturing.

“The unique AM solutions designed to address a customer’s application needs involve digitisation of data, part building, customised technology development based on application needs, custom materials, and supporting that with application and business expertise. For example, within the healthcare industry, the development of patient-specific dental prosthetics, aligners, hearing aids, implants, surgical aids, prosthetics, and limb joints are all enabled by additive manufacturing and benefit from the affordable customisation it offers.”

What challenges remain for more wide-scale implementation of 3D printing technologies?
“At the heart of AM is the need to bring together software, materials, imaging systems, optics, thermal systems, motion systems, design file formats, and process expertise to create a clean, crisp, usable, and functional part. Surrounding this core is the need for process and quality monitoring tools, and integration with IoT and AI, as well as the need for finishing or post-processing the printed part.

“And, outside this printing process domain is the need for enhanced design tools that let designers harness the total value of AM through the use of organic shapes, conformal designs, generative designs, multi-material parts, etc. Finally, based on the needs of each vertical – aerospace, healthcare, automotive, etc. – standards are required that govern all these discrete technical elements.

“There are various levels of innovation being brought to market in each of these areas, and that has made the technology very fluid. Currently, within the AM industry, each organisation is pursuing its independent path, and there is minimal work being done to harmonise the standards, terms, and communication channels. As an industry, this is where we need to do the most work, so that customers are not overwhelmed with the need to sort this out by themselves.

“While lower costs might help, the value offered by AM is so highly disruptive that, if implemented correctly, it negates the cost concerns. There are several examples – like jewellery, dental prosthetics, hearing aids, and aligners – where AM has been adopted by the entire industry by carefully understanding how to harness its complete value and thus negating the cost variable.”

Does AM form a critical component for Industry 4.0 to expand and diversify?

“Inherently, AM is a digital technology, and that bodes well for Industry 4.0 as it too relies on digital platforms. 3D printing fits well in this paradigm as it is a practical solution for both product development and conventional manufacturing.
This enables organisations to digitise design and manufacturing into one single process and integrate it into the Industry 4.0 platform that enables a networked factory.

“To enable such a networked, Industry 4.0 factory, we not only need to digitise and integrate the logistics of the manufacturing and delivery process, but process control, customisation, and distributed manufacturing also need to work together. 3D printing enables this digital integration naturally and hence is an important element of Industry 4.0. Additionally, the use of AI, AR, and VR in this mix is also helping the evolution.”

AM is now out of the early adopter stage of its development. The capabilities of AM machines, and the materials they can use, have lifted many of the technical barriers that once existed. AM is now an essential consideration for manufacturers reviewing their processes and supply chains, and one they will increasingly rely upon to help them innovate. Personalised products may be the poster child of 3D printing’s future, but it is in the more mundane revolution in day-to-day manufacturing that the genuine innovations are taking place.
Over the last two months, millions of visitors to mainstream websites have been exposed to a new form of malware embedded in banner pixels. And if you didn’t see it, don’t be surprised: the new malware, “Stegano”, is nearly invisible to the naked eye. Its code has been embedded in the parameters controlling the transparency of the pixels used to display banner ads. Since it’s buried in the alpha channel, even watchful ad networks find it difficult to detect. The malware’s name borrows from the word steganography: the practice of concealing secret messages inside a larger document. The medium is new, but the practice dates back to at least 440BC.

How Does Stegano Work?

Stegano first verifies that the browser isn’t running on a virtual machine or connected to security devices used to detect attacks. Then, it redirects the browser to a site that hosts three Adobe Flash exploits. These exploits are currently unpatched. According to researchers from antivirus provider Eset, the Stegano virus easily outclasses other major exploit kits, such as Angler and Neutrino, in terms of referrals. And if Stegano is providing big returns, more malware based on hidden-pixel attacks is likely to crop up.

“We have observed major domains, including news websites visited by millions of people every day, acting as ‘referrers’ hosting these advertisements,” an Eset representative said. “Upon hitting the advertising slot, the browser will display an ordinary-looking banner to the observer. There is, however, a lot more to it than advertising.”

Two consecutive alpha values represent the tens and ones digits of a character code, encoded as a difference from 255 (full alpha). To make this hard to detect with the naked eye, the difference is minimized using an offset of 32.

What Should I Look Out For?

Currently, the Stegano ads promote applications calling themselves “Browser Defence” and “Broxu”. They also target people who visit news sites using Internet Explorer browsers.
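To make the encoding scheme concrete, here is a short Python sketch. It is an illustrative reconstruction based only on the description above – not Stegano’s actual code – and the function names, the two-digit restriction, and the exact digit-to-alpha mapping are assumptions:

```python
# Hypothetical reconstruction of the alpha-channel scheme described above:
# each decimal digit d of a character code is stored as 255 - (32 + d),
# so the pixel stays within ~10 steps of fully opaque and the change is
# invisible to the naked eye.

FULL_ALPHA = 255
OFFSET = 32

def encode(text: str) -> list[int]:
    """Encode each character as two alpha values (tens digit, ones digit)."""
    alphas = []
    for ch in text:
        code = ord(ch)
        assert code < 100, "this sketch only handles two-digit codes"
        for digit in (code // 10, code % 10):
            alphas.append(FULL_ALPHA - (OFFSET + digit))
    return alphas

def decode(alphas: list[int]) -> str:
    """Recover characters from consecutive pairs of alpha values."""
    chars = []
    for i in range(0, len(alphas), 2):
        tens = FULL_ALPHA - OFFSET - alphas[i]
        ones = FULL_ALPHA - OFFSET - alphas[i + 1]
        chars.append(chr(tens * 10 + ones))
    return "".join(chars)
```

Note how every encoded alpha lands in the 214–223 range – pixels that look fully opaque to a viewer but carry a digit each to a script that knows where to look.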
The script contained in the pixels exploits a now-patched IE vulnerability called CVE-2016-0162 to obtain details about visitors’ computers. The script also checks for the presence of packet capture, sandboxing, virtualization software, and a variety of security products. If none of these exist, it refers the visitor to the exploit site and serves up either Ursnif or Ramnit malware. In short, the attackers have gone to great lengths to make sure security-savvy people aren’t attacked. The Stegano attacks are concentrated in Canada, the UK, Australia, Spain, and Italy.

What’s Ursnif or Ramnit Malware (In Case I Get Infected)?

The Ursnif family is made up of modules designed to steal your e-mail credentials, log your keystrokes, take screenshots and videos, and act as a backdoor into your device. The Ramnit family does essentially the same things as Ursnif; however, it mainly targets the banking industry.

Anti-Phishing Training Won’t Help, So Protect Your Users!

Since Stegano-type attacks are nearly invisible to the naked eye, even a security-savvy user is vulnerable to this sort of threat. That means your anti-phishing training is going to be nearly useless against this sort of malware. And if these exploit kits lead to ransomware demands, your business could be hit for tens of thousands of dollars from a single virus. If you want your business to survive, the best way is to make sure your essential data is protected. With trustworthy backup and disaster recovery software, you can be hit by ransomware and still restore everything from bare metal. BackupAssist is the #1 backup and disaster recovery software for Windows Servers, used by world-famous organizations such as NASA, Cessna, and M.I.T. Check out our free 30-day trial or read more here.
Today a report published by the U.K.’s Information Commissioner’s Office (ICO) highlighted the data protection challenges and privacy risks agencies face when dealing with sensitive personal information. While the ICO’s study was limited to the foster care system, its findings highlight the importance of protecting sensitive personal information across all social services.

Risks and Penalties

- Highly sensitive personal information concerning foster carers and looked-after children is routinely emailed, without encryption, between agencies and local authorities for the purpose of arranging foster care placements. The lack of such safeguards increases the risk that the information could be inappropriately accessed.
- The majority of agencies visited did not encrypt mobile devices used to process, store or transport personal data, including items such as laptops and USB sticks. If lost or stolen, any such devices containing sensitive personal data could be easily accessed.
- Fostering agencies often require carers to provide them with updates about looked-after children, but they do not provide secure methods, such as VPNs, by which to do this. Sensitive personal information is therefore processed on home computers and stored in the ‘cloud’ in ISP or webmail accounts (Hotmail, Gmail etc.).
- Some agencies allow their staff to carry out work involving sensitive personal data on their home computers instead of providing appropriate remote access to their network, an encrypted memory stick or a work-issued encrypted laptop on which to save their work.
- Adequate data protection/information security training is not provided by agencies to their staff.
John-Pierre Lamb, ICO Group Manager in the Good Practice team, said: “The worst breaches of the Data Protection Act can lead to a monetary penalty of up to £500,000, but when you consider the sensitivity of the information this sector is responsible for, the human cost could be far more significant.”

How to Get Compliant

Nowhere is the need for privacy more important than in protecting personal information about children in need and the foster families who help them. Minimizing this threat doesn’t necessarily mean the rigid, cumbersome security measures of the past. A few best practices could prevent a breach of this nature:

- Secure sensitive documents: Using a DRM solution like FileOpen RightsManager allows you to grant permission to only certain users. If one of those users accidentally forwards the document to the wrong person, it can’t be opened or viewed. Moreover, DRM enables you to instantly revoke access from a previously authorized recipient if necessary, or even from one of their devices if misplaced or stolen.
- Enforce usage and retention policies: Advanced DRM solutions allow you to control exactly how a recipient uses your document, adding layers of security to the most sensitive documents. Restrict or expire privileges on a need-to-know basis. Different organizations, and levels within them, can be granted unique sets of permissions on the same document.
- Apply detailed watermarks: As a final measure of security, watermarks can ensure the traceability of sensitive documents by overlaying key information about the user, such as their name, date, time, printer and location. Watermarks can provide the “smoking gun” in determining where and when a document was leaked, and aid authorities in enforcing compliance with privacy regulations.
- Enable secure, but uncomplicated, access: At the core of any failed security initiative is an overly complex, hard-to-use solution.
FileOpen eliminates such hurdles with the FileOpen Viewer, which provides access to protected documents through a Web browser, with no plugins or desktop software required. It also gives Android and iOS users access on their smartphones and tablets through native applications.
People often overestimate how much disk space they need when backing up computers, especially when multiple computers are involved. It is reasonable, of course, to assume that to determine backup capacity you would simply add up how much data (or how much hard drive/SSD capacity) each computer you’re backing up holds. In reality, you probably need much less space than you might assume. Backup and disaster recovery products such as the DataHarbor Network Backup Appliance are designed to deduplicate (“dedupe”) and compress the data they back up from laptops and computers. As a result, taking advantage of these capabilities greatly reduces the amount of backup storage or backup disk space required to safely back up an office full of computers.

Deduplication is a technique in which the backup device saves only one copy of programs and files common to more than one workstation. For example, it’s quite common for most, if not all, computers in an office to be running the same version of an operating system. Or perhaps everyone has the same applications installed on their systems. Several people might have saved copies of the same spreadsheet or other files to their hard drives. The backup device takes note of this and keeps only one copy of the duplicated software and data. If a workstation needs to be restored, the device is able to orchestrate the restoration of all of the applications and data needed to rebuild the workstation to its previous state.

Compression is a computer science technique in which file sizes are reduced by algorithmically encoding files so that they require fewer bits than the original source file. (The Microsoft Windows Storage and Server Essentials software used in the DataHarbor appliance typically compresses files such that they require only half the space of the original.) Both of these techniques greatly reduce the amount of disk space needed, as well as the amount of time needed to back up an office’s computers.
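As a rough illustration of how the two techniques combine, here is a toy content-addressed store in Python. It is not the DataHarbor implementation – the class and method names are invented for this sketch – but it shows why a file shared by every workstation costs the space of a single compressed copy:

```python
import hashlib
import zlib

class DedupStore:
    """Toy backup store: identical file contents are kept once (dedupe),
    and each stored copy is compressed. Illustrative only."""

    def __init__(self):
        self.blocks = {}   # sha256 digest -> compressed bytes (one per unique file)
        self.catalog = {}  # (machine, path) -> digest, so restores can find data

    def backup(self, machine: str, path: str, data: bytes) -> None:
        digest = hashlib.sha256(data).hexdigest()
        if digest not in self.blocks:          # dedupe: store each content once
            self.blocks[digest] = zlib.compress(data)
        self.catalog[(machine, path)] = digest

    def restore(self, machine: str, path: str) -> bytes:
        return zlib.decompress(self.blocks[self.catalog[(machine, path)]])

    def stored_bytes(self) -> int:
        return sum(len(b) for b in self.blocks.values())
```

Backing up the same operating-system file from ten machines adds ten catalog entries but only one compressed block, which is exactly the saving the charts in the next section quantify.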
Examples at work

The charts shown below nicely demonstrate the value of deduplication and compression in reducing the amount of storage needed to back up two different small businesses. In the first instance, the amount of storage needed on the DataHarbor appliance is 35% of the aggregate storage space on the computers being backed up. In the second example, a boutique investment bank required 59% of the total disk space for the five computers it backs up on the DataHarbor appliance. (Note that one of the computer clients being backed up had a lot of personal media files, reducing the impact of deduplication, as well as of compression, since media files are usually already compressed in their original formats. This is one reason that many IT departments opt to set policies such that personal media files will not be backed up by the company’s backup procedures.) So before you end up overspending and overprovisioning your backup storage, think about what data you really need to back up, as well as how likely it is that you can greatly reduce the amount of backup space needed. If you need help, let us know.
Whether you’re a systems administrator or a developer, you’ll soon want to understand how your software works. From a single microservice to a vast, monolithic system, logging, tracing, and monitoring are all ways to help ensure correctness in your system, to track what may have gone wrong when problems arise, and to improve overall functionality. Importantly, logging, tracing, and monitoring aren’t different words for the same process. Each functions in a unique way. Even if some tools or technologies overlap, each process provides a different outcome for your IT environment. Let’s take a look.

What is logging?

The purpose of logging is to track error reporting and related data in a centralized way. Logging should be used in big applications, and it can be put to use in smaller apps, especially if they provide a crucial function. The term logging can refer both to the practice of event logging and to the actual log files that result. Logging is primarily deployed and used by system administrators on the operational level, intentionally providing a high-level view. The most successful log files are not noisy; they shouldn’t contain extraneous or distracting information. Instead, log files should log only what is absolutely necessary, such as actionable items. Of these action-related items, you may have two types of data:

- Data for us humans that alerts or warns of a panic situation (enough to begin the investigation but not an overwhelming amount)
- Structured data for machines (some debate whether this machine-level data is necessary, but security is a good use case)

Consider that logging should tell a compelling story, but as succinctly as possible. When choosing what to log, consider:

- Who is using the logs (typically sysadmins)
- Whether logging helps only with preventative measures or with ongoing pursuits
- Are all system errors equal, or does a warning in a particular area serve as a warning for a critical failure elsewhere?
- Output is often standards-based
- Logs are localized
- New events need not be agile

Logging too much data can be distracting and a poor use of resources. Indeed, transferring, storing, and parsing logs is expensive, so minimizing what the log files contain can minimize cost and resource use. Still, logging is king, especially when it comes to traditional monolithic architectures.

What is tracing?

Where logging provides an overview of discrete, event-triggered records, tracing encompasses a much wider, continuous view of an application. The goal of tracing is to follow a program’s flow and data progression. As such, there is a lot more information at play; tracing can be a much noisier activity than logging – and that’s intentional. In many instances, tracing represents a single user’s journey through an entire app stack. Its purpose isn’t reactive, but instead focused on optimization. By tracing through a stack, developers can identify bottlenecks and focus on improving performance. When a problem does occur, tracing allows you to see how you got there:

- Which function ran
- The function’s duration
- The parameters passed
- How deep into the function the user could get

A common tracing tool is the Profiling API in .NET. In an ideal world, every function would have tracing enabled, but the amount of resulting data can be too much to sort, though cloud technology is certainly helping make tracing a realistic option more of the time. Unlike logging, localization is not a concern, but new messages do need to be agile. Other drawbacks include:

- Complex layers
- An abundance of implementation code
- A push model, a common design, which can affect applications

To illustrate this, tracing libraries that intend to simplify tracing as a practice often wind up being more complicated than the code they are serving. Sometimes, tracing is best for microservices. Because of the data involved, tracing can be an expensive endeavor.
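The data points tracing captures – which function, its parameters, its duration – can be sketched in a few lines of Python rather than the .NET Profiling API. This is a minimal, assumption-laden illustration (the decorator and the in-memory `TRACE_LOG` list are invented for the sketch; a real tracer would ship spans to a backend):

```python
import functools
import time

# In-memory stand-in for a tracing backend.
TRACE_LOG = []

def trace(fn):
    """Record which function ran, the parameters passed, and how long it took."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            return fn(*args, **kwargs)
        finally:
            # Append one trace record per call, even if the function raised.
            TRACE_LOG.append({
                "function": fn.__name__,
                "args": args,
                "kwargs": kwargs,
                "duration_s": time.perf_counter() - start,
            })
    return wrapper

@trace
def lookup_user(user_id):
    return {"id": user_id}
```

Even this toy version hints at the cost problem described above: every call to every traced function produces a record, so the volume of data grows with traffic rather than with the number of noteworthy events.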
Before stepping into tracing, remember that it is not a requirement. You’ll want to consider whether the added complexity is warranted – what value will it bring? You may fall into the trap of optimizing prematurely, or you may be able to scale horizontally and avoid such optimization for a time.

What is monitoring?

While monitoring may be a casual term that can be applied to tracing or logging or a number of other activities, in this context, monitoring is much more specific: instrumenting an application and then collecting, aggregating, and analyzing metrics to improve your understanding of how the system behaves. This type of monitoring is primarily diagnostic – for instance, alerting developers when a system isn’t working as it should. In an ideal world where cost isn’t a problem, you could instrument and monitor all of your services.

Monitoring systems are the best way to begin employing metrics. Such systems handle storage, aggregation, visualization, and even automated responses. These monitoring systems are surprisingly affordable, though they do rely heavily on data. With companies embracing cloud and data, the more data you have, the more beneficial monitoring can be.

Choosing tracing, logging, or monitoring

Certainly, companies don’t have to deploy only one tool, as each process has its own goals and outcomes. Often logging is the first step, held up by many as a requirement. Tracing and monitoring, at least for now, may be beneficial but are not necessities; as you grow and need more functionality, one or both can be useful. It is important to remember, however, that each of the three is not, in and of itself, a solution. Instead, logging, tracing, and monitoring are simply proxy representations of the actual actions software performs. They are not what the software is or does – they are simply tools to understand how a system behaves.
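To make the instrument-collect-aggregate loop of monitoring concrete, here is a toy metrics registry in Python. It is purely illustrative – the class and method names are invented for this sketch, and a real monitoring system (Prometheus, for example) would add storage, visualization, and alerting on top:

```python
from collections import defaultdict
import statistics

class Metrics:
    """Tiny in-process metrics registry: counters for event totals and
    observations (e.g. latencies) that can be aggregated for analysis."""

    def __init__(self):
        self.counters = defaultdict(int)    # metric name -> running total
        self.timings = defaultdict(list)    # metric name -> observed values

    def incr(self, name, by=1):
        """Instrument a code path: count how often something happened."""
        self.counters[name] += by

    def observe(self, name, value):
        """Instrument a code path: record a measured value."""
        self.timings[name].append(value)

    def summary(self, name):
        """Aggregate observations into the numbers a dashboard would show."""
        values = self.timings[name]
        return {"count": len(values),
                "mean": statistics.mean(values),
                "max": max(values)}
```

The split between `incr`/`observe` (instrumentation) and `summary` (aggregation) mirrors the division of labour described above: application code emits metrics, and the monitoring system turns them into something a human or an alerting rule can act on.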
If we’re going to succeed in enabling true Internet of Things (IoT) security, we all need to agree on a few things. Things like IoT SAFE (IoT SIM Applet For Secure End-2-End Communication), a new standard from the GSMA that declares the SIM (Subscriber Identity Module) to be the most secure location within a device from which to process and secure the data exchange from chip to cloud.

Before I explain why IoT SAFE makes so much sense for IoT security, let’s look at what ‘secure location’ means within this context. Today’s IoT devices typically employ any number of isolated and trusted components we call Roots of Trust (RoT). Often proprietary, they’re spread across hardware, firmware and software elements, performing specific critical functions. For manufacturers, establishing these roots of trust will be the first step in ensuring a new device is built to include trustable security. But while each component may be trustworthy in itself, the lack of standardization has resulted in inconsistent methods of provisioning and reduced interoperability across vendors, not to mention uncertainty, among those eager to build IoT devices, over whether proprietary security methods are truly secure. No wonder a recent Arm survey, compiled in conjunction with The Economist Intelligence Unit, found that, among other things, security concerns still constrained respondents’ IoT ambitions.

Standardizing the RoT within a device’s SIM ensures a common mechanism for secure data communications using a highly trusted and time-tested module. It offers a cost-effective mechanism for cloud authentication and end-to-end security, since SIMs are already used for authentication on mobile networks. That makes IoT SAFE a key step towards uniting the industry in realizing the vision of a truly secure IoT, from chip to cloud.
Kigen focuses on eSIM and iSIM solutions with a commitment to IoT security that goes back 15 years, long before we called that global network of diverse devices the ‘Internet of Things’. It’s why we’ve seen the importance of adopting IoT security best practices such as PSA Certified, which offers a security framework and independent assessment program to enable IoT developers to build devices that IoT solution deployers trust to readily secure their data channels from chip to cloud. Kigen sees the value in investing in, utilising and advancing IoT security and technologies.

iSIM, which embeds the SIM within a trusted, tamper-resistant enclave at the heart of the device’s System on Chip (SoC), is the ultimate foundation for a secure IoT SAFE device. IoT SAFE meets the needs of IoT security for all SIM form factors: SIM, eSIM and iSIM. But if we’re looking to maximize IoT security, it makes most sense to bake that RoT directly into the SoC, where it’s integrated into the heart of a device’s capabilities from the off. iSIM takes IoT SAFE further than any other SIM form factor, as its existence in a device can be relied upon. An iSIM’s security already offers industry-recognized levels of protection of network and subscriber credentials, built in from the point of manufacture.

Read More: SIM, eSIM, iSIM: What’s the difference?

And if we need to update that security in future (a somewhat cumbersome and proprietary task until now), IoT SAFE standardizes the delivery and provisioning of over-the-air (OTA) security certificates directly to the most secure place in a device, ensuring that the transfer of information from iSIM chip to cloud can’t be intercepted and modified. It gives device manufacturers the best chance to mitigate potential attacks, and by using a secure, tamper-resistant hardware element to protect credentials, it reduces the risks associated with spoofing or man-in-the-middle attacks when exchanging sensitive data with the IoT service provider’s cloud.
iSIM also takes the concept of simplifying the SIM SKU count further than other form factors. Until now, the SIM or eSIM has been built on a discrete secure microcontroller (MCU), so it makes perfect sense to fold these SIM capabilities into the device’s main SoC, reducing the bill of materials (BOM), optimizing the supply chain and shrinking the physical size of the SoC’s die. That size reduction will be particularly important to IoT SAFE devices that are physically too small (or too well sealed, for example in high-moisture applications) to accommodate SIM or eSIM chips.

And of course, iSIM makes the secure, zero-touch provisioning and ongoing management of these devices, their authentication credentials and connectivity, using remote orchestration platforms, much more seamless and inherently more scalable.

We’ve always believed innovators should be able to choose any device, any data source, any network and any cloud to tailor their solutions. In keeping with this philosophy, the combination of IoT SAFE running on an iSIM allows for self-contained processing and encryption elements to manage security-related workloads for network and cloud authentication in a more integrated yet tamper-resistant way. It enables a vast new range of secure use cases covering a combination of smaller device sizes, ‘baked in’ connectivity and seamless provisioning and lifecycle management.

Learn how Kigen SIM solutions unlock the potential of IoT security and enable new growth in your business.
The coronavirus-induced lockdowns have exposed a greater number of children to the digital world. These include online classes, social media, online gaming, and digital avenues of entertainment. But the internet is also home to nasty content in equal proportion. To make matters worse, the online world is infested with cybercriminals who lure innocent children into sharing private information, sexual abuse, and a host of other criminal activities. Greater digital exposure, therefore, makes children more vulnerable to cyber risks, and there is a pressing need to create a safer internet for children.

Time children spend online is increasing

According to UNICEF, over 175,000 new children go online every day—that's one new child every half second. And the amount of time children spend online is rising. An estimate suggests that COVID19 has resulted in a 50% increase in screen time for children. Whether it is schooling through online collaboration tools, educational and vocational webinars, co-curricular activities, or even competitions, everything is now taking place digitally.

In the absence of outdoor activities, children are turning to digital means for recreation. The most common avenues for entertainment are videos and online games. Children are spending more time on social media platforms to communicate with friends and share their views. They are reading eBooks to keep themselves engaged. All of these activities have translated into significantly more time spent online and a higher risk of cyber exposure.

Online threats are real and growing

The threats that children face in the online world are real. Fraudsters are using all available avenues to exploit the innocence of children. These include online bullying, sexual abuse, exposure to inappropriate content, social engineering, malware, ransomware and malicious links.
These threats can have serious consequences and can even scar the personalities of the affected children for the rest of their lives.

A digital-native generation that is aware of online scams

Children today are aware of the scams and criminal activity plaguing the internet. However, they don't really know how to deal with impostors. To learn what children know about fraud, Arkose Labs commissioned a study entitled ‘How to Get Kids Cyber-Savvy and Make a Safer Internet For All'. We spoke to more than 80 children, aged 6-16 years, located across North America, Asia, and Europe. The study reveals some interesting insights into popular online activities and knowledge of online fraud among children.

It reveals that today's generation is digitally native. More than 54% of children spend four hours or more online every day. 20% spend between two and three hours a day, 6% spend one to two hours, while only 2% spend less than an hour. COVID19 clearly has a role in increasing the time spent online: 95% of children reported spending even more time there due to virtual classes and other activities. As per the study, the most popular online activities that children spend their time on include classes for school (96%), watching videos (89%), video conferencing (75%), surfing the web (71%), gaming (67%), and social media (46%).

When it comes to awareness about scams and online fraud, children know that sharing information related to payments (94%) or social security numbers (93%) is dangerous. Children clearly know that online fraud involves stealing personal details for financial fraud and impersonation. Surprisingly, 7% of the children polled were aware they had been hacked. A majority (81%) denied having been hacked, whereas 12% were not too sure. Additionally, most children wouldn’t be able to recognize that they have been hacked. You can find more statistics and insights from the Arkose Labs study here.
Parental counsel can help create a safer internet for children

There is no doubt that parents can play a key role in building a safer internet for children and protecting them from potential dangers. Involvement with children and their online activities helps parents learn about the websites and apps their children access, so they can counsel their children on how to navigate the digital world safely. Parents must emphasize that children should not purchase anything online without their presence. They must discourage children from clicking random links or opening suspicious emails and teach them about the telltale signals. All of this counseling and awareness can help children be mindful of the potential dangers of the online world and prepare them to navigate it safely. In fact, according to a SuperAwesome survey of parents, 96% said they would like to see more parental controls in the games and services their kids use.

That said, parental control has dipped. This is because parents themselves are obliged to attend to their professional commitments while working from home. As a result, children are mostly left on their own, which can increase their vulnerability to online risks.

Role businesses can play

UNICEF believes that businesses can play a massive role in protecting children in the digital world and help make a safer internet for children. The world body has collaborated with the International Telecommunication Union to develop the Child Online Safety Assessment (COSA) tool, which aims to help companies ensure children have a safe experience while using the internet.

Businesses, too, are coming forward to shoulder their responsibility. From creating tools to running awareness campaigns, there are many ways businesses are helping make a safer internet for children. As a responsible netizen, Arkose Labs plays a vital role in these efforts. We work with many popular digital platforms to keep children safe from fraud and abuse.
Our fraud and abuse prevention solution detects and stops automated new account creation, which is often the starting point for spam and phishing messages. Our long-term approach to fighting fraud makes attack attempts so expensive that cybercriminals lose all financial incentive and abandon the attack.

A safer internet for children is a collective responsibility

Children today belong to a digitally native generation that spends a considerable amount of time on online activities, which means their risk of exposure to cyber threats is also increasing. What children go through in the online world can have lasting effects on their personalities. It is, therefore, necessary for everyone—parents, teachers, schools, society, governments, and businesses—to come together and make a safer internet for children. Arkose Labs reaffirms its support for all activities that aim to make the digital world safer for our children.

To request a copy of the eBook, please download here.
There are over 4 million cybersecurity professionals worldwide. But we need almost twice as many.

The global cybersecurity skills gap narrowed over the past year, from 3.1 million to 2.7 million people, according to the newly-published 2021 (ISC)2 Cybersecurity Workforce Study. It estimates that there are 4.19 million cybersecurity professionals worldwide - 700,000 more than last year. In the US, there are now more than 1.1 million cybersecurity professionals (approx. 880,000 in 2020), and in Europe almost 1.1 million, compared to 830,000 last year.

The 2021 (ISC)2 Cybersecurity Workforce Study collected survey data from a record 4,753 cybersecurity professionals working with small, medium, and large organizations throughout North America, Europe, Latin America (LATAM), and Asia-Pacific (APAC). Roughly one-third of the survey respondents indicated that a shortage of cybersecurity team members has led to real-world impacts, including misconfigured systems, not enough time for risk assessment and management, rushed deployments, and slowly patched critical systems.

Despite some of the prevailing narratives in the media about cybersecurity professionals feeling stressed, unappreciated, and facing overwhelming pressure, (ISC)2 research continues to reveal a highly engaged and satisfied workforce. “Cybersecurity is anything but stable or predictable. Perhaps partly because of its very dynamism and the challenges it presents, many successful cybersecurity professionals overwhelmingly report happiness with their jobs. In fact, those currently in cybersecurity roles have consistently expressed very high levels of job satisfaction over the last four years, and they reported sharply higher satisfaction in the last two,” the research claims.

Job satisfaction numbers are the highest ever reported, with 77% of respondents saying they are satisfied or extremely satisfied with their jobs. That’s a significant boost from 66% in 2019.
The global cybersecurity force is well-educated - 86% have a bachelor’s degree or higher. It is also technically grounded - among those respondents with college degrees, most graduated with diplomas in STEM fields (46% computer science, 18% engineering, 3% mathematics), and some from business fields (8% business, 4% finance, 3% economics).

The research also claims that cybersecurity professionals are ‘strongly compensated.’ Respondents reported an average salary before taxes of $90,900 — up from $83,000 among respondents in 2020 and $69,000 in 2019 — with 31% reporting a median annual salary of $100,000 or more. Salaries and their distributions vary broadly by region. In North America, the average salary is $119,000 before taxes; in Latin America, $32,000; in Europe, $78,000; and in APAC, $61,000. 26% of respondents globally chose not to disclose their salaries.

Among study participants, the field also continues to be predominantly male (76%) and Caucasian (72%) in North America and the UK.
The Secure Shell (SSH) protocol was created in 1995 by a researcher from the University of Helsinki after a password-sniffing attack. SSH is the tool of choice for system admins and is used throughout traditional and virtual datacenter environments to enable secure remote access to Unix, Linux and sometimes Windows systems.

You can think of the SSH key, which enables this remote access, as a “Swiss Army Knife” for IT teams in that it helps administrators and developers authenticate to systems, build authentication into systems and applications and encrypt the resulting traffic between its users and systems. During the authentication process, these SSH keys often establish direct, privileged or root access to a variety of critical systems, effectively turning these cryptographic assets into privileged credentials.

SSH keys are granted the same access as passwords, but when most people think about securing their privileged credentials, they forget about SSH keys. As a result, these keys can easily fall into the wrong hands and, instead of protecting access to important assets, can become “virtual skeleton keys.” To make matters worse, when an attacker gains access to one privileged SSH key, she or he can access every SSH key stored on that machine and spider the entire company network, often gaining access to all company data. As few as five to 20 unique SSH keys can grant access to an entire enterprise through transitive SSH key trust, providing attackers with privileged access to the organization’s most sensitive systems and data.

Four SSH vulnerabilities you should not ignore:

- SSH Key Tracking Troubles. It’s not uncommon for a typical large enterprise with 10,000+ servers to have more than one million SSH keys – making it incredibly difficult, if not impossible, to find and manage each key.
Organizations typically accumulate large numbers of SSH keys because end users can create new SSH keys (credentials) or even duplicate them without oversight, unlike certificates or passwords. Once a large number of SSH keys are built up over time, an organization could, for example, easily lose track of these credentials when development servers are migrated into production environments (assuming the development environment credentials are not scrubbed) or when employees leave the company and their keys are not changed. The result? SSH keys left unaccounted for can provide attackers with long-term privileged access to corporate resources. If attackers gain access to a key that is never revoked or rotated, they could have a permanent network entry point and impersonate the user that the SSH key originally belonged to.

- When it Comes to SSH Keys, Sharing Isn’t Caring. For the sake of efficiency, SSH keys are often shared or replicated across a common group of employees or servers and infrastructure components. As noted above, as a result of SSH key duplication, as few as five to 20 unique keys can grant access to all machines throughout an enterprise. This approach may make IT teams’ jobs easier in the short-term, but it also makes attackers’ lives easier in the long-term. SSH key duplication creates complicated, many-to-many private-public key mappings that significantly reduce security because it is difficult to rotate and revoke a single key without breaking untold other SSH key relationships that share the same key fingerprint. SSH key sharing is also dangerous because it reduces auditability and nonrepudiation.

- Static SSH Keys, Because “Ain’t Nobody Got Time for Rotation!” It’s easy to see how rotating one million plus SSH keys would be a logistical nightmare.
Many IT administrators and security professionals rarely change and re-distribute keys for fear that a critical component or employee may be forgotten (which could mean anything from a simple inconvenience for a single employee to a major company-wide system outage). These factors typically result in a surge of static SSH keys, opening the door for attackers to compromise an unchanged key, use it to move laterally through the enterprise and gain permanent, unauthorized access to sensitive data and assets.

- Embedded SSH Keys – The Ones No One Wants to Mess With. SSH keys are frequently embedded within applications or scripts. Administrators are often fearful of changing them as they do not understand the code the keys are embedded in, or are strongly discouraged from rotating them because of the level of coordination required to prevent system outages. As a result, static SSH keys embedded in applications, code and scripts can lead to persistent backdoors for attackers.

SSH keys can present a tremendous opportunity for hackers to gain privileged access to networks, stay connected, impersonate legitimate users, hide their activity with encryption and move freely. To learn about SSH key challenges and best practices for mitigating associated risks while improving your overall security posture, visit our website. And to learn more about our comprehensive secrets management tool to help secure SSH keys, credentials and other secrets used by applications and machines, check out this blog post.

Editor’s Note: This article has been updated. It was originally published April, 2016.
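The key duplication and tracking problems described above can be surfaced with a basic audit: fingerprint every public key found in authorized_keys files across the fleet and flag any fingerprint that grants access in more than one place. A minimal sketch in Python; the inventory data is illustrative, and a real audit would collect key blobs from live hosts rather than a hard-coded dictionary:

```python
import base64
import hashlib
from collections import defaultdict

def fingerprint(pubkey_b64: str) -> str:
    """SHA256 fingerprint of a base64-encoded public key blob,
    in the same style OpenSSH prints (SHA256:...)."""
    blob = base64.b64decode(pubkey_b64)
    digest = hashlib.sha256(blob).digest()
    return "SHA256:" + base64.b64encode(digest).rstrip(b"=").decode()

# Illustrative inventory: (host, user) -> key blob found in authorized_keys.
inventory = {
    ("web01", "deploy"): "a2V5LW9uZQ==",
    ("web02", "deploy"): "a2V5LW9uZQ==",   # same key reused -> shared trust
    ("db01", "admin"):   "a2V5LXR3bw==",
}

# Group every location by the fingerprint of the key that unlocks it.
seen = defaultdict(list)
for location, key in inventory.items():
    seen[fingerprint(key)].append(location)

# Any fingerprint appearing more than once is a shared key to investigate.
shared = {fp: locs for fp, locs in seen.items() if len(locs) > 1}
for fp, locs in shared.items():
    print(f"{fp} grants access at {len(locs)} locations: {locs}")
```

OpenSSH's own `ssh-keygen -lf` prints the same SHA256-style fingerprints, so the output of a script like this can be cross-checked by hand.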
The battle for processor speed supremacy is only part of the equation in a successful computer chip, but the Alpha line — passed on from Digital Equipment to Compaq to HP — was built on the premise that speed was the key. Now, after a dozen years, the Alpha line — which many credit for sparking a speed war among today’s giants Intel and AMD — is being phased out, leaving a legacy on today’s computing systems and living on in the form of AMD’s Athlon XP processor.

“It was certainly among the fastest processors that have ever been produced,” Mercury Research president Dean McCarron told TechNewsWorld. “Simply by that, it created competitor interest to have fast processors as well.”

Good Speed, Bad Timing

The Alpha line of processors was used primarily in high-performance settings that required faster processor speed than was available at the time of its debut more than 10 years ago. The Alpha also had the support of Microsoft, which shipped a version of Windows NT for the Alpha in 1999. However, the Alpha pushed speed at perhaps the wrong time, when the PC was emerging as the preferred platform for successful processors.

“It had a profound impact on computer architecture,” McCarron said. “There were certain characteristics of the market that were working against it. The market greatly prefers backward compatibility with previous platforms. Eventually, [chipmakers] realized that they needed to address the PC market.”

Too Fast for Own Good

While the Alpha was the first processor to break the 1-GHz mark, it suffered from a lack of applications and support, including support from Microsoft. By the time Alpha backers realized the need to look beyond clock speed and consider overall, real-world performance, Intel and AMD had already zoned in on the idea to provide better processors. “The net effect of it was that the performance alone is not enough to buy you a market,” McCarron said.
The industry analyst compared the path of the RISC-architecture Alpha to the MIPS architecture, which started in similar fashion aimed at the high-end performance environment, but diverged when MIPS processors found a market in the embedded space.

While it might not have lived up to expectations when it was breaking speed barriers, Alpha will live on in HP’s reported sales through 2006 and support through 2011. McCarron said the Alpha EV6 bus design actually established the design that is used in today’s Athlon XP processor from AMD. Alpha also is credited by some with the development of the high-speed interconnect technology known as HyperTransport, used in AMD’s successful Opteron line of chips, and the multithreading technology known as HyperThreading, used by Intel.

At HP World in Chicago this week, HP announced several improvements to server products, including AlphaServer systems with new, faster processors and price reductions.
China's capital Beijing issued its second smog red alert of the month on Friday, triggering vehicle restrictions and forcing schools to close. A wave of smog is due to settle over the city of 22.5 million from Saturday to Tuesday. Levels of PM2.5, the smallest and deadliest airborne particles, are set to top 500, according to the official Beijing government website. That is more than 20 times the level that is considered safe by the World Health Organization.

Half the city's cars will be forced off the road on any given day, while barbecue grills and other outdoor smoke sources will be banned and factory production restricted. Schools will close and residents will be advised to avoid outdoor activities.

On Friday afternoon, the air was relatively good, with a PM2.5 reading of about 80 and the sun shining brightly over the city. However, visibility in some parts of Beijing will fall to less than 500 meters (1,600 feet) on Tuesday when the smog will be at its worst, the city government website said. An almost complete lack of wind would contribute to the smog's lingering over the city, it said.

Smog red alerts are triggered when levels of PM2.5 above 300 are forecast to last for more than 72 hours. Although the four-tier smog warning system was launched two years ago, Beijing had not issued a red alert until last week, drawing accusations that it was ignoring serious bouts of smog to avoid the economic costs.

Some residents have defied the odd-even license plate number traffic restrictions and complained about the need to stay home from work to accompany housebound children. Others have used the break from school to travel to places where the air is better, while many who stay wear air-filtering face masks and run air purifiers in their homes.

Scientific studies attribute 1.4 million premature deaths per year to China's smog, or almost 4,000 per day.
Most of the pollution is blamed on coal-fired power plants, along with vehicle emissions, building construction and factory work resulting from three decades of headlong economic expansion. While Beijing's smog gets the most attention, the scourge strikes much of northern China on a regular basis, sometimes forcing the closure of highways because of poor visibility. The world's biggest carbon emitter, China plans to reduce hazardous emissions from coal-fired power plants by 50 percent over the next five years, and says its overall emissions will peak by about 2030 before starting to decline. China still depends on coal for more than 60 percent of its electricity but is in the process of shifting to nuclear, solar and wind power.
Japan faces a unique power delivery challenge because of its two entirely incompatible power grids. The odd system is a legacy from the 19th century, when local providers in Osaka used 60Hz generators, while German equipment purchased in Tokyo worked on a frequency of 50Hz. Here, Nick Boughton, sales manager at systems integrator Boulting Technology, explains how the timeline of power grid modernisation, including the convergence of disparate systems, has led to the evolution of the smart grid.

By the early 20th century, local grids worldwide were growing, driven by the demands of the industrial revolution. Having become very large, mature and highly connected by the 1960s, power grids could be metered on a per-user basis, allowing appropriate billing according to the varying consumption of different users. However, limited data collection and processing capability meant fixed-tariff arrangements were common.

Alongside the less-than-ideal billing options, the growing demand for power sometimes outstripped supply, particularly at peak times, and power quality suffered. Between the 1970s and 1990s, events such as blackouts, power cuts and brownouts, where voltage is dropped for minutes or hours, were not uncommon in many developed countries.

More recently, from the turn of the century, technology has advanced to a stage where many of these limitations have been overcome. Peak power prices no longer need to be averaged out and passed on to domestic and commercial customers equally. However, new challenges, including the instability of renewable power, have also become apparent.

Concerns over environmental damage from fossil-fired power stations and a reluctance to take up nuclear power have resulted in the use of renewable energy technologies on a large scale. According to REN21’s Global Status Report, 19.3% of global final energy consumption was provided by renewable energy, with modern renewables increasing their share to approximately 10.2%.
Renewable energy capacity grew through the use of solar photovoltaic cells, while hydropower continued to represent the majority of generation. Renewable energy is key to fighting climate change, but it does produce highly variable power, which could lead to lower energy margins and potentially even blackouts on cloudy, still days. These risks, combined with a need for a highly distributed grid with power generated and consumed throughout, have led to the development of smart grids.

The first step in a smart grid upgrade is to improve infrastructure, to produce what China has coined a Strong Grid. Next is the addition of the digital layer, making the grid smart, followed by business process transformation, which is necessary to capitalise on the investment. Nowadays, much of this work is grouped as smart grid upgrades.

The smart grid is the end goal to take advantage of the full suite of features available for power grids. These include state estimation technology, which improves fault detection and allows self-healing, and multiple power routes that improve reliability, resilience and flexibility.

Modern smart grids can also handle two-directional energy flow, pushing further toward the goal of distributed generation. This is achieved by allowing power from photovoltaic cells, fuel cells and charge from the batteries of electric cars to reverse flow. Two-directional flow increases safety while reducing reliability issues in an intelligent manner. Algorithms can use data fed back to the system to predict how many standby generators will be needed to cope with rapid increases in grid load. This promotes load reduction that can eliminate stability issues.

Smart grids are a natural evolution of the power grid for most countries and an obvious choice for developing countries investing in power infrastructure or upgrading cities to smart cities.
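The idea that "algorithms can use data fed back to the system to predict how many standby generators will be needed" can be sketched as a simple forecast plus a ceiling division over generator capacity. This is a toy model with made-up numbers, not a utility-grade dispatch algorithm:

```python
import math

def standby_generators_needed(recent_load_mw, renewable_forecast_mw,
                              generator_capacity_mw=50, reserve_margin=0.1):
    """Toy sizing rule: cover the shortfall between expected demand
    (a moving average of recent load, plus a fixed reserve margin)
    and forecast renewable output."""
    expected_demand = sum(recent_load_mw) / len(recent_load_mw)
    shortfall = expected_demand * (1 + reserve_margin) - renewable_forecast_mw
    if shortfall <= 0:
        return 0          # renewables cover demand; no standby units needed
    return math.ceil(shortfall / generator_capacity_mw)

# A still, cloudy day: renewables cover little of the expected demand.
print(standby_generators_needed([480, 510, 495], renewable_forecast_mw=120))
```

Real grid operators use far richer inputs (weather models, demand history, unit ramp rates), but the structure is the same: forecast the gap, then commit enough reserve capacity to close it.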
The benefits have brought about more stable power quality for commercial properties, manufacturers and other industries alike. Smart grids effectively eliminate or account for many power quality and reliability issues. Despite the many advantages of a smart grid upgrade, however, Japan’s separate grids might require more work before becoming compatible.

The author of this blog is Nick Boughton, sales manager at systems integrator, Boulting Technology
One of the most common calls I answer from Network Administrators is: “Why are we getting so much email spam?” When I look at the examples they provide, many times I see that the email message is actually ham, not spam. The following is a brief article to help you identify the difference between spam and ham and what to do about them.

So, what is Spam?

Wikipedia describes spam as “the use of electronic messaging systems to send unsolicited bulk messages, especially advertising, indiscriminately.” This is a commonly accepted definition in the industry, though there are variations in how governments define and regulate spam. For example, in the United States, any commercial email has to comply with spam regulations, even if the email is not sent in bulk.

The key word here is unsolicited. This means that you did not ask for messages from this source. So if you didn’t ask for the mail it must be spam, right? That is true; however, quite often people don’t realize that they are signing up for mailers when they download free software, sign up for a new service, or even update existing software. The best way to deal with spam is to forward the message to the system administrator.

In 2003 the CAN-SPAM Act was made law. This act defines the rules for advertisers and bulk mailers to follow.
In order to legally send bulk mail and advertisements, they are required to adhere to the following guidelines:

- The header of the commercial email (indicating the sending source, destination and routing information) doesn't contain materially false or materially misleading information;
- The subject line doesn't contain deceptive information;
- The email provides “clear and conspicuous” identification that it is an advertisement or solicitation;
- The email includes some type of return email address, which can be used to indicate that the recipient no longer wishes to receive spam email from the sender (i.e. to “opt-out”);
- The email contains “clear and conspicuous” notice of the opportunity to opt-out of receiving future emails from the sender;
- The email has not been sent within 10 days after the sender received notice that the recipient no longer wishes to receive email from the sender (i.e. has “opted-out”);
- The email contains a valid, physical postal address for the sender.

So what is Ham?

The term ‘ham’ was originally coined by SpamBayes sometime around 2001 and is currently defined and understood to be “E-mail that is generally desired and isn't considered spam.” Desired? You may be saying to yourself, “I do not desire this mail. How is this ham, and why am I getting it?” The answer is that you requested it. There are two ways you could have signed up for this email.

- Directly – While downloading free software such as a browser or a game, or signing up for a new online service, you were required to check the box agreeing to the Terms of Service (TOS). Below or above the TOS were other checkboxes. One said “Yes! I would like to receive information and offers from you and your partners.” If you checked this box, then legally you asked for this email.
- Indirectly – This is the same scenario as directly signing up, except the box for the information and offers is pre-checked, leaving it to you to uncheck the box if you do not want to be on their mailing lists.

Either way, once you are on a bulk mail list they can legally send you the offers (and rarely any information worth anything) as long as they follow the regulations. The good news is that if they follow the rules, it is easy to stop these emails: all you have to do is simply click to unsubscribe and the mail stops. That is, if they follow the rules. Malicious spammers especially will take advantage of this and offer the same format at the bottom of their emails, linking the unsubscribe link to malicious downloads and/or tracking cookies, etc.

How am I supposed to know the difference?

Here are a few simple things to look for when evaluating the legitimacy of a message:

- Check who the email is from. An email address has two parts:
  - The username - the part before the “@” sign
  - The domain - the part after the “@” sign
- Mouse over the “unsubscribe” link or button:
  - If the end of the address (.com, .org, .net, .gov, etc.) is the same as the from domain name, it is a good bet that this is legit ham and it is OK to click to unsubscribe.
  - If the end of the address does not match the from domain, don’t click it! This still may be badly formatted legit mail, but why take a chance? Instead, forward the mail to postmaster@ (your domain); they will know what to do with it.
- Another option, especially if the mailer appears to be from a legitimate retailer, is to contact them directly and let them know you will not shop with them as long as they use such tactics to advertise. Most retailers do not knowingly employ spam as a means of advertising; however, quite often we receive spam from them.
The reason is that retailers hire marketing firms, which may in turn employ other firms, some of which for economic reasons hire less scrupulous bulk mailing operations. Letting a retailer know about this allows them to review whom they do business with. Disguising malicious links as unsubscribe links is also prohibited by the CAN-SPAM Act.

Our Threat Spotlight series has detailed analyses and images of recent attacks.

Frank Dreamer is a Lead Support Engineer with Barracuda Networks.
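The domain-comparison heuristic above can be sketched in code. This is an illustrative example, not a Barracuda tool; the `registered_domain` helper is a deliberately naive stand-in that treats the last two DNS labels as the registered domain, so it misfires on suffixes like .co.uk:

```python
from email.utils import parseaddr
from urllib.parse import urlparse

def registered_domain(host):
    """Naive registered-domain extraction: last two DNS labels."""
    labels = host.lower().rstrip(".").split(".")
    return ".".join(labels[-2:])

def unsubscribe_link_matches_sender(from_header, unsubscribe_url):
    """True if the unsubscribe link's domain matches the From: domain."""
    _, addr = parseaddr(from_header)  # "Shop <deals@mail.shop.example>" -> address only
    sender_domain = registered_domain(addr.split("@")[-1])
    link_domain = registered_domain(urlparse(unsubscribe_url).hostname or "")
    return sender_domain == link_domain

# Hypothetical addresses for illustration:
unsubscribe_link_matches_sender("Deals <deals@mail.shop.example>",
                                "https://news.shop.example/unsub")   # matches
unsubscribe_link_matches_sender("Deals <deals@mail.shop.example>",
                                "http://track.evil.example/u")       # does not match
```

A real mail filter would use a public-suffix list for the domain comparison, but the principle is the same as the mouse-over check described above.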
Password protection is the most common login and data protection tool today. There aren't many apps, websites or ecommerce sites that don't require you to use password protection. It's often the chosen method of data protection for businesses because of its low cost and simple nature. When compared to today's more tech-savvy data protection tools, like facial and fingerprint recognition, it's the vastly cheaper option. The simplicity of password protection, along with the many methods of password hacking, is why hackers try their hand at breaking through password protection and accessing personal data. A simple password (for some of us, the numbers 123456, or the name of our dog) is the only thing preventing a skilled cyber hacker from accessing our personal and financial information. That being said, it is important for us to do what we can to mitigate our chances of being hacked, and to inform ourselves of the methods hackers use to steal data. In this post, we will look at some of the most common methods of password hacking, and what you can do to prevent them from happening to you.

1. Phishing

Here, hackers disguise themselves as people you know, such as family members, friends, or co-workers. They typically use email or text messages to convince you they are someone you know. These emails and texts will include a link or attachment that will either download malware to your device or prompt you to fill out a form with your personal information. To avoid this type of password hacking, be cautious when opening emails or links that seem suspicious.

2. Brute Force

Brute force, as a method of password hacking, is essentially when a hacker guesses at your password with a large number of combinations to figure it out. Hackers often use software to help them generate possible passwords, which allows them to try every possible combination of letters, numbers and symbols.
To decrease the chances of being hacked via brute force, choose lengthy passwords that use "passphrases". Passphrases use numbers or symbols in place of letters in a word and include spacing in between words. For example: Get coFF3 on mOnd0y.

3. Malware

Malware refers to a malicious software program that is downloaded to a device and used to steal data and information. Malware is one of the most frequently used methods of cyberattack and password hacking. Malware most commonly comes from phishing emails and texts, but can also come in the form of online advertisements containing malware or websites with pop-up ads and "free download" links.

4. Keylogging

Keylogging is a form of malware that a hacker downloads to a target's device to record the device's keystrokes, allowing the hacker to capture passwords. You can prevent this by installing security software on your devices that will detect keylogging malware. Also, remember not to click on any suspicious links that may be sent to you via email or text, as they could download keylogging malware to your device.

5. Third Party

A third-party attack refers to when a hacker gains access to your passwords and information through an outside source. For a company, this may be a partner, vendor or anyone your company works with. Hackers will often target MSPs, as they likely have access to their customers' systems. To prevent a third-party password attack on your business, use a password management system to help you manage your passwords and access to information. You can learn more about password management systems here: Why Every MSP Needs a Password Manager (passportalmsp.com).

6. Credential Stuffing

Credential stuffing, or list cleaning, is when hackers test lists of stolen credentials or passwords against multiple websites and accounts to see if there's a match. Many of us often use the same passwords for multiple accounts, and hackers know this.
You can prevent credential stuffing by simply having a unique password for all of your accounts and logins. This way, if your password gets stolen in one place, your information on all your other accounts isn't compromised as well.

7. Guess Work (shoulder surfing, local discovery, password spraying)

Guess work, often referred to as "password spraying", is just what it sounds like: a hacker uses commonly used passwords, or the information they know about you, to guess what your password is. To prevent a hacker from succeeding in their guess work, make sure that your passwords aren't commonly used passwords. Here is a list of the top 200 most common passwords to compare against your own.

Even with innovations like facial and fingerprint recognition, the least expensive and most commonly used data protection tool is the password, and it's not going away anytime soon. Begin to protect your sensitive information by being intentional with the passwords you create, only sharing them when necessary, and being cautious of the links, websites and ads you click on.
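The brute-force and passphrase advice above comes down to search-space size: every extra character multiplies the number of candidates an attacker must try. This quick sketch (not from the original post) compares an 8-character password with the 19-character passphrase example:

```python
def search_space(charset_size: int, length: int) -> int:
    """Number of candidate strings a brute-force attack must cover."""
    return charset_size ** length

# 8 characters drawn from the ~94 printable ASCII symbols
short = search_space(94, 8)
# "Get coFF3 on mOnd0y" is 19 characters from the same charset
passphrase = search_space(94, 19)

print(f"8-char password:    {short:.3e} candidates")
print(f"19-char passphrase: {passphrase:.3e} candidates")
```

The passphrase's search space is larger by a factor of 94^11, which is why length beats complexity tricks alone.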
Big, expensive, and the opposite of aesthetically pleasing are what usually pops into a techie's head whenever he or she hears the word Lidar, which stands for light detection and ranging. Due to its rather hefty price tag, Lidar's wondrous abilities are utilized only by experimental cars. The US Defense Advanced Research Projects Agency (DARPA) says that things are about to change.

Enter SWEEPER, which is basically a Lidar-on-a-chip system. SWEEPER is an acronym for Short-range Wide-field-of-view Extremely agile Electronically steered Photonic EmitteR. Unlike its bulky predecessor, SWEEPER helps self-driving cars "see" the road through an electronic beam transmitted by many small emitters. These emitters are each equipped with their own unique signals and form a beam that can sweep from one end to another and repeat the process 100,000 times per second. That's fast; SWEEPER is 10,000 times faster than the "fastest" laser roof ornaments. While the system only covers a 51-degree field of view, it's more than enough, considering that it will still help stitch together a perfect panorama of the surroundings. In fact, for a scanner as small as SWEEPER, it's the widest field of view ever achieved.

What does this mean for all of us? As DARPA's project manager, Josh Conway, said in a statement: "This wide-angle demonstration of optical phased array technology could lead to greatly enhanced capabilities for numerous military and commercial technologies, including autonomous vehicles, robotics, sensors, and high-data-rate communications." Through SWEEPER, self-driving military vehicles will be able to navigate and survey various terrains, thereby impacting how these vehicles steer themselves and improving targeting capabilities. The biggest draw of SWEEPER is its price. Although DARPA hasn't yet announced just how much it would actually cost, SWEEPER is expected to cost significantly less than its big brother.
This cost-effectiveness could open the doors to mass production, allowing SWEEPER to be utilized not just by high-grade military vehicles, but also by cars, copters, and other commercial applications.
Spear phishing is an email targeted at a specific individual or department within an organization that appears to be from a trusted source. It's actually cybercriminals attempting to steal confidential information. In ordinary bulk phishing, by contrast, the same email is sent to millions of users with a request to fill in personal details, which the phishers then use for their illegal activities. Most of the messages have an urgent tone, requiring the user to enter credentials to update account information, change details, or verify accounts.

Vishing
- Vishing is the phone version of email phishing and uses automated voice messages to steal confidential information.
- These attacks try to trick an employee into giving out confidential information via a phone call.
- Vishing attacks use a spoofed caller ID, which can make the attack look like it comes from either a known number or perhaps an 800 number that might cause the employee to pick up the phone.
- Vishing often uses VoIP technology to make the calls.
- Vishing attacks can be focused on all employees, or on employees that mainly deal with people outside the organization. Departments like the help desk, PR, sales, and HR are good to include in vishing security tests.

Smishing
Smishing is phishing conducted via Short Message Service (SMS), a telephone-based text messaging service. A smishing text, for example, attempts to entice a victim into revealing personal information via a link that leads to a phishing website.
Maxtrain.com - [email protected] - 513-322-8888 - 866-595-6863

The Cost Management course addresses the identification, elaboration, planning, and management of the project budget. Including selected processes from the PMI Integration, Cost, Scope and Risk Knowledge Areas, this class addresses the development of a Project Cost Estimate, Project Budget, and the Project Budget Baseline. In addition, it addresses the preparation of a spending profile that supports variance analysis and corrective action using Earned Value Management. Using a combination of theory-based lecture and hands-on exercises, students are provided with an effective skill set for developing and controlling the project budget baseline.

Course Outline:
- Lesson 1: Strategic Course Goals and Objectives
- Lesson 2: Introduction to Cost Management Processes
- Lesson 3: Introduction to Cost Management Processes (continued)
- Lesson 4: Plan Cost Management
- Lesson 5: Estimate Costs
- Lesson 6: Determine Budget
- Lesson 7: Control Costs
- Lesson 8: Earned Value Management
- Lesson 9: Project Closure

This course is intended for both project team members and project managers wishing to gain a fluent working knowledge of commonly accepted best practices for planning and management of the project cost baseline. Team members and managers looking to improve their project cost estimating and cost budgeting skills, and to improve their understanding of how to deliver their projects within budget, should take this course. Students on a track to take the PMP examination should take this course.

Duration: 2 Days
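Earned Value Management, covered in Lesson 8, reduces to a handful of standard PMI formulas. A minimal sketch, with purely illustrative project figures:

```python
def earned_value_metrics(bac, pct_complete, pv, ac):
    """Standard Earned Value Management metrics (per the PMBOK Guide).

    bac          -- Budget at Completion
    pct_complete -- physical percent complete, 0..1
    pv           -- Planned Value to date
    ac           -- Actual Cost to date
    """
    ev = pct_complete * bac   # Earned Value
    cv = ev - ac              # Cost Variance (negative = over budget)
    sv = ev - pv              # Schedule Variance (negative = behind schedule)
    cpi = ev / ac             # Cost Performance Index
    spi = ev / pv             # Schedule Performance Index
    eac = bac / cpi           # Estimate at Completion (typical CPI-based forecast)
    return {"EV": ev, "CV": cv, "SV": sv, "CPI": cpi, "SPI": spi, "EAC": eac}

# A project budgeted at $100,000, 40% done but planned to be 50% done, with $45,000 spent:
m = earned_value_metrics(bac=100_000, pct_complete=0.40, pv=50_000, ac=45_000)
```

Here EV is $40,000, so CV is -$5,000 and SPI is 0.8: the project is both over budget and behind schedule, and the CPI-based forecast puts the final cost at $112,500.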
Buyer-Seller Collusion

What is Buyer-Seller Collusion?

Buyer-Seller Collusion is a specific example of a broader fraud category known as collusion fraud. Collusion fraud occurs anytime two entities within an organization or defined ecosystem conspire to commit fraudulent actions. Collusion commonly occurs in the workplace, in the insurance sector, and in markets where two or more entities agree to work together to fix prices or stifle competition. In buyer-seller collusion, the colluding parties form the two sides of a retail transaction—merchant and customer.

What Should Companies Know About Buyer-Seller Collusion?

Buyer-seller collusion is a strategy fraudsters often employ to make the movement of large amounts of money seem normal, and accordingly to make it easier to avoid detection. A typical example of this kind of fraud occurs when bad actors create fake user accounts and then use them to make purchases with stolen credit card credentials. These so-called "purchases" are transacted with fake seller accounts that have been set up expressly to enable fake transactions. No goods ever actually change hands—in fact, they don't exist in the first place—but once the "seller" receives the funds from the stolen credit card, those funds are then cashed out.
At surface level, there is nothing suspicious about these transactions; the collusion can only be detected through holistic analysis that can expose the coordination and connections between all the moving parts of the fraud.

Research from DataVisor has previously identified three additional motivations for buyer-seller collusion:

- To claim promotion money: E-commerce platforms often offer promotions to incentivize buyer and seller activity. These funds can be illicitly claimed by fraudsters who masquerade as legitimate buyers and sellers.
- To artificially establish seller reputations: As a new or small seller, it can be challenging to build a positive reputation, but without one, it's hard to get buyers. Sometimes sellers will rely on fake buyers to artificially inflate their reputations. In many cases, both the buyer and the seller are bad actors, trafficking in stolen or fake goods.
- To launder money: Moving money between fake buyers and sellers via fraudulent transactions can be an effective way to obscure the illicit origin of the funds.

How to Prevent Buyer-Seller Collusion

Modern fraudsters can create and post massive amounts of content quickly, and adjust their techniques with equal rapidity. They are able to create vast hordes of fake accounts and use them to disseminate malicious content in ways that are increasingly difficult to detect. Fraud solutions that rely on manually created features, rules, and blacklists to try to keep pace with rapidly evolving abuse techniques should not be considered viable.

DataVisor recently addressed large-scale buyer-seller collusion for a client—a global online food ordering and delivery platform with 100 million+ monthly active users and services in 20+ countries. The business was rapidly expanding, and encountering unique and evolving fraud problems in each new region, including buyer-seller collusion.
The collusion in question involved three colluding entities—merchants, customers, and delivery riders. Fake “customers” were making thousands of fake orders online, fake “merchants” were pretending to prepare food, and fake “riders” were reporting as if they had delivered the food. However, no actual food delivery ever took place. What did happen is that all of the involved parties received a vast number of subsidies from the platform. Ending the buyer-seller collusion fraud involved uncovering an extensive collusion network that contained over 50 customers, merchants, and riders, all of whom conspired to place over 200 fake orders, and to fraudulently claim subsidies from the client. DataVisor’s solutions were able to detect that all of the “customers” shared similar email patterns, device models, and GPS coordinates, and holistic analysis was able to affirm that all the suspicious orders originated only from these “customers.” In addition to these giveaways, DataVisor’s systems revealed that the intervals between order times and order deliveries were consistently between 5 and 10 minutes—a range not actually possible. By holistically analyzing activities, accounts, addresses, digital fingerprints, and more, to expose suspicious patterns and coordinated activities, DataVisor was able to end the buyer-seller collusion that had been plaguing the client.
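The giveaways in the case above (shared device models and GPS coordinates, plus physically impossible 5-to-10-minute order-to-delivery intervals) can be illustrated with a toy grouping sketch. This is not DataVisor's actual system; the field names, sample data, and threshold are invented for illustration:

```python
from collections import defaultdict

# Toy orders: (customer_id, device_model, gps, minutes from order to "delivery")
orders = [
    ("c1", "PhoneX", (1.30, 103.85), 7),
    ("c2", "PhoneX", (1.30, 103.85), 6),
    ("c3", "PhoneX", (1.30, 103.85), 9),
    ("c9", "PhoneY", (1.35, 103.90), 38),  # a plausible, unrelated order
]

MIN_PLAUSIBLE_DELIVERY_MIN = 15  # invented threshold: faster than this is suspicious

def suspicious_clusters(orders):
    """Group orders by shared (device, GPS) fingerprint, then flag groups
    in which every delivery interval is implausibly short."""
    groups = defaultdict(list)
    for cust, device, gps, mins in orders:
        groups[(device, gps)].append((cust, mins))
    flagged = {}
    for key, members in groups.items():
        if len(members) > 1 and all(m < MIN_PLAUSIBLE_DELIVERY_MIN for _, m in members):
            flagged[key] = [c for c, _ in members]
    return flagged
```

Running this flags the PhoneX cluster (c1, c2, c3) while the lone PhoneY order passes; a production system would of course correlate far more signals across customers, merchants, and riders at once.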
Firewalls for SMB

Protecting Your Small Business: Why Cybersecurity Matters.

Attacks come from both inside and outside of your organization, so a comprehensive approach to security addresses each area and plans for business continuity and data recovery. In this article, we'll look at one of the tools for preventing external security threats.

A firewall is the first line of defense for your network and computers. Set up correctly, it screens out malicious activity before it reaches your computers. The primary purpose of a firewall is intrusion prevention: shutting out hackers trying to access your network (a capability often packaged as an intrusion prevention system, or IPS). Firewalls are set up with rules that help them identify what application activity is expected and permitted, and this starts with identifying what "ports" are open. If an application or hacker attempts to use a port that isn't permitted, they are prevented from accessing the network. Firewalls come in software and hardware versions, but to protect your business you need a hardware-based firewall. Software firewalls react to threats that are already on the network or that are attempting to exploit specific applications.

Is My Firewall Secure?

Do any of the following describe your network?
- Using an "out-of-the-box" consumer-grade modem or router provided by your internet provider or purchased from a store
- Uncertain if there is a firewall or what protections are in place
- Using default firewall administrator passwords
- Using no password protection to connect to the network, or using older forms of network authentication security such as WEP or WPA that are susceptible to brute force attacks
- Have not established separate "guest" networks to segment your critical business functions from visitors' computers and any possible threats they bring with them
- No controls for preventing employees from accessing risky websites and content
- IP addresses are not locked down
- No reporting or analytics on network traffic through the firewall

How Do I Secure My Firewall?

- First, determine if you are using a consumer modem, router or firewall. Consumer-grade modems are priced attractively, fast, and often come with a basic firewall capability. Consumer-grade modems and routers are targeted for home use and optimized for activities such as media streaming, but do not have the level of security necessary to protect your business. Business-grade firewalls provide enhanced security, reporting, and control over who accesses your network and what they are allowed to do. Additionally, business-grade firewalls provide the option for additional connections to your internet provider for business continuity and additional capacity when your business grows.
- Change the administrator password. Using the default password or no password is equivalent to handing over the keys to your business to hackers. Administrator access should be granted only to the staff whose job it is to configure and maintain your firewall.
- Quarantine visitors' traffic from your business computers by segmenting the network. On consumer-grade modems, this is called a "guest network", but the term used for business firewalls is VLAN (Virtual Local Area Network). Guest networks should be configured with password security, and use a different password than your business network.
- Review password security protocols for connecting to the network, and update them if you are using WEP or WPA. At a minimum, your network should require WPA2 security. This is an opportunity to change an old password that may have been shared with non-employees or past employees.
- Establish content controls by identifying what types of websites and services employees need to access during business hours, and locking down harmful sites and unused ports. The objective is to provide the right level of access for your employees to do their jobs while keeping out potential threats. This requires an understanding of the applications your business uses and what ports those applications use, so IT and management should work together to determine how employees are allowed to use the office network.
- Enable antivirus protection at the firewall. This scans downloads and emails and provides an extra layer of protection over the antivirus programs installed on employees' computers.
- Enable and configure the Intrusion Prevention System (IPS).
- Check for and install any firewall firmware updates. Consumer-grade routers and modems generally do not receive ongoing maintenance and patch updates from the manufacturer.
- Monitor firewall reports and analytics for attempted security breaches. These reports are only available in a business-grade firewall.

Help is Here.

If these steps to securing your network sound like gibberish, you're not alone. Small companies want to focus on their core business and often turn to outsourcing IT services to a managed service provider. A trusted service provider can review your network security, choose and configure a business firewall that meets your business needs and budget, and provide ongoing monitoring. Firewall security is just one of the solutions provided by Ariel IT's businessCARE™ suite of small and medium business services.
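Firewall rules are defined in terms of open ports, so a quick way to sanity-check what a host actually exposes is a TCP connect test. This illustrative sketch is not a substitute for a proper audit with a dedicated scanner, and should only be run against hosts you own (the address below is a reserved documentation address, not a real host):

```python
import socket

def port_is_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        return s.connect_ex((host, port)) == 0

# Example: check a few common service ports on a host you administer
for port in (22, 80, 443, 3389):
    state = "open" if port_is_open("192.0.2.10", port) else "closed/filtered"
    print(f"port {port}: {state}")
```

Comparing the result against your firewall's rule set is a simple way to catch ports that were left open by accident.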
The Department of Homeland Security (DHS) Science and Technology (S&T) Directorate released a predictive modeling tool to estimate the natural decay of SARS-CoV-2 – which causes COVID-19 – under a range of temperatures and humidity levels. "While researchers across the federal government and the medical community continue to study COVID-19, preventing the spread of the virus is a top priority," DHS Senior Official Performing the Duties of the Under Secretary for S&T William Bryan said in a news release. "Transmission occurs primarily through respiratory droplets produced by talking, coughing, and sneezing. If these droplets settle on surfaces or objects, contact with those contaminated surfaces may also be a factor in spreading the virus." The tool will be used to assist response efforts and estimate the environmental persistence of COVID-19 under certain temperature and humidity combinations, using results from research being conducted at S&T's National Biodefense Analysis and Countermeasures Center. The COVID-19 virus can survive in droplets of saliva on surfaces for extended periods of time, S&T said, and this predictive model will help build a better understanding of the virus' decay under conditions representative of indoor environments. S&T noted that the model doesn't include the effects of conditions such as solar light exposure, but said it was working to enhance the model to cover additional scenarios: droplets in the air vs. on a surface, expanded temperature and humidity ranges, and different surfaces.
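Decay estimates of this kind are commonly expressed as a half-life under given conditions. The DHS model's actual regression coefficients aren't reproduced in this article, but the underlying first-order decay math is standard; the 12-hour half-life below is purely illustrative:

```python
import math

def fraction_remaining(t_hours: float, half_life_hours: float) -> float:
    """First-order exponential decay: N(t)/N0 = exp(-k*t), with k = ln(2) / t_half."""
    k = math.log(2) / half_life_hours
    return math.exp(-k * t_hours)

# Illustrative only: suppose the measured half-life at some temperature/humidity is 12 h
for t in (0, 12, 24, 48):
    print(f"after {t:2d} h: {fraction_remaining(t, 12.0):.1%} of virus remains")
```

A tool like DHS's maps temperature and humidity to a half-life, after which the remaining fraction at any time follows directly from this formula.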
These images show data from a modern turned pot and a cast of a cuneiform tablet in the collection of the Oriental Institute in Chicago, Illinois. The tablet is used as a test target on the histogram equalization page. Dimensions of the tablet are approximately 42mm x 36mm x 17mm. Since our data analysis tools allow the user to extract small regions of interest, and to continue the geometric analysis just on those regions, it seems that it would be possible to measure 3-D shapes of individual signs or their components, possibly leading to a tool for paleography.

For many more results using cuneiform tablets, see these pages:
Adaptive local extrema equalization (automatically highlighting signs)
Adaptive histogram equalization (applied to both conventional 2-D imagery and also to range map visualization imagery)
Raytracing results for cuneiform tablets

Further Ideas for Archaeological Visualization

The article "Visualization of LiDAR terrain models for archaeological feature detection" [B.J. Devereux, G.S. Amable and P. Crow, Antiquity v82 (2008), pp 470-479] suggests a very interesting idea. Use the 3-D range model to generate raytraced imagery with varying lighting models. For example, if you imagine that the above image of a cuneiform tablet is looking straight down, with north at the top of the image, you can see that the surface is illuminated by a source in the south-west, about 45° above the horizon. Some additional features would be visible if it had been illuminated from another direction, but at the expense of losing some features visible here. Generate 16 such images from the same viewpoint, stepping the illumination source around the compass (N, NE, E, SE, S, SW, W, NW, and the bearings in between). The resulting 16 images would collectively do a good job of capturing more small-scale surface shape features. However, that would be 16 times as much data and 16 times as much analysis work! So, do Principal Components Analysis, resulting in a new set of 16 images.
Most of the information will be captured in the first few such images. Use them individually, or generate false-color imagery by encoding the first three images as red, green, and blue planes of a single image. They show results for LiDAR (that is, laser radar) data for an Iron Age hill fort. The original data had previously had its vegetation removed as described in earlier work. ["The potential of airborne LiDAR for detection of archaeological features under woodland canopies", [B.J. Devereux, G.S. Amable, P. Crow and A. Cliff, Antiquity v79 (2005), pp 660-] They found that the first principal component image contained 55.4% of the variation (that is, the information), the first three contained 98.82%, and the first four contained 99.45%.
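The PCA step can be sketched with NumPy. This is an illustrative implementation (treating each illumination image as one sample and each pixel as a feature), not the authors' code:

```python
import numpy as np

def illumination_pca(images):
    """PCA over a stack of co-registered images, one per light direction.

    images: list of equal-shape 2D arrays.
    Returns (component_images, explained_variance_ratio).
    """
    shape = images[0].shape
    X = np.stack([im.ravel() for im in images])   # (n_images, n_pixels)
    Xc = X - X.mean(axis=0)                       # center each pixel across images
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    ratio = S**2 / np.sum(S**2)                   # fraction of variance per component
    return Vt.reshape(-1, *shape), ratio

def first_three_as_rgb(components):
    """Encode the first three principal-component images as a false-color RGB image."""
    rgb = np.stack(components[:3], axis=-1)
    lo, hi = rgb.min(), rgb.max()
    return (rgb - lo) / (hi - lo + 1e-12)         # normalize to [0, 1]
```

Feeding in the 16 differently lit renders, the `ratio` array corresponds to the variance percentages quoted in the article, and the false-color image is the red/green/blue encoding of the first three components.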
To meet the large demand for high-capacity transmission in optical access systems, 10G-PON (10G Passive Optical Network) has already been standardized by the IEEE (Institute of Electrical and Electronics Engineers) and the ITU (International Telecommunication Union). To enable the development of future optical access systems, the most recent version of PON, known as NG-PON2 (Next-Generation Passive Optical Network 2), was approved recently; it provides a total throughput of 40 Gbps downstream and 10 Gbps upstream over a single fiber distributed to connected premises. The migration from GPON to 10G-PON and NG-PON2 is driven by the maturity of the technology and the need for higher bandwidth. This article will introduce the NG-PON2 technology to you.

What Is NG-PON2?

NG-PON2 is a 2015 telecommunications network standard for PON which was developed by the ITU. NG-PON2 offers a fiber capacity of 40 Gbps by exploiting multiple wavelengths at dense wavelength division multiplexing (DWDM) channel spacing and tunable transceiver technology in the subscriber terminals (ONUs). Wavelength allocations include 1524 nm to 1544 nm in the upstream direction and 1596 nm to 1602 nm in the downstream direction. NG-PON2 was designed to coexist with previous architectures to ease deployment into existing optical distribution networks. Wavelengths were specifically chosen to avoid interference with GPON, 10G-PON, RF video, and OTDR measurements, and thus NG-PON2 provides spectral flexibility to occupy reserved wavelengths in deployments devoid of legacy architectures.

How Does NG-PON2 Work?

If 24 premises are connected to a PON and the available throughput is equally shared, then for GPON each connection receives 100 Mbps downstream and 40 Mbps upstream over a maximum of 20 km of fiber. For 10G-PON, which was the second PON revision, each of the 24 connections would receive about 400 Mbps downstream and 100 Mbps upstream.
The recently approved NG-PON2 will provide a total throughput of 40 Gbps downstream and 10 Gbps upstream over a maximum of 40 km of fiber, so each of the 24 connections would receive about 1.6 Gbps downstream and 410 Mbps upstream. NG-PON2 provides a greater range of connection speed options, including 10/2.5 Gbps, 10/10 Gbps and 2.5/2.5 Gbps. NG-PON2 also includes backwards compatibility with GPON and 10G-PON to ensure that customers can upgrade when they're ready. The NG-PON2 technology is expected to be about 60 to 80 percent cheaper to operate than a copper-based access network and provides a clear performance, capacity, and price advantage over copper-based access networks such as Fiber to the Node (FTTN) or Hybrid Fiber Coax (HFC). At present, three clear benefits of NG-PON2 have been demonstrated: a 30 to 40 percent reduction in equipment and operating costs, improved connection speeds, and symmetrical upstream and downstream capacity.

Reduced Equipment and Operating Costs

NG-PON2 can coexist with existing GPON and 10G-PON systems and is able to use existing PON-capable outside plant. Since 70 percent of the cost of a PON FTTH (Fiber to the Home) rollout is accounted for by the optical distribution network (ODN), this is significant. Operators have a clear upgrade path from where they are now until well into the future.

Improved Connection Speeds

Initially NG-PON2 will provide a minimum of 40 Gbps downstream capacity, produced by four 10 Gbps signals on different wavelengths multiplexed together in the central office, with a 10 Gbps total upstream capacity. This capability can be doubled to provide 80 Gbps downstream and 20 Gbps upstream in the "extended" NG-PON2.

Symmetrical Upstream and Downstream Capacity

Both the basic and extended implementations are designed to appeal to domestic consumers, where gigabit downstream speeds may be needed but more modest upstream needs prevail.
For business users with data mirroring and similar requirements, a symmetric implementation will be provided, giving 40/40 and 80/80 Gbps capacity respectively. With the introduction of NG-PON2, there is now an obvious difference between optical access network and copper access network capabilities. Investment in NG-PON2 provides a far cheaper network to operate, significantly faster downstream and upstream speeds, and a future-proof upgrade path, none of which copper access networks can offer, making them obsolete technologies. Telephone companies around the world have been carrying out trials of NG-PON2, and key telecommunication vendors have rushed NG-PON2 products to market.
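The equal-sharing arithmetic used throughout the article can be reproduced directly. This sketch assumes nominal shared line rates (2.5/1.25 Gbps for GPON, 10/2.5 Gbps for 10G-PON, 40/10 Gbps for NG-PON2); the article's figures are rounded slightly differently:

```python
def per_subscriber(total_down_gbps, total_up_gbps, subscribers=24):
    """Equal-share bandwidth per connection, returned in Mbps."""
    down = total_down_gbps * 1000 / subscribers
    up = total_up_gbps * 1000 / subscribers
    return round(down), round(up)

for name, down, up in [("GPON", 2.5, 1.25), ("10G-PON", 10, 2.5), ("NG-PON2", 40, 10)]:
    d, u = per_subscriber(down, up)
    print(f"{name}: ~{d} Mbps down / ~{u} Mbps up per premises")
```

For NG-PON2 this gives roughly 1.67 Gbps down and 417 Mbps up per premises, matching the article's "about 1.6 Gbps downstream and 410 Mbps upstream."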
A British paramedic recently completed the world's first trial mission in a rocket-propelled jet suit. But how far away is the widescale adoption of technological "superpowers?" If you spend time in a bar with a group of friends, it won't be too long until somebody asks, "What superpower would you like to have and why?" Predictably, having the ability to fly or unlock the power of invisibility are usually the most popular answers. It's a question that has been debated for generations and even appears in job interviews as employers attempt to see how a candidate can think on their feet. As a child, I watched William Suitor with amazement and boyhood wonder as he flew into the opening ceremony of the 1984 Olympics in Los Angeles using a jetpack. The following year, Back to the Future suggested that hoverboards would be another future mode of transport. Those magical moments from my childhood would ignite my passion for technology and, arguably, a lifetime of disappointment. However, my childhood dreams could finally be coming to fruition. A British paramedic flew a jet pack almost two kilometers on a test rescue mission in an area where helicopters would have been grounded. Gravity Industries, the company behind the trial, believes that jet packs will dramatically cut response times. The 3D-printed suit has two turbines attached to each arm and a larger one mounted on the back. The trial run enabled a paramedic to complete a 2,000-foot climb in three and a half minutes. Technically, the jet suit can reach speeds of more than 80 mph and an altitude of 12,000 feet, but it's currently flown much lower for safety reasons. But could this finally be the beginning of mainstream adoption of jet packs?

Fiction vs Reality

The lines between fact and fiction continue to blur with revelations around government-funded research into invisibility cloaks. Chinese military scientists have also been able to hide equipment from spy satellite radar.
But this technology is breaking out from secret government documents and cloak-and-dagger operations. So, could the superpower of invisibility be on the verge of becoming a reality for all? A few years ago, Warner Bros. famously released a Harry Potter Invisibility Cloak that required a smartphone app and augmented reality for the user to appear invisible. Although it looked like the real thing and enabled fans to recreate their favorite "Harry Potter" scenes through photos and videos, there is no avoiding the fact that it simply wasn't real. Disappointed by the lack of progress and unavailability of working invisibility solutions, London-based start-up 'Invisibility Shield Co' launched a Kickstarter campaign to create a range of fully functional invisibility shields. The team promised to make invisibility available for everyone while unwittingly upgrading the traditional game of hide and seek in the process. A precision-engineered lens array deflects the light from the subject hiding behind the shield away from the observer. The light from the subject's background is then refracted towards the observer, and before you can say "Great Scott!" in your best Doc Brown impression, they will not be able to see anything hiding behind the shield. Measuring 950 x 650mm, the average person will have no problem hiding behind the freestanding invisibility shield. Of course, whether it's worth £299 will be entirely subjective, but I suspect that early-adopting Trekkies, Harry Potter fans, and creatives will be first on board. If the idea of having X-ray vision like Superman sounds appealing, you might want to check out the work by scientists at the Massachusetts Institute of Technology. Its RF-Capture system uses short-wave radio signals to track the movement of people through walls with up to 90 percent accuracy. Although in the wrong hands this would be a sinister superpower, it's actually aimed at monitoring seniors who are at risk of falling.
In a cruel twist of fate, as technology begins to unlock the superpowers many dreamed of as kids, the idea of sneaking around in invisibility cloaks or watching people via X-ray vision suddenly feels quite creepy. As a result, many are beginning to explore a bigger dream, using neurotech and genetic engineering to enhance human abilities as we approach the dawn of transhumanism and human augmentation. It's been a long time coming, but we are finally beginning to explore the art of the possible. The only limit for the future is our imagination, but that doesn't necessarily mean we should rush into connecting ultra-high-bandwidth brain-machine interfaces that tie us to computers. But make no mistake, the two examples of start-ups turning science fiction into reality by creating fully functional jet packs and invisibility shields are just a taster of things to come. Rather than talking about what superpowers we would like to have, it seems that many in the tech industry are following Jean-Luc Picard's famous advice and can be found repeating the mantra "Make it so." But until then, what superpower would you like to have?
These days, businesses function in a hypercompetitive landscape, and it's more crucial than ever to effectively manage and deploy applications in order to remain innovative and agile. For this reason, businesses should choose wisely when considering containers vs. virtual machines, both of which inject virtualization into their operations but offer significantly different pros and cons.

What Is a Virtual Machine?

A virtual machine is a "software computer that, like a physical computer, runs an operating system and applications," VMware explains. VMs create an environment that is essentially a "computer within a computer," according to Microsoft. A VM delivers to the end user the same experience they would have on a physical computer, but a VM is "sandboxed from the rest of the system," which means that the software inside the VM can't impact the computer it runs on. The advantage is that it offers an optimal environment "for testing other operating systems including beta releases, accessing virus-infected data, creating operating system backups, and running software or applications on operating systems they weren't originally intended for," Microsoft notes.

Why Use Virtual Machines?

VMs have been around for decades and offer several major advantages. Several VMs can run on the same computer simultaneously, managed by a hypervisor. Each VM has its own virtual hardware, cutting down on the need for physical hardware and all the space, costs, maintenance and other demands that go along with it, Microsoft notes. The VM's biggest advantage over a container, according to TechRepublic's Jack Wallen, is that it offers a full platform capable of housing several contiguous services. "Virtual machines make perfect sense when you have a beast of a host server that can handle the load of multiple guests and you need to know that, at any given moment, you can spin up a host snapshot, should something go wrong," he notes.

What Are Containers and How Do They Work?
Containers differ from VMs in that, while VMs virtualize hardware, containers virtualize operating systems, which can improve server utilization. A container "packages up code and all its dependencies so the application runs quickly and reliably from one computing environment to another," notes container platform provider Docker. This delivers applications in a collection of smaller and more composable units. Containers also allow software to run reliably when moving between different software environments, Docker founder Solomon Hykes tells CIO. "You're going to test using Python 2.7, and then it's going to run on Python 3 in production and something weird will happen. Or you'll rely on the behavior of a certain version of an SSL library and another one will be installed. You'll run your tests on Debian and production is on Red Hat and all sorts of weird things happen," Hykes says, adding that security and software policies might also differ. Because containers consist of an entire runtime environment, they cut through differences in operating system distributions. Moreover, containers are small (often just several megabytes), while VMs can be many gigabytes in size, meaning that a server can host more containers than VMs and that containers will boot up more quickly. Containers also offer the advantage of modularity. As CIO reports: Rather than run an entire complex application inside a single container, the application can be split into modules (such as the database, the application front end, and so on). This is the so-called microservices approach. Applications built in this way are easier to manage because each module is relatively simple, and changes can be made to modules without having to rebuild the entire application. Because containers are so lightweight, individual modules (or microservices) can be instantiated only when they are needed and are available almost immediately.

What to Consider When Choosing a Container vs. a VM

So how do companies choose between VMs and containers? It depends largely on the scope of an organization's work. Jack Wallen says the decision comes down to two questions: "Do you need a full platform that can house multiple services? Go with a virtual machine. Do you need a single service that can be clustered and deployed at scale? Go with a container."
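Wallen's two questions can be reduced to a tiny decision helper. The function name and its return strings are purely illustrative, not any real tool or API; it just encodes the heuristic quoted above.

```python
# Toy encoding of the two-question container-vs-VM heuristic quoted above.
# Function name and return values are made up for illustration.

def choose_runtime(needs_full_platform: bool, needs_single_service_at_scale: bool) -> str:
    """Return a suggested virtualization approach for a workload."""
    if needs_full_platform:
        return "virtual machine"  # full OS housing multiple contiguous services
    if needs_single_service_at_scale:
        return "container"        # lightweight, clustered, boots in seconds
    return "either"               # the heuristic expresses no preference

print(choose_runtime(needs_full_platform=False, needs_single_service_at_scale=True))
```

In practice the two approaches are often combined, with containers running inside VMs, so the answer is rarely exclusive.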
How to create checks and balances when organizing your IT network.

When many people think about IT security, the first things that come to mind are programs such as firewalls or malware detection software. However, security is as much about the organizational systems and processes your company has in place as anything else. Of those organizational structures, one of the most important is how companies assign responsibility for certain IT-related tasks. This is called Segregation of Duties. Segregation of duties (SoD) means that no one person should be solely accountable for certain business operations. In fact, we see SoD used frequently to stabilize systems where there might otherwise be an imbalance of power. SoD should already be familiar to anyone who has worked in finance or banking, as it is an important precaution against embezzlement, fraud, or other financial errors. For instance, imagine the person in charge of verifying timesheets is also the one responsible for cutting paychecks. Or what if one person were responsible for both approving and reimbursing business expenses? In such a circumstance, they could easily compensate themselves for expenses that either were not warranted or never happened. On the other hand, when these duties are properly separated, taking advantage of the system requires collusion among at least two employees. No single person can both close a client and handle the payment transaction, or approve a purchase order and pay the supplier. Segregation of duties applies to IT in similar ways. While IT doesn't usually involve direct monetary transactions, there are still ways in which a system without the appropriate checks and balances can lead to abuse and error. Let's take a closer look. In practice, a lot of SoD rests on the principle of least privilege. Least privilege means that individuals only have access to as much information as they need in order to execute their job correctly.
For instance, if a company has a database of access codes for client accounts, some of their employees might reasonably require access to some of these passwords in order to effectively manage the client account. But they should only have access to the codes they might conceivably need, rather than to the entire account. In IT, privilege controls are usually restricted according to user role. For instance, one person might be granted read-only access to a folder, without permission to add or edit documents. Administrators can also limit the kinds of files users are allowed to download from the Internet, or prohibit users at certain levels from installing programs onto a hard drive. Most critically, user privileges can control who is allowed to create and modify other user privileges. Privilege controls also help contain the spread of an IT attack. If a malicious party gains access to the account of someone with editing but not administrative privileges, for instance, they can only go so far before the administrator shuts down or suspends that account. But if they gain control of an admin account, they could potentially shut a company out of its own network. This is one main reason why you should limit privileges to those who need them: the fewer accounts with high privilege levels you have, the fewer you have to protect.

Segregation of Duties in IT security.

In IT security, SoD mostly serves two purposes: avoiding conflicts of interest that could result in abuse or fraud, and preventing control failures that could result in data theft or security breaches. For instance, a conflict of interest would arise if the person responsible for system security also wrote and delivered their own performance reports. In theory, a person in that position could neglect their duties but continue reporting "all is well" with no oversight. Similarly, if one individual is responsible for both developing and testing a security system, they are more likely to be blind to its weaknesses.
Or, they might intentionally design a system loophole that they could exploit later, safe in the knowledge that no one else could test for and find it. To avoid these situations, different people must be responsible for these duties. Organizations should conduct internal audits run by individuals without a vested interest in those audits delivering a clean report. External audits conducted by a third party who reports directly to the board of directors or CEO can also prevent potential mismanagement or false reporting.

Preventing employee error.

Of course, SoD isn't only about preventing fraud. It also helps prevent large-scale errors from taking place. For instance, the recent false missile alert in Hawaii could have been avoided if the duties of creating and testing the missile alert system had been separated from those of sending out a live alert. The fact that a single employee could accidentally send out a false alert indicates an error in the design of the security alert system. Hawaii has since changed its system so that two people are now required to send out a security alert. This has other ramifications for IT security. While some lapses in security could be due to negligence or malign intent, others are simple error. In the design/testing scenario we mentioned earlier, an IT technician might easily fail to test for a security hole through honest oversight. SoD provides backup for you and your employees, while also leaving a clear audit trail so that organizations can understand what's happening in their company from start to finish.

Understanding SoD for your business.

For any business, understanding the way your IT systems overlap and interlink can be complex. Because of this, it can help to have an outside company consult with your business to create a more secure environment. Increasingly, applying appropriate SoD best practices to your IT network will become critical—and perhaps even mandatory—to protect the security of your company.
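The least-privilege and two-person ideas described above can be sketched in a few lines. The role names, actions, and helper functions here are hypothetical, chosen only to illustrate the two controls.

```python
# Minimal sketch of least privilege (role-based permissions) and dual
# control (two approvers for sensitive actions). All names are made up.

ROLE_PERMISSIONS = {
    "viewer": {"read"},                            # read-only access
    "editor": {"read", "write"},                   # can add/edit documents
    "admin":  {"read", "write", "manage_users"},   # can change privileges
}

def is_allowed(role: str, action: str) -> bool:
    """Least privilege: a role may only perform explicitly granted actions."""
    return action in ROLE_PERMISSIONS.get(role, set())

def can_send_alert(approvers: set) -> bool:
    """Dual control: a sensitive action requires two distinct people,
    as in the post-incident Hawaii alert process described above."""
    return len(approvers) >= 2

assert is_allowed("viewer", "read")
assert not is_allowed("viewer", "write")     # the read-only folder example
assert not can_send_alert({"operator_1"})    # one person cannot act alone
assert can_send_alert({"operator_1", "operator_2"})
```

Note how the default in `is_allowed` is denial: an unknown role gets an empty permission set, which is the safe direction for a least-privilege system to fail.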
If you would like help managing your IT security, we can help. We provide a free IT assessment to businesses to help them understand possible weaknesses in their network. Contact us today to get started.
The Blockchain is a fascinating technology that has the potential to disrupt a number of industries. This short primer will explain the basics. The Blockchain was created to enable the first viable digital currency, known as Bitcoin. The idea was to have a decentralized currency that did not rely on banks or other financial institutions for its integrity or legitimacy. But, as we'll see, the concept is fundamentally designed to be an integrity decentralization technology in general, and isn't specific to financial use cases. The Blockchain is a distributed database of transactions powered by public-key cryptography. The core concept that makes it different from centralized banks is that all transactions, and in fact all of the currency itself, are stored in a public ledger that is kept by all participants in the system. Blocks are a key part of the technology, and they have the following characteristics:

- they are small sets of transactions that have taken place within the system
- each new block includes a SHA-256 hash of the previous block, which "chains" it to all previous blocks
- blocks are computationally difficult to create, taking multiple specialized processors and significant amounts of time to generate

The fact that blocks are both difficult to generate, and that in order to change one you'd have to successfully recreate all subsequent ones, makes the blockchain particularly resistant to tampering. To send or receive money using Bitcoin (the most popular blockchain implementation) one creates a Bitcoin address (really just a long alphanumeric token) that can be used to either send or receive. The use of public/private keypairs ensures that payments are sent and received by the correct individuals. The Blockchain concept is important because it represents a technological framework for decentralization.
Many industries, such as banking, credit scoring, money transfers, etc., rely on someone in the middle to manage the integrity of the system, and they usually become extraordinarily rich in the process. Once they become one of—or the—standards, they can change rules, fees, etc. without much ability for consumers to object. The Blockchain changes this by offering a system whereby the users themselves manage the entire process, and where there is full transparency into costs, fees, etc. - The Blockchain is a technology framework for decentralizing a number of entities that used to require one or more middlemen and involve significant opacity - The system uses a public and transparent ledger of all activity that has taken place within it, and that ledger is simultaneously the currency as well as the record of how the currency has been created and moved among nodes - The integrity of the system is handled through the use of public-key cryptography and continuous, computationally intense validation by nodes in the system - There is significant opportunity for disruption using this technology, as many middleman entities, such as banks, are ripe for replacement using more transparent options that offer lower fees - This primer is designed to explain the core concept. If you want technical details I suggest reading Satoshi’s original paper, which can be found here. - An interesting distinction between a block chain based system and a traditional central authority is that in traditional systems a record talks about what is true of objects outside the ledger, whereas with digital currency the ledger and the currency are actually the same. - While transparency is usually a good thing, there are certain types of industries and transactions that value privacy, which the Blockchain technology isn’t perfect for.
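The hash-chaining property described in the points above can be demonstrated in a few lines. This toy sketch omits everything that makes a real blockchain work in practice (proof-of-work, signatures, consensus among nodes); it only shows how SHA-256 links blocks so that tampering with history is detectable.

```python
import hashlib
import json

def make_block(transactions, prev_hash):
    """Build a block that embeds the SHA-256 hash of its predecessor."""
    block = {"transactions": transactions, "prev_hash": prev_hash}
    payload = json.dumps(block, sort_keys=True).encode()
    block["hash"] = hashlib.sha256(payload).hexdigest()
    return block

genesis = make_block(["coinbase"], prev_hash="0" * 64)
block1 = make_block(["alice -> bob: 5"], prev_hash=genesis["hash"])
block2 = make_block(["bob -> carol: 2"], prev_hash=block1["hash"])

# Changing block1's transactions changes its hash, which no longer
# matches the prev_hash that block2 recorded -- tampering is detectable.
tampered = make_block(["alice -> mallory: 5"], prev_hash=genesis["hash"])
assert block2["prev_hash"] == block1["hash"]
assert block2["prev_hash"] != tampered["hash"]
```

This is exactly why changing one block means recreating every block after it: each later block's `prev_hash` pins down the exact content of its predecessor.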
It is fair to say that there are few things more important to your daily security than the security of your Internet connection, and hence your privacy and sensitive data. Everyone has heard horror stories of information being stolen via covert means online, and nobody wants to be subject to the nightmares that can bring. That means investing in the protection of your privacy and data, which in turn means protecting your home wireless network as much as possible. This is where the WiFi Network Security Key comes into play. I'm mentioning the wireless network here because nowadays almost all homes have a WiFi network running for all family members to enjoy fast Internet access and information exchange. Still, if you're not a tech-savvy individual, you may have a basic idea of why it's important to protect your privacy and data, but not know how to do so, or what any of these terms mean. This guide can, thus, give a bit of a primer on that while filling you in on some of these terms, what they mean, and how to approach them. Moreover, we'll discuss in detail how to find your WiFi Network Security Key, what it is, how to change it, etc.

What Is a Network Security Key?

First things first – what is the network security key shown in the picture below? I'm sure you have encountered a window similar to the above (this is taken from my Win10 computer) when you try to connect to a WiFi network. The short answer is that the "network security key" shown above is the password to your WiFi network. Chances are, if you are reading this, you know what WiFi is. If so, you probably understand the importance of having a security key that keeps random people from having access to your network. You want to make sure your network is password protected, and that's precisely what your network security key allows. This must be a long and complex key (at least 12-14 random characters, including alphanumeric symbols and special characters).
If you forget your network security key, you can typically find it on the router itself (some routers have a label on them with the key). Moreover, if a member of your family or a guest asks you for the network security key of your WiFi, you can find it from your computer as described below.

How to find your Network Security Key on Windows 10

Assuming you are already connected to your WiFi network, let's see how to find the security key:

- Go to the bottom left of your Windows screen > Type here to search > WiFi Settings
- Click on WiFi Settings and a new window will open. Scroll down the WiFi settings window and click on "Change Adapter Options" as shown below:
- A new window will open with all of your network adapters (wired and WiFi adapters). Right-click on the WiFi adapter and select "Status"
- In the new window, click on "Wireless Properties" as shown below:
- In the new window, select the "Security" tab and also tick "Show characters". This will show the current Network Security Key of your WiFi network.

Types of WiFi Encryption and Security Protocols

There are four main WiFi security protocol standards: WEP, WPA, and WPA2 are the existing ones, but there is also a new standard called WPA3, as we'll discuss below.

WEP (Wired Equivalent Privacy)

WEP makes use of a 40-bit key for the purpose of encrypting wireless traffic. This is paired with a 24-bit initialization vector (IV), which combines to give the 64-bit WEP key. That said, WEP suffers from several shortcomings, including slow speed and weak passwords. While it was an important step up from no encryption in WiFi, it is no longer considered a choice among those looking to safeguard their connection and data. In summary, DO NOT use WEP when selecting the security protocol on your router.

WPA and WPA2 (WiFi Protected Access)

WPA came next and was intended to be a stopgap replacement for WEP to address its shortcomings. In that respect, it largely succeeded, offering faster and better encryption.
Even so, it has since been outstripped by its intended successor, WPA2. For example, WPA still makes use of the same RC4 stream cipher that has a history of being insecure and causing problems with WEP. That said, it does provide extra security in the form of TKIP. By contrast, WPA2 uses the AES encryption standard, an improvement over the instability and weaknesses of RC4. In addition, it replaces the TKIP used by WPA with CCMP. It is, thus, able to offer better and faster security than WPA, and is recommended over both that and WEP. That said, WPA3 has come out and is superior to all of the above, offering the fastest and most secure encryption options yet. WPA2 and WPA3 thus remain the most secure options for WiFi encryption, with WPA to be avoided, though it is still occasionally used. You'll want to take these factors into consideration and choose one of these (preferably WPA2 or WPA3) when encrypting and protecting your WiFi connection. Note, however, that currently not all WiFi routers support WPA3.
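Following the advice above about long, random keys, here is one way to generate a strong network security key using Python's standard `secrets` module. The helper name and character set are illustrative choices, not any standard; adjust the alphabet to whatever special characters your router accepts.

```python
import secrets
import string

def make_network_key(length: int = 16) -> str:
    """Generate a random key mixing letters, digits, and special
    characters, per the 12-14+ character advice above."""
    alphabet = string.ascii_letters + string.digits + "!@#$%^&*-_"
    return "".join(secrets.choice(alphabet) for _ in range(length))

key = make_network_key()
print(key)  # a different 16-character key on every run
assert len(key) >= 12
```

Unlike the `random` module, `secrets` draws from the operating system's cryptographically secure source, which is what you want for anything used as a password.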
Linux.ProxyM malware is well known for infecting almost any Linux device, including routers, set-top boxes, and other equipment. It infects a device and launches a SOCKS proxy server on it. It has been involved in various activities: in June it was used by cybercriminals to target Raspberry Pi devices for mining cryptocurrency, and in September it engaged in sending spam messages from infected devices, as detected by security researchers from Dr. Web. "Cybercriminals began using the 'Internet of things' to distribute phishing messages. The emails were supposedly sent on behalf of DocuSign—a service that allows users to download, view, sign and track the status of electronic documents," says Dr. Web. Starting in December, attackers have been using the devices infected with Linux.ProxyM to ensure anonymity and to launch numerous attempts at hacking websites. To launch these hacking attempts they have used various methods:

1. SQL injection – injection of malicious SQL code into a request to a website database
2. XSS (Cross-Site Scripting) – an attack method that involves adding a malicious script to a webpage
3. Local File Inclusion (LFI) – used to remotely read files on an attacked server

It launches significant attacks on game servers, forums, and resources on other topics, including Russian websites. This activity is performed in two different threads: the first enumerates usernames and passwords, and the second operates the SOCKS proxy server. Mirai uses Telnet ports with default usernames and passwords, whereas Linux.MulDrop.14 uses SSH ports with default usernames and passwords.
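The SQL-injection technique listed first above can be demonstrated safely against an in-memory SQLite database. This sketch is illustrative only: the table and payload are made up, and it also shows the standard mitigation, a parameterized query.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 0), ('root', 1)")

user_input = "' OR '1'='1"  # a classic injection payload

# Vulnerable: string concatenation lets the payload rewrite the query,
# turning the WHERE clause into a condition that is always true.
vulnerable = f"SELECT name FROM users WHERE name = '{user_input}'"
assert len(conn.execute(vulnerable).fetchall()) == 2  # every row leaks

# Mitigated: a parameterized query treats the payload as a literal value.
rows = conn.execute(
    "SELECT name FROM users WHERE name = ?", (user_input,)
).fetchall()
assert rows == []  # no user has that literal name, so nothing leaks
```

The same principle, binding user input as data rather than splicing it into query text, is the primary defense against the injection attempts described in the article.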
Announcer: Welcome to What That Means with Camille, companion episodes to the Cyber Security Inside podcast. In this series, Camille asks top technical experts to explain–in plain English–commonly used terms in their field, then dives deeper, giving you insights into the hottest topics and arguments they face. Get the definition directly from those who are defining it. Now, here is Camille Morhardt.

Camille: Hi, and thanks for joining me to figure out what that means. Each episode, I'll introduce our guests and then they'll have three minutes to define today's topic. After that we'll spend about 15 minutes diving a little deeper. Okay, here we go. Today we're going to interview Mic Bowman, who has spent over 20 years working on large-scale databases and distributed systems. He is Senior Principal Engineer at Intel Labs, and he runs a trusted and distributed systems research group with a focus on security and privacy. Mic has several patents and he has also served as a member of the Hyperledger Technical Steering Committee. But more importantly, he's otherwise known as a bit of a block chain god. I went one time to a conference with him on cryptography, and we had flown down separately and then joined up at the conference. And I noticed he was sitting just a few chairs away from me. So at the first break, I went over to say hello to him. And he's sitting there saying hi to me. I said, "How was your flight?"–your flight is good and all this. And I turned around and within about 10 or 15 seconds, there was a line 20 or 30 people long behind me waiting to talk to Mic. So I'm very happy to have him on today. How are you doing, Mic?

Mic: I'm doing really well.

Camille: Mic is a modest guy. He doesn't like it—

Mic: Big eyeroll (laughs)

Camille: (laughs) Uh, Mic, I'm hoping that you can define block chain for us in under three minutes.

Mic: Well, let me give you two versions of the definition.
The first version, as you know, as a sort of pure technology: a block chain is really just a way of using cryptography in order to make sure that we have a history that we can all agree to very, very quickly. And as a result of that, since we can detect differences, we can also detect that things are exactly the same. And that's the sense of immutability that we get with a block chain as a data structure. But really block chain is a lot more. It's kind of a philosophy, an approach, and it really represents a change in how inter-organizational trust works–that the block chain acts as a substitute for–as a technology substitute for–organizational trust, and allows us to have an authority that's outside any given organization. And that authority allows us to do some very interesting things with transactions that can either be public or private, and agree on the outcome and have everyone who's participating agree on the outcome.

Camille: So I hear people use the words "distributed ledger technology" and "block chain" and "Bitcoin" all interchangeably sometimes. Can you enlighten us as to some of those differences?

Mic: There are about as many different uses for those terms as there are people who use those terms. Bitcoin is a very specific instance of a technology that uses block chain to create a cryptocurrency–digital assets that can be exchanged. Um, when we talk about distributed ledger technology, we generally mean the sort of very generic: there's stuff happening, we all agree it happens, it changes some shared state in some way, and we all agree that the resulting computation or the resulting change in state has some truth associated with it. So, the distributed ledger may be used for everything from, uh, playing decentralized games and creating really cool digital pets, or it can be used to do things like create cryptocurrencies. It's a very generic technology.
And, like I said, underlying that is the blockchain, which is really the protocol and the data structures we use to build that agreement. Camille: And now let’s dive a little deeper. Is that fundamentally cryptography? Mic: The blockchain usually involves a fair number of different techniques that we would call cryptographic. So it’s using techniques for signing data based on cryptography, so that we can identify that a particular action or chunk of data came from an individual, like the source of a transaction. We can use cryptographic hashing–collapsing a big chunk of data down into a single number in a very unique way that doesn’t allow it to be reproduced. And we are also using cryptographic techniques for actually doing those transactions, especially when we talk about distributed ledger technologies, for doing very complicated things that allow us to hide the information but still prove that it was done correctly. Those are the homomorphic encryption and zero-knowledge proofs and others that people are starting to use in these places, along with what Intel has spent time on, trusted execution environments. Camille: Okay, so that last bit that you’re talking about–you’re saying, boy, when we have distributed databases, which is essentially what distributed ledgers or distributed ledger technology are, everybody who is a part of that community or network has got to have an entire copy of the database and all of the information that’s on it, which can get heavy. And so we started to hear these words, redundant compute. Are you suggesting that these hashes or this homomorphic encryption that you’re talking about help us point to the data, or point to the original source, and prevent everybody from having to have all the data? Mic: So it depends a lot on how they’re used. There are some techniques that are being used that are based on computing over portions of the blockchain, and then coming to agreement on it.
And those techniques would be things like sharding, for example. There are also techniques for doing what we call “off-chain computation.” So we’re doing the compute that’s interesting to the transaction someplace else, but we’re providing a proof that we did the computation correctly. And that proof can be something that’s very small. Um, so in the sort of traditional blockchain–if there ever was a traditional blockchain–in the traditional blockchain sense, everybody has a copy because everybody has to do every compute. That’s not realistic for things that are very compute intensive. And so we’re developing techniques for being able to split those up into subunits. Camille: Okay. So various people who are connected to the network would be doing the computation on their own and then proving to the network that they’ve done it. Mic: Yep. And those techniques involve trade-offs between trustworthiness and computability. There’s no sort of one-size-fits-all here; each application has to be designed on its own within the context of its own community. Camille: I want to back up just a little bit and push a little bit more on why everybody’s so interested in blockchain in the first place. One of the things I hear is that, uh, it allows you to remove the intermediary or remove the central authority (I think you referenced that). How is it allowing the removal of an intermediary or a central authority? Mic: The blockchain, you know, like I said, is just basically a linked list of transactions that allows us to very quickly agree on the fact that we’re looking at the same thing. But the really challenging part is how you build this list. And that process is what we call “consensus.” And there are a variety of different kinds of consensus.
Bitcoin chooses one, which is called “proof of work,” which involves a lot of computation, which makes it really hard for somebody who doesn’t have enough computation to cheat the network. There are other techniques that are less decentralized but provide better transaction rates and more power efficiency in order to do it. But it’s that process of coming to agreement–that we’re all looking at or changing that linked list, that blockchain, in the same way–that’s really where the interesting part of the decentralization comes in. It’s in that consensus stuff. Camille: So we’re saying, rather than all of we bankers, or all of we hospitals, or all of the people in the supply chain sending our information to the single buyer or the government or the bank affiliation–I can’t think of the right word. But instead of doing that, we’re saying, “no, no, either we don’t trust you to take our data and aggregate it or make decisions based on it, or we value our own IP and we’re scared to share it.” Either way, we kind of remove the idea that somebody has the button that controls all of the data that we’ve all submitted to that person. And instead we each kind of have these checks and balances through mathematics, so that anything anybody’s adding to the database, we can attest that it’s accurate. Mic: Yeah. But it’s that we all see the same thing. Whether it’s true or not is a different question, but we all see the same thing. Yeah. Camille: Okay. So what makes it untrue? Mic: So, well, again, let’s kind of go back and talk about what truth is, right? I can have a sensor that reports that the current temperature is 80 degrees. We all agree that the sensor reports 80 degrees, but the fact is that the thermometer might be broken. Right? So the fact that we all agree on the same result for the sensor does not mean that the temperature is actually 80 degrees.
So, we do have that sense that agreement is not always truth. But for the purposes of things like digital assets and cryptocurrencies, agreement is truth as far as things like ownership go, and what the current balance is, and what the available resources are. A couple of things I wanted to just bring up based on your comments, Camille. What is decentralization and why do we need it? What is its real value in this case? In some cases it’s about trust, right? Bitcoin clearly was a response to fears that governments would be able to control fiscal systems. That’s its core basis. But there are a lot of things where we want a blockchain and we want a community record that’s not based on distrust, but may be based on longevity. So for example, a blockchain lasts as long as any one of the organizations that’s participating in it continues to exist. So it lasts as long as the community. So if we want to build, for example, an identity system, do I want to go to some big CSP or some big social network and say, “you’re going to be my identity provider?” Well, that works as long as the social network or the CSP continues to exist. But if we can get an aggregate of CSPs or an aggregate of social networks together, and they all support it, then as long as any one of them–as long as the community–continues to exist, we have a valid source of information about identities. So we manage decentralization sometimes because it’s the only cost-effective way of doing it, because nobody would buy into owning the centralization of it. But sometimes we do it for longevity, as well. For persistence. It ties that record to a community rather than tying it to any individual organization. Camille: Okay. So what if there’s a mistake? You claimed it, I verified it, and then we both agreed that wasn’t right. I shouldn’t have transferred you the hundred dollars. We made a mistake.
So let’s just go back and delete that. Mic: Yep. So in a financial system, the advantage of a centralized financial system is that I can call up one of the banks and I can say, “Hey, look, there’s a transaction that took place. It was fraudulent. It didn’t really happen.” They do the investigation, and they can reverse it. At the other end of the spectrum is Bitcoin, and frankly, whatever the record says is truth, whether it really happened or not. That is, if you’re claiming ownership of it, and you’ve got the cryptographic keys in order to perform the transaction, then it’s yours and there’s no disputing it. But there are these sort of gray areas in the middle where interesting things have happened. There were, for example, some attacks on a smart contract on Ethereum several years ago. And the Ethereum community agreed that they would roll back a set of transactions to a period before that attack took place and re-establish truth. And that was a tough thing to do in that space. So there are sometimes ways we can kind of break the rules, but the whole point of the blockchain is that it’s really hard to break the rules. You’re going to have to be very intentional about it. Camille: And it’s really hard to push delete. Right? I mean, I keep hearing immutable. Mic: Really, there’s no delete. There’s no removing something from the record. There may be reversing it, but removing it from the record is very, very difficult. Camille: Okay. So what are people arguing about these days in the blockchain world? Mic: What are they arguing about? Um, well, there are a lot of things they argue about. There continues to be the question of the value of cryptocurrencies, and in particular cryptocurrencies that get pegged to fiat currencies. Right? So there’s a tight binding between the value of a digital coin and the value of a physical coin that it corresponds to.
In some sense we want those because they give a legitimacy to the cryptocurrency that makes it less venture driven and more value driven. So there’s that whole class of things. But there’s another class of problems that’s related to, how do we do scalable transactions? The famous Crypto Kitties application on Ethereum several years ago was really creating digital pets. But at the time we had 25 to 30,000 Ethereum servers that were running, which means that every single one of those 25 or 30,000 servers was running every single Crypto Kitties transaction. And this is just, uh, it’s just a game, right? It’s just an online, fun digital pet. It was fun, by the way. Camille: By the way, I saw a digital kitty that sold for 125,000 fiat dollars (laughs). Mic: I’m not even gonna go there. But we know that we can’t do these sorts of power-inefficient applications. We have to find some way to make those kinds of decentralized transactions more efficient. How do we do that? Well, oftentimes it means that we give up some of the decentralization. We move to a consortium, or define a consortium, to take responsibility for it. So we’re back to some organizational trust, but with kind of one level of proxy in place for us. Camille: So public is on the far one side, the Bitcoin side, which is like, well, even if you hacked somebody to get the keys, it’s yours now–complete freedom. And then on the other side, well, we’re back to centralization, the centralized model. And so you’re saying there could be maybe some kind of a hybrid with known trade-offs that a community has made. Maybe this is happening in the enterprise space, where people are trying to run businesses? Mic: Yes. We see a lot of this in the enterprise space. And it has to do with the specifics of the technology and with the applications.
And we’re learning how to build the applications that are sort of right sized–or the technology that’s right sized for the application. Sometimes it’s okay to have consensus among 25 or 30 organizations as a proxy for something that’s much larger–in supply chain spaces, for example. That’s usually sufficient. Where it wouldn’t be sufficient for a public cryptocurrency like Bitcoin, where even that sense of proxying is still too much centralization. Camille: Okay, let’s swing to the other side. I want to talk about Internet of Things and maybe supply chain and other enterprise use cases here. So can you please explain, what is a smart contract? Mic: So a smart contract is really a chunk of code. But that chunk of code modifies some state in some way that you and I, or our community of users, agrees is an acceptable way of doing things. You know, a simple smart contract is that if I pay you, Camille, my balance goes down and your balance goes up. It’s a really simple contract between us. And as long as we agree on the code that does that, you know, everything is fine. The more complex contracts can come into play for things like, look, you can submit a bid and I can submit a bid and, uh, Pam can submit a bid to an auction, and whoever has the highest bid is going to win the prize, whatever the prize is. And so the details of how the auction works matter–for example, an auction might be set up to pick the second-highest bid, which actually turns out to have some very nice properties. And we’ve done some work with, um, some of the other companies in Hyperledger, building out a blind auction that’s very similar to what might be used for doing wireless spectrum auctions. And that contract is a very complex contract, because it encodes all of the characteristics of what is historically a very complex process for auctioning off the spectrum that way.
So the smart contract is basically an encoding of a set of behaviors that we all agree are acceptable for our community. Camille: And then what happens if somebody violates the contract? Mic: Um, in theory, that contract is truth, you know, in the same sense that the blockchain is truth. Whatever the contract does, that’s what’s supposed to happen. And so at some level there is no violation. It’s not like you can walk away from a contract in a kind of institutional sense today; because the contract is encoded as behavior, you get what you get. And this is actually both a positive and a negative, because sometimes the contracts aren’t well written and people find the loopholes, um, and they manipulate them. Camille: Or the world changes and suddenly the contract doesn’t make sense, but you can’t renegotiate because you’re pushed to go (laughs). Mic: It’s very hard to evolve them. Yes. Camille: Okay. Um, I want to ask you one last question, which is: if you were giving the CEO of a Fortune 100 company advice as he or she was about to walk into a board meeting where the topic of conversation was going to be blockchain–what our plans are and what they should be. This person isn’t going to become an expert, um, during your elevator ride. But what would you tell the person to think about, or what kind of questions should they be asking? Mic: So I would remind this person that the blockchain is not the application. Right? You have to think about what the application is. What is the problem you’re trying to solve? Solve the problem. Blockchain is a useful technology for addressing some problems, but it’s not the solution for all problems. There are almost always ways that are going to be more efficient, more cost-effective, but they may require us to buy into a little bit of trust. We have to understand what the problem is we’re trying to solve. Maybe blockchain and decentralization is the right solution.
In some cases it absolutely is, but not in every case. Camille: Okay. So in what cases is it absolutely the right decision? Mic: Um, when you need persistence that outlasts a single organization. When you would not trust any organization with the kind of power that would be given to somebody who can control those transactions. So for example, in a supply chain space, do I want one organization being responsible for all of the information in an end-to-end supply chain? Hmm. In one of those situations, it doesn’t take a lot of distrust to see that you need something that’s going to be more decentralized than that. Identity systems are another situation where we get a little uncomfortable if one organization controls identity and identity management that way. And I think the final thing is where we really need something that doesn’t require us to stand up a single organization in order to make it happen. There are just times when we want to have something that spans organizations. We want it to be lightweight, something we can spin up very quickly. And again, that’s not really what Bitcoin is trying to solve, but many of the enterprise blockchain solutions are looking at exactly those kinds of things–so we can stand something up very quickly, get some agreement on some things, and then tear it down if we’re done with it. Camille: So is it relatively quick to set it up? What do you do? Mic: There are a lot of these places that are providing blockchain in a box that you can stand up. You can very quickly stand up a solution, experiment with it, figure out what works for you, and then tear it down if you don’t like it, or if you’re done with it. Camille: Okay, well, thank you very much, Mic. Appreciate the talk. Mic: Absolutely. Thanks, Camille. Camille: Thanks for joining us today on What That Means.
We’ll dissect more terms in the weeks ahead, and for more discussions about technology and security, be sure to catch the next episode of Cyber Security Inside. Announcer: Subscribe and stay tuned for the next episode of Cyber Security Inside. Follow @TomMGarrison on Twitter to continue the conversation. Thank you for listening.
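The hash-linked structure and proof-of-work consensus Mic describes in this episode can be sketched in a few lines of Python. This is an illustrative toy, not any production blockchain: the block fields, transaction strings, and difficulty value are made up for the example.

```python
import hashlib
import json

def block_hash(block):
    # Hash the block's contents deterministically (sorted keys).
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def make_block(transactions, prev_hash, difficulty=2):
    # Proof of work: search for a nonce so the block's hash
    # starts with `difficulty` zeros.
    block = {"tx": transactions, "prev": prev_hash, "nonce": 0}
    while not block_hash(block).startswith("0" * difficulty):
        block["nonce"] += 1
    return block

def valid_chain(chain, difficulty=2):
    # Each block must meet the difficulty target and link to its
    # predecessor's hash, so rewriting any historical transaction
    # breaks every later link -- the "immutability" of the chain.
    for prev, cur in zip(chain, chain[1:]):
        if cur["prev"] != block_hash(prev):
            return False
        if not block_hash(cur).startswith("0" * difficulty):
            return False
    return True

genesis = make_block(["genesis"], prev_hash="0" * 64)
b1 = make_block(["camille pays mic 100"], prev_hash=block_hash(genesis))
chain = [genesis, b1]
print(valid_chain(chain))      # True
chain[0]["tx"] = ["tampered"]  # rewrite history...
print(valid_chain(chain))      # ...and the links no longer verify: False
```

As Mic notes, agreement that the chain verifies is not the same as the transactions being true; the sketch only shows why silently deleting or editing a past entry is detectable.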
Although they are mass produced, every firearm is unique, and when fired, they leave unique markings called toolmarks (see photos in gallery) on the bullet and cartridge casing. Law enforcement agencies have used these “fingerprints” to match firearms with bullets as part of their criminal investigations for more than a century. While forensic evidence of this kind wouldn’t likely be enough to get a conviction on its own, it has played a crucial role in linking suspects to crimes, and the ability of firearms examiners to make those matches has never been a source of controversy … until recently. In 2009, a report by the National Academy of Sciences questioned, among other things, the lack of objective methods for evaluating and identifying toolmarks. To address this, the Academies recommended development of objective methods rooted in mathematics and statistics. Development of objective methods, however, is hindered by the lack of access to diverse toolmark data sets. During a symposium held at NIST called “Measurement Science and Standards in Forensic Firearms Analysis 2012,” one of the main needs the attendees identified was for a database where bullets, cartridge cases, and toolmark surfaces could be shared among researchers. Having such a database, they said, would make it easier for them to develop and evaluate new systems, methods, and algorithms for matching bullets with firearms. In 2013, I was awarded a grant by the National Institute of Justice (NIJ) to begin creating this database. As my colleagues Johannes Soons, Robert Thompson and I started to look for datasets of bullets and cartridges cases, we found that a lot of research had been conducted by firearms examiners, university researchers, and instrument manufacturers all over the U.S. since the 2009 report. 
Before we approached them, though, we decided to begin to design the open-access database using the 500 measurements of bullets and cartridge cases gathered as part of the Forensic Toolmark Analysis Project (FTAP). Before we could even begin building the database, we had to figure out how to get all the data into the same format. We had generated all of our data at NIST using the same 3-D confocal microscope and saved all of it in the ASCII text format. This worked pretty well for us, but it wasn’t going to work once we expanded this database to include multiple instruments and other researchers. We needed to find a common standard file format for everything going into the database. This format needed to be efficient, standardized, and fit the firearm toolmark measurement requirements. Ryan Lilien from Cadre Research and I formed an informal group of researchers and practitioners and started discussing the need for an interoperable file format. The teleconferences were heated at times, with everyone promoting their own file format as “the one,” but after a couple of meetings, we discovered an ISO (25178-72) format called XML 3-D Surface Profiles (X3P). After doing a bit of research, we agreed that this was the format that we should use for all 3-D topography measurements of firearm toolmarks. Even better was the fact that the major instrument manufacturers either already supported X3P or were willing to adopt it. Now that the file format issue was resolved, we focused on filling the database. The data we had gathered as part of FTAP came from the bullets and casings that we had fired under ideal laboratory conditions using only a couple of types of guns. For the database to be successful, I desperately needed data from a diverse set of firearms and real-world examples. I first approached researchers I had met in the past, but for the most part they had built their datasets to answer their individual questions, so they were limited in size and scope.
I knew I had to cast a wider net to find the diversity I needed. In 2014, I presented the NIST Ballistics Toolmarks Research Database (NBTRD) at an Association of Firearm and Tool Mark Examiners (AFTE) training seminar. I gave the audience a sense of what I was trying to build, and, because I knew each major crime laboratory housed its own reference collection of firearms that could prove invaluable for the database, I asked them for their help. I expected to arouse some interest, but I was surprised at the number of examiners from local, state, and federal forensic laboratories who approached me once I stepped down from the podium. Many offered test fires from previous research they had conducted, and some offered to generate test fires from their entire reference collection. Before I knew it, I was being inundated with data. One of the laboratories that agreed to help was the Prince George’s County Police Department (PGPD) here in Maryland. I worked closely with Scott McVeigh from PGPD’s firearms examination unit and came up with a plan to use their reference collection. It was great having a local laboratory I could turn to for help. Before I knew it, I had collected over 1,200 test fires for the database, many more than I could’ve imagined at the beginning of the project. Now that I had all this data, I needed a good way to share it. Up to this point, I only had zipped files for each dataset with no real way to index or search the data. In 2015, the NIJ awarded an extension for the NBTRD to develop a web database to index the collected data as well as allow users to upload their own data to further build the scale and diversity of the database. Luckily, NIST has the expertise for this task, and a year later, I announced the live launch of the NBTRD at the 2016 AFTE meeting in New Orleans. A year later, I’m happy to report that the future of the database is bright. I’ll personally be curating and adding more and more datasets.
I’ve recently completed an agreement with the FBI’s Firearms and Toolmark Unit (FTU) to collaborate and share knowledge in the future. Most importantly, the FTU houses a reference collection of over 7,000 firearms and thousands of different ammunition types. They are currently going through their entire reference collection and shooting at least six different ammunition types from each firearm. Toolmark testing has been an essential forensic tool for more than a century. It’s been great to have the opportunity to contribute toward the improvement of this discipline. My experience with building this database taught me that, while people may disagree about the specifics, any obstacle can be overcome so long as the group is committed to achieving the same goal. I hope the research generated from the NBTRD will one day enable examiners testifying in criminal trials to say, and back up, how confident they are that a bullet came from a particular gun. Until that day, my colleagues and I will keep working to improve the science of firearms and toolmark identifications.
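The “objective methods rooted in mathematics and statistics” that this database is meant to enable often start from something like a correlation score between two measured surface traces. The sketch below computes a normalized cross-correlation between two hypothetical 1-D striation profiles; real toolmark comparison algorithms operate on full 3-D topographies with alignment, filtering, and statistical error rates, so this shows only the core idea, and all of the numbers are invented for illustration.

```python
import math

def normalized_cross_correlation(a, b):
    """Pearson-style similarity of two equal-length profiles, in [-1, 1]."""
    if len(a) != len(b):
        raise ValueError("profiles must be the same length")
    n = len(a)
    mean_a = sum(a) / n
    mean_b = sum(b) / n
    num = sum((x - mean_a) * (y - mean_b) for x, y in zip(a, b))
    den = math.sqrt(sum((x - mean_a) ** 2 for x in a)
                    * sum((y - mean_b) ** 2 for y in b))
    return num / den

# Hypothetical striation depth profiles: two firings from the same barrel
# should correlate strongly; a different barrel should not.
same_barrel_1 = [0.0, 1.2, 0.8, -0.5, -1.1, 0.3, 1.0, -0.2]
same_barrel_2 = [0.1, 1.1, 0.9, -0.4, -1.2, 0.2, 1.1, -0.3]
different     = [1.0, -1.0, 1.0, -1.0, 1.0, -1.0, 1.0, -1.0]

print(normalized_cross_correlation(same_barrel_1, same_barrel_2))  # close to 1.0
print(normalized_cross_correlation(same_barrel_1, different))      # much lower
```

A score like this is only one ingredient; turning it into the kind of statistically backed confidence statement an examiner could testify to requires large, diverse reference datasets, which is exactly what the NBTRD provides.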
From everyday individuals to major corporations and even the federal government, anyone is vulnerable to hacking in today’s threat-heavy digital environment. However, some organizations and systems tend to attract even more attention from hackers, with several concerning incidents having made headlines in the last few years. It’s no secret that businesses are vulnerable, but, unfortunately, the same can be said for government elections. This propensity for breaches could spell big trouble as we look ahead to 2020.

Recent Headline-Making Hacks

One of the most concerning signs of trouble occurred at DEFCON, where an 11-year-old attendee successfully hacked into an imitation of the Florida state voting website in under ten minutes. This young hacker was easily able to change the results of the mimicked election. And he wasn’t alone; several other kids and teens in attendance managed to infiltrate a variety of imitation websites and systems. The takeaway? Our elections are not nearly as secure as we once assumed. The National Association of Secretaries of State claims that the hacking stunt did not accurately convey current election security. However, real-life scenarios indicate that polling sites are just as vulnerable as DEFCON’s hacking efforts suggest. Unfortunately, the effects of current election vulnerabilities have gone way beyond amateur hacks at national conventions. In May 2019, for example, Florida Governor Ron DeSantis revealed that Russian hackers managed to obtain access to multiple voter databases in 2016. These sophisticated hackers sent emails containing malware to county employees. In turn, the hackers received access to seemingly off-limits voter registration information. Some states have recently taken extra efforts to prevent situations such as Florida’s alarming 2016 hack. Pennsylvania election officials, for example, recently warned counties of the urgent need for security updates.
Since then, the New York Post reported that nearly 60 percent have taken action. More than $14 million in primarily federal funds have been dedicated to the implementation of new and improved electoral systems. However, even this effort may not prove sufficient.

How Polling Sites Are Targeted by Hackers

Hackers may call on a variety of tactics to infiltrate elections. According to the U.S. Department of Homeland Security, phishing emails are especially common. These fraudulent emails may appear to come from reputable sources, but they often trick otherwise savvy professionals into providing access to sensitive databases. Issues may also accompany unprotected USB or Ethernet ports, as evidenced by several hackers involved in the previously mentioned DEFCON effort.

How to Prevent Data Hacking

Polling hacks may be increasingly common, but they are by no means inevitable. With properly implemented security protocol, even the most devious hackers can be kept at bay. However, a proactive approach is essential. Key measures for reducing the risk of hacking include:

- Upgrading to Windows 10 – Far too many individuals and businesses have stubbornly clung to Windows 7, even when doing so is clearly not in their best interest. This deep-seated reluctance to let go of the familiar is understandable, but it could ultimately lead to devastation. The good news? Windows 10 is not nearly as frightening as Windows 7 devotees believe. In addition to offering the functionality of Cortana and the return of the beloved start menu, it provides advanced security protocol designed to proactively address common threats. Meanwhile, technical support for Windows 7 is set to end soon, leaving polling sites that fail to upgrade that much more vulnerable.
- Making the Most of Windows 10 Security Features – A simple upgrade to Windows 10 will go a long way towards improving data security, but that alone may not prove sufficient.
It’s time to go beyond merely tolerating Windows 10 – instead, make a conscious effort to embrace its security features. Top options include Windows Defender SmartScreen, User Account Control, Application Guard, and Windows Defender Device Guard. The more layers of data protection, the better.
- Setting Aside Time for Regular Audits – Despite implementing extensive security programs and protocol, many organizations remain highly vulnerable in ways they cannot possibly know until they are targeted. Audits provide a closer look at lingering risks, thereby ensuring that every loophole is closed long before it can be manipulated.
- Pursuing Thorough and Regular Training for All Employees – Iowa Secretary of State Paul Pate fears that overlooked public employees may be the most likely to give up sensitive information to hackers. Thus, education may prove one of the most valuable forms of prevention. Employees should be made aware of the many risks that accompany everyday tasks such as answering emails. They should also understand what’s at stake – our elections and the very fiber of our democracy.

Hacking incidents may be on the rise, but they aren’t inevitable. Operating system upgrades, audits, and employee education can all go a long way towards keeping our elections secure. Likewise, these efforts can prove valuable among businesses and organizations of all sizes and in every sector. A proactive approach can make all the difference in this age of security threats.

Prevent Database Hacking with Help from a Trusted IT Company

Whether you’re involved in local elections or are simply eager to ramp up security for your small business, you can benefit from the services of a trusted company. NerdsToGo is a top option – we implement the advanced IT infrastructure needed to reduce risks across several organizations. We also provide data backup services to ensure that, in a worst-case scenario situation, sensitive data can be retrieved.
Reach out today to learn which protocols our cybersecurity company in Portland can help you implement – and to discover how we can assist with disaster response. You will never regret having a highly respected team of experts in your corner.
Chip and NAND flash-memory manufacturers in Japan were affected by the earthquake and tsunami last week, but considering the circumstances the outlook could be far worse. Japan is a central hub of the technological supply chain, and the consequences of the catastrophic 8.9 magnitude earthquake that hit the country March 11 are beginning to come into clearer focus. Infrastructure, electric power supply and fears of meltdowns at nuclear plants are the biggest concerns for the people of Japan and its industries in the short term. Ports, airports, roads and rail lines have all been disrupted because of the quake with some of the major industrial export ports facing months of rebuilding before they'll be able to resume normal output. Yet, it looks like some of the major manufacturers of NAND flash-memory wafers and other semiconductor plants were relatively unharmed outside of minor damage and power outages caused by the earthquake. NAND flash memory, used in devices such as USB drives, smart phones and tablets, has been a major driver of mobile computing over the last decade. Toshiba and SanDisk factories produce flash memory south of Tokyo. The epicenter of the earthquake and subsequent tsunamis was located in northeast Japan, approximately 250 miles from Tokyo. According to the Wall Street Journal, the joint venture factories of Toshiba and SanDisk felt modest impact. Toshiba accounts for 35 percent of the world’s flash-memory output. SanDisk issued a press release March 11 stating that its factories in Yokkaichi Mie Prefecture were operational as of Friday. “The epicenter of the powerful earthquake was approximately 500 miles from Yokkaichi, Mie Prefecture, Japan, the location of the two Toshiba-SanDisk joint-venture semiconductor manufacturing plants, Fab 3 and Fab 4,” the SanDisk release stated. “Both fabs were down for a short period of time due to the earthquake and were back up and operational as of Friday morning, Pacific Time. 
There were no injuries to SanDisk employees based in Japan. SanDisk's current assessment is that there has been minimal immediate impact on wafer output due to the earthquake. SanDisk continues to assess the situation for any potential future impact that may arise from issues related to Japanese infrastructure and the supply chain.” Research firm IHS iSuppli Market Intelligence has released a report on the state of the Japanese technology industry and what some potential effects from the earthquake might be:
- Japanese suppliers accounted for more than one-fifth of global semiconductor production in 2010. Companies based in Japan generated $63.3 billion in microchip revenue in 2010, representing 20.8 percent of the worldwide market.
- Japan-headquartered companies in 2010 ranked No. 3 in semiconductor production among the world's major chip manufacturing regions.
- DRAM manufacturing in Japan accounts for 10 percent of the worldwide supply based on wafer production.
The Wall Street Journal reported March 11 that two manufacturers of microcontroller chips – Freescale Semiconductor and Renesas Electronics – were affected by the quake. The Freescale factory, located in Sendai in Miyagi Prefecture, which was closest to the epicenter, was evacuated and without power. Microcontroller chips are used in devices that are automatically controlled, such as automobile engine control systems. Sony has suspended production in eight plants in Miyagi and Fukushima Prefectures (Fukushima is located directly south of Miyagi), a move that will affect its output of “magnetic tape, optical films, laser diodes, lithium-ion rechargeable batteries, CD, DVD, and Blu-ray discs,” according to technology blog Engadget. The biggest problem for Japanese manufacturers will be electricity as the nuclear plants affected by the earthquake struggle against meltdowns. Prime Minister Naoto Kan announced Sunday that nine prefectures in Japan will face rolling blackouts, according to the Japan Times.
The report cited Economy, Trade and Industry Minister Banri Kaieda as saying that Japan’s energy demand amounts to 41 million kilowatts per day but the country currently can produce only 31 million kilowatts. The Tokyo Electric Power Company has said that the rolling blackouts might last until the end of April. Toshiba issued a release saying it would adhere to TEPCO’s decision. The problems facing Japan are steep in the aftermath of the earthquake, and the world technology supply chain will be affected in the short term as the country works to remedy its damaged infrastructure. Overall, though, it appears that the country's major manufacturers missed the brunt of the earthquake's force and will be able to resume production, though probably not at export scale, in the near future.
<urn:uuid:ad0a25c9-0f30-42a4-9366-4653ab490201>
CC-MAIN-2022-40
https://gcn.com/2011/03/japan-quake-disrupts-world-technology-supply-chain/282206/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335304.71/warc/CC-MAIN-20220929034214-20220929064214-00128.warc.gz
en
0.958817
959
2.59375
3
The Internet of Things (IoT) refers to the interconnection of physical objects, each of which is addressable over a network, for example via its own IP address. This Gartner report states that, by 2020, the IoT is expected to grow to 26 billion units. This means that all data centers must be prepared for the increase in data, not just in terms of storage but in other aspects as well. Here are six challenges that data centers should prepare for as a result of the IoT:
1. Volumes of data storage: The data of the IoT could come from personal consumers, devices and large enterprises. The combination of these sources could lead to an astronomical growth in the quantity of data that must be stored by a data center. While some amount of scalability is always planned for, in order to be proactive, a data center must plan specifically for the IoT.
2. Data security and privacy: With an increase in the amount of data, the security measures in the data center also need to be strengthened accordingly. The multiple devices used to access data add to the concerns about breach of privacy. These devices may vary from the smallest phone or tablet to a smart kitchen appliance or automobile.
3. Network requirements: Most data centers are equipped for medium-level bandwidth requirements for access to the data. With the IoT, the number of connections and the speed of access would both have to undergo significant improvements to satisfy the growing requirements.
4. Scaling of storage architecture: The increase in the storage requirement could also lead to a challenge in the way the storage and servers are configured. It is recommended that a distributed structure be adopted to make storage and access most efficient.
5. Multiple locations: Providing a solution for storage of IoT data that comes from multiple locations could be a challenge for a data center at a single location. The trend might need to move towards a collection of connected centers that are administered from a central location.
6. Cost effectiveness: The type of detailed backup data that is possible in the current landscape may no longer be affordable, both in terms of storage and in terms of required network bandwidth. This might encourage a need for selective backup with a well-thought-out frequency of performing the operation.
At Lifeline Data Centers, we believe that identifying future challenges is the first step towards finding solutions. Contact us to learn more.
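To make the scale of that first challenge concrete, here is a back-of-envelope estimate. The per-device data rate below is an illustrative assumption, not a figure from the Gartner report:

```python
# Back-of-envelope estimate of daily IoT data ingest.
# The per-device rate is an illustrative assumption.

DEVICES = 26_000_000_000               # Gartner's 2020 projection: 26 billion units
BYTES_PER_DEVICE_PER_DAY = 1_000_000   # assume ~1 MB/day for an average device

total_bytes_per_day = DEVICES * BYTES_PER_DEVICE_PER_DAY
total_tb_per_day = total_bytes_per_day / 10**12

print(f"Assumed daily ingest: {total_tb_per_day:,.0f} TB/day")
# Even at a modest ~1 MB per device per day, that is on the order of
# 26,000 TB (26 PB) of new data arriving every single day.
```

The exact numbers matter less than the shape of the result: any plausible per-device rate multiplied by tens of billions of devices lands in petabytes per day, which is why proactive capacity planning is called out above.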
<urn:uuid:baffffcb-f52b-4211-b561-4ff50ce50448>
CC-MAIN-2022-40
https://lifelinedatacenters.com/colocation/6-ways-the-iot-could-impact-data-centers/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335469.40/warc/CC-MAIN-20220930113830-20220930143830-00128.warc.gz
en
0.945686
478
2.953125
3
Banking is a highly regulated industry and for good reason: it involves money, and lots of it. Regulations in the banking industry are necessary to protect the government, financial institutions (from themselves), and most importantly, consumers like you and me. Remember when they were too big to fail in 2008? PSD2 and open banking regulations protect consumers, and also streamline processes by bringing standardization. In the European Union (EU), there is the Revised Payment Services Directive (PSD2), an EU directive administered by the European Commission to regulate payment services and payment service providers throughout the European Union and European Economic Area. But PSD2 is mostly known for compelling European banks to create best practices in APIs, vendor integration and data management with the objective of standardizing open banking. What is open banking? Open banking (or “open bank data”) is a banking practice that provides third-party financial service providers open access to consumer banking, transaction, and other financial data from banks and non-bank financial institutions through the use of application programming interfaces (APIs). In other words, open banking is a secure way to give service providers access to customers’ financial information, and it’s an innovation that allows third parties (for example, mortgage companies) to build apps and services around financial institutions like banks. Financial institutions can create new revenue streams, and customers gain access to a multitude of offers with one set of credentials. There is another reality inside the European Union, and it’s called the General Data Protection Regulation (GDPR). The GDPR is a regulation in European Union law on data protection and privacy in the EU and the European Economic Area (EEA) that, among other things, addresses the transfer of personal data outside the EU and EEA.
Any company that stores or processes personal information about EU citizens within EU states must comply with the GDPR, even if they do not have a business presence within the EU. Companies are certainly required to comply if they do have a presence in an EU country, and that includes EU and EEA-based financial institutions that have embraced open banking. One of the many GDPR checklists for data controllers requires relevant companies to follow the principles of “data protection by design and by default,” including implementing “appropriate technical and organizational measures” to protect data. In other words, data protection is something to be considered whenever you do anything with other people’s personal data.
Push and Pull Dynamic Between PSD2 and GDPR
Here is the critical issue: how can you securely manage customers’ financial information, as leveraged in open banking, while remaining in full compliance with GDPR? The reality is that there is an inherent push and pull dynamic between GDPR and PSD2. The former pulls back to preserve the safety, privacy and control of customers’ private data, whereas the latter pushes towards the convenience, efficiency and enhanced productivity of open banking. Given that inherent conflict, let’s take a closer look at open banking and especially at the way customer data is managed. The reality is that the industry is still plagued by identity compromises, which open the door to customer and employee impersonation, with consequences ranging from financial fraud (obtaining a loan with a synthetic identity, for example) to major data breaches. Open banking implies that a customer’s username and password are used to access the website of a bank where he or she has a checking account, for example, before accessing external financial services by leveraging the very same set of credentials. Moreover, the individual’s financial information can be shared among financial institutions based on his or her activity and requests across several institutions.
Sharing data means opening information systems, which inherently further exposes organizations to digital security threats that can lead to incidents disrupting the availability, integrity or confidentiality of data and the information systems on which the entire open banking model relies. And even when individuals and organizations agree on and consent to specific terms for data sharing and data re-use, including the purposes for which the data should be re-used, there always remains a significant level of risk that a third party may intentionally or unintentionally use the data differently. This conflicts directly with some of the principles of GDPR on lawful basis and transparency, privacy rights and data security. Financial institutions involved in open banking simply cannot ensure that GDPR regulations are being followed. The many data breaches that have occurred in the last two years are evidence of that, with financial institutions preferring to pay fines rather than find sustainable solutions to protect the storage and sharing of customers’ financial information.
Creating Privacy and Convenience
Is there a platform that can eliminate this push and pull dynamic between GDPR and PSD2? Yes, and it consists of three elements: (1) An indisputable customer (and employee) ID proofing process that involves the triangulation of a claim (ID photo, address, last name, etc.) with a multitude of company- or government-issued documents (driver’s license, passport, etc.) as well as sources of truth (government databases, the passport’s issuing country, the passport chip, credit cards, bank accounts, etc.), including advanced biometrics like a liveness test. This ID proofing process eliminates the potential use of synthetic identities. (2) Usernames and passwords, along with the one or two extra factors used for authentication, are replaced by a liveness test. A liveness test is as good as, if not more pertinent than, a physical presence.
The liveness test eliminates the risk of account takeover and, consequently, data breach. (3) The user data is stored encrypted in the blockchain, which virtually eradicates the risk of cyberattacks. A blockchain network is an infrastructure that powers peer-to-peer interactions. So the user remains in control of his or her personal and financial information at all times and, when that data is about to be shared with a third party, he or she consents to send only the information that is pertinent to be shared. The GDPR’s guidelines on transparency, privacy rights and data security are thus respected and followed.
Conclusion: GDPR and PSD2 Can Co-Exist in Compliance
Now we know that the convenience, efficiency and enhanced productivity that technology brings can co-exist with the regulations that legislators enact to increase security, privacy and control. Both can be fully complied with, as long as there is a platform in between that bridges the gap. This solution does not come from simply tweaking an existing technology, like a 2FA or MFA solution, to authenticate the customer of a European financial institution who leverages the open banking system. No, it requires an advancement that goes far beyond the levels of IT security, identification and authentication that we have used in the past. The use of advanced biometrics, the storage of user data encrypted in the blockchain, and making sure the user remains in control of his or her data at all times represent this paradigm shift.
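The consent principle described above — sending only the information that is pertinent to be shared — can be illustrated with a minimal sketch. The field names and consent structure here are hypothetical, not taken from any real open banking API:

```python
# Minimal sketch of consent-scoped data sharing.
# Field names and the consent structure are illustrative assumptions,
# not taken from any specific open banking API or vendor platform.

customer_record = {
    "name": "Jane Doe",
    "iban": "DE89370400440532013000",
    "balance": 1523.40,
    "transactions": ["txn-001", "txn-002"],
    "national_id": "REDACTED",  # never leaves the bank without explicit consent
}

def share_with_third_party(record: dict, consented_fields: set) -> dict:
    """Return only the fields the customer explicitly consented to share."""
    return {k: v for k, v in record.items() if k in consented_fields}

# A mortgage provider may only need balance and transaction history:
consent = {"balance", "transactions"}
shared = share_with_third_party(customer_record, consent)
print(sorted(shared))  # only the consented fields cross the boundary
```

In a real PSD2 deployment this filtering would sit behind an authenticated API (for example, an OAuth-style consent token), but the principle is the same: the data subject's consent determines exactly which projection of the record crosses the organizational boundary.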
<urn:uuid:ab1123c1-df1a-4a95-b962-86aadd813b1a>
CC-MAIN-2022-40
https://www.1kosmos.com/identity-management/the-one-solution-that-reconciles-gdpr-and-psd2/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337504.21/warc/CC-MAIN-20221004121345-20221004151345-00128.warc.gz
en
0.944649
1,429
2.515625
3
Published on July 30, 2020. Backup vs archive: two words that are often used interchangeably but have very different meanings. While both technologies support primary data storage, there are key differences between them. Let’s explore the five key differences between backup and archive technologies.
1. Definitions: A backup is a copy of your current data that you use to restore the original data if it’s ever damaged. An archive is historical data you must keep for long-term retention reasons, such as compliance.
2. Financial Value: Let’s talk about what many in IT are thinking about most: budget. Tiering data and putting it in its correct resting place via archiving is more cost-effective than backup. That’s why many companies choose to use tape storage as their primary mode of archiving data. While disk and cloud can help you get data quickly via lightning-fast backups, tape is a cost-effective method of storage for the data you need to keep for retention, litigation or business purposes.
3. Solutions: Backup and archive solve different problems. Now, let’s take a deeper dive.
– Backup: Backup protects both your active and inactive data (all of your production data). You can back up your information via tape, disk or the cloud. A backup is a copy of production information; your data still resides on the production storage systems themselves. That means if your backup system faces a major data loss (due to a security breach, disaster, infrastructure failure, etc.), you could continue normal operations. Your production data won’t be impacted, though you would be operating at an increased risk.
– Archive: Archive solutions are often used to retain inactive or older data for extended periods of time. Archives are optimized for low-cost, long-term storage.
Archives hold production data, meaning a loss or corruption of an archive system will likely result in the permanent loss of production information. Keep in mind that this data will likely be older or less used, but it could also be the only copy.
4. Access: Backup and archive solutions offer different levels of access based on use.
– Backup: Typically used for fast, large-scale recoveries. Backup data is written to deduplication appliances or tape libraries for faster access to large volumes of information. Backup applications may be used to protect application and OS files, in addition to individual data objects—though backup is optimized for larger-scale recoveries. It’s best for recovering applications or complete systems.
– Archive: Designed to store individual data objects such as email messages, files and databases, along with their metadata. An archive can provide quick, specific access to stored information—so it’s easy to find that specific email from five years ago. Metadata can help you narrow your content search. Unlike backup systems, however, archives do not provide volume-level or full server recoveries. They contain only a subset of your business’ data.
5. Disaster Recovery:
– Backup: Disaster recovery (DR) is closely tied with backup. IT professionals typically run backup jobs to protect their information and a separate process to move their data offsite for disaster recovery purposes, creating a robust data protection process.
– Archive: Maintaining disaster recovery for your archive system can be difficult and costly. Organizations are often forced to purchase identical, expensive archive systems (one for the DR site and one for the production environment) because most replication implementations are proprietary. Unlike traditional DR, the ability to control replication, roll back data to previous restore points and manage bandwidth usage varies widely depending on the archive system.
Conclusion: Each is Great, but Both are Better.
Though backup and archive solve very different issues, they can easily complement one another in your company’s overarching data management plan. If your organization has had trouble delineating between the two in the past, accessing and retrieving your archive data may be a complex and time-consuming process. Working with a vendor that can manage your data across its lifecycle can significantly ease this process. To learn more about the differences between backup and archive, read the 2020 Active Archive Report, which offers recommendations on how to tier your data.
<urn:uuid:bc9da879-82c8-428b-948c-ad4ca42a7808>
CC-MAIN-2022-40
https://www.ironmountain.com/blogs/2020/5-major-differences-between-backup-vs-archive
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337504.21/warc/CC-MAIN-20221004121345-20221004151345-00128.warc.gz
en
0.933048
884
2.734375
3
A zero-day exploit is an attack that takes advantage of a security gap in a program or application. There is an inherent problem in all threat detection models that depend on statistics and signatures: although these methods are appropriate for recognised security threats, they have been known to perform inadequately against zero-day attacks. Because traditional methods depend on databases of recognised threats, they have proven to have very limited ability to keep up with changes in attack methodologies. In a zero-day attack, the attackers find a weakness in the source code of a program or application and develop malicious code that exploits the security gap. The process of detecting zero-day attacks begins with Logsign SIEM correlation techniques applied to logs from TI, web proxy, AD authentication, DNS server, IPS, process events, and endpoint protection platform (EPP) sources. By means of correlation and behavior analysis, a user is tagged as Attacker, Victim, or Suspicious. Following the first activity started by the attacker on the user's side, the logs are enriched through behavior analysis of the logs coming from these sources. The log activity generated by the attacker, both from the inside out and from the outside in, is correlated and shown on the relevant dashboard panels. The results are shared with the relevant IT managers, and alert mechanisms such as e-mail and SMS are set up. To prevent zero-day attacks from reaching C&C servers on the Internet and completing the exploit process, Logsign SIEM writes the relevant deny rules via API integrations with Palo Alto, Fortigate and Checkpoint firewalls. The evolution of digital threats obliges you to have qualified analysts on your security team.
Threat detection needs human intuition to decrease the possibility of an unnoticed attack. As a powerful Windows scripting language, PowerShell is used by both IT specialists and attackers. PowerShell is an on-board command-line tool. A Windows file server acts as file and folder storage that can be accessed by many users. Even though a collaborative working environment has many benefits, it can be difficult to prevent unauthorized access by monitoring the permissions on shared folders.
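The correlation-and-tagging flow described above can be sketched in miniature. This is a simplified illustration of the general idea, not Logsign's actual rule engine; the event fields, event types, and thresholds are all assumptions made for the example:

```python
from collections import Counter

# Toy correlation: tag a user by counting suspicious events across several
# log sources. The field names and thresholds are illustrative; a real SIEM
# correlates far richer context (TI feeds, DNS, IPS, EPP, process events).

SUSPICIOUS_TYPES = {"failed_auth", "dns_to_known_c2", "blocked_by_ips"}

def tag_users(events):
    counts = Counter(
        e["user"] for e in events if e["type"] in SUSPICIOUS_TYPES
    )
    tags = {}
    for user, n in counts.items():
        if n >= 5:
            tags[user] = "Attacker"
        elif n >= 2:
            tags[user] = "Suspicious"
        else:
            tags[user] = "Victim"  # a single hit: likely targeted, not acting
    return tags

events = [
    {"user": "alice", "type": "failed_auth"},
    {"user": "alice", "type": "dns_to_known_c2"},
    {"user": "bob", "type": "failed_auth"},
]
print(tag_users(events))  # {'alice': 'Suspicious', 'bob': 'Victim'}
```

In practice the tags would drive the dashboard panels and the e-mail/SMS alerts mentioned above, and a high-confidence "Attacker" tag could trigger the automated firewall deny rules via API.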
<urn:uuid:2cdac572-a127-4f9d-aa1b-d9bcca421d92>
CC-MAIN-2022-40
https://www.logsign.com/siem-use-cases/identifying-and-detecting-zero-day-attacks/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337971.74/warc/CC-MAIN-20221007045521-20221007075521-00128.warc.gz
en
0.96138
469
2.796875
3
While making a phone call may seem harmless, you should always consider who’s on the other end of the line. Cybercriminals can use callback phishing scams to trick you into calling them directly. Once you’re on the phone, cybercriminals will ask you to share sensitive information or grant them access to your device. In one scam, cybercriminals send you an email that says you’ve subscribed to a service with automatic payments. The email also includes a phone number that you can call if you have any questions. When you call the phone number, the cybercriminals will ask for remote access to your desktop so that they can cancel the subscription for you. If you grant remote access, the cybercriminals will attempt to change your permissions so they can access your desktop later. Then, they can use ransomware to lock you out of your desktop and threaten to release your personal information unless you meet their demands. Don’t fall for these types of scams! Follow the tips below to stay safe:
- Cybercriminals often use fake invoices to trick you into clicking or calling impulsively. Always think before you take action!
- Never call a phone number provided in a suspicious email. Instead, visit the organization’s official website from your browser to find their contact information.
- Never grant remote desktop access to any unverified agents or organizations. In most cases, legitimate support representatives will be able to solve your problem over the phone or via email.
<urn:uuid:dde10176-72f0-47e4-b5c4-a8b4416999f0>
CC-MAIN-2022-40
https://mapolce.com/scam-of-the-week-callback-phishing-scams/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334515.14/warc/CC-MAIN-20220925070216-20220925100216-00328.warc.gz
en
0.918009
337
2.5625
3
For today’s Cisco Certified Entry Network Technician (CCENT) and Cisco Certified Network Associate (CCNA) exam question topic, we will be covering inter-VLAN routing and how it works. The topic is not too hard, but you will see a lot of scenario questions about inter-VLAN routing on the exam. Most of them will have a diagram, and then you will see the various VLANs defined. Now here is a tip that should scare those of you who use CCNA brain dumps: Cisco has caught on to people’s use of brain dumps, and what they have started to do is use the same exact topology diagram in an exam question but slightly change an IP address, VLAN name, trunk or such. I think this is a great way to weed out the paper CCNAs. But anyway, back to our exam question on inter-VLAN routing.
Which two statements are true about inter-VLAN routing in the topology that is shown in the exhibit? (Choose two)
CCNA Inter-VLAN Routing Exam Question
A. Router 1 will not play a role in communications between PC A and PC B.
B. Router 1 needs more LAN interfaces to accommodate the VLANs that are shown in the exhibit.
C. The FastEthernet 1/1 interface on Router1 and Switch B trunk ports must be configured using the same encapsulation type.
D. Router 1 and Switch B should be connected via a crossover cable.
E. The FastEthernet 1/1 interface on Router1 must be configured with subinterfaces.
F. PC E and PC F use the same IP gateway address.
Inter-VLAN routing is used to permit devices on separate VLANs to communicate. “Router-on-a-stick” is a type of router configuration in which a single physical interface routes traffic between multiple VLANs on a network. The router interface is configured to operate as a trunk link and is connected to a switch port configured in trunk mode. The router performs the inter-VLAN routing by accepting VLAN-tagged traffic on the trunk interface coming from the adjacent switch and internally routing between the VLANs using subinterfaces.
The router then forwards the routed traffic—VLAN-tagged for the destination VLAN—out the same physical interface. In this scenario, answer A is wrong because the two PCs are on different VLANs, so they can only communicate at layer 3, which is through the router. B is wrong since the router doesn’t need more than one LAN interface; subinterfaces can work just as well as physical interfaces. The encapsulation type should be the same, in this case set to 802.1Q on all the subinterfaces, therefore C is right. Answer D is wrong since these devices should be connected with a straight-through cable. Devices that require a crossover cable include: router to router, router to PC, switch to switch. E is correct since subinterfaces are required in this scenario to make the inter-VLAN routing work. F is wrong because devices on different LAN segments cannot use the same gateway. So the correct answers were C and E. So you may be asking yourself when, where and why you would use inter-VLAN routing. Sometimes you are going to have a switch with hundreds of ports on it (most switches you will use in a corporate environment will have 10 or so blades of 48 ports, so that could be 480 ports), so you want to think about it from that perspective and not so much from the perspective of a standard 24-port switch. Obviously you do not want them all contending for the same bandwidth, and there can also be security concerns. Thus you will want to break the switch up into logical VLANs. But there will be times when you still need nodes to communicate from one VLAN to another, so you have to provide a mechanism for them to do that. That is where inter-VLAN routing will come into play in your CCENT and CCNA studies! This is something really cool to play with in your home CCENT or CCNA lab. It can be done with pretty much any of our CCNA lab kits that have at least two switches (2950, 2960, 3550, 3560) and one router (1760, 1841, 2801, 2811) with a FastEthernet interface.
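The router-on-a-stick setup the question tests can be sketched as an IOS configuration. This is an illustrative fragment; the VLAN numbers, IP addressing, and interface names are assumed for the example rather than taken from the exhibit:

```
! Router1 -- one physical interface, one subinterface per VLAN
interface FastEthernet1/1
 no ip address
 no shutdown
!
interface FastEthernet1/1.10
 encapsulation dot1Q 10        ! must match the switch trunk encapsulation
 ip address 192.168.10.1 255.255.255.0
!
interface FastEthernet1/1.20
 encapsulation dot1Q 20
 ip address 192.168.20.1 255.255.255.0
!
! SwitchB -- the port facing the router carries all VLANs as a trunk
interface FastEthernet0/1
 switchport mode trunk
```

Hosts in VLAN 10 would then use 192.168.10.1 as their default gateway, hosts in VLAN 20 would use 192.168.20.1, and traffic between them hairpins through the router on the single trunk link. Note that on 3550/3560 switches you may also need `switchport trunk encapsulation dot1q` before `switchport mode trunk`, while 2950/2960 switches support 802.1Q only.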
If you want to check out some other lab kits (these will do ICND1 100-101, ICND2 200-101 and CCNA 200-120), take a look at the kits below so you can replicate these exam questions and really understand them for the real world!
<urn:uuid:f585fdab-17e9-4c30-9421-63a18ebdefb2>
CC-MAIN-2022-40
https://www.certificationkits.com/ccent-ccna-inter-vlan-exam-question/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334515.14/warc/CC-MAIN-20220925070216-20220925100216-00328.warc.gz
en
0.931124
981
3.390625
3
With the next phase of the pandemic upon us, state and local governments need to deploy resources to schools that aid in continuous COVID-19 response and recovery strategies. Many K-12 schools are expected to have well-developed emergency operations plans for responding to public health and other emergencies. Yet, many schools struggled to implement and update these plans throughout the pandemic. Ever-changing rules and guidelines, coupled with evolving scientific recommendations, placed unprecedented pressure on schools as the safety of their students, educators, staff, and families was threatened. Despite these challenges, many educators and school systems have demonstrated a deep commitment to the importance of every student getting an opportunity to learn in a safe environment. Drawing on two years of experience supporting state and federal COVID-19 response and assisting schools in developing safe in-school instruction plans, Maximus Public Health has developed a list of recommended strategies for continuing the improvement of school emergency preparedness. These strategies include:
- Updating preparedness communication and response plans
- Using data to guide decisions
- Investing in workforce development
- Increasing resource coordination
Schools present unique challenges to battling COVID-19—a higher number of contacts per case in classrooms, difficulty maintaining social distancing and masking recommendations, and a large proportion of unvaccinated children. No single strategy is sufficient to prevent the spread of COVID-19, so it is critical for states and local governments to support schools in implementing these recommended strategies in order to keep everyone safe.
Strategy 1: Update preparedness communication and response plans
Effectively communicating changes in guidelines to staff, students, and families ensures these guidelines are followed.
These communications can also point individuals to resources if they have further questions about their specific role in upholding guidelines. Unclear, mismanaged, or culturally incompetent communications can create confusion and, ultimately, put students and staff at risk. That is why Maximus Public Health recommends modernizing school emergency response communications plans so they are evidence-based, tailored to the needs and resources of each school, and include contingency plans for scaling up to meet community needs. Schools should develop, implement, maintain, assess, and update these plans over time. The most effective emergency response communications plans are designed to evolve as the emergency situation evolves but are grounded in a strong foundation. They require the coordination of stakeholders and partners for the development of educational materials, specific policies and procedures, risk protocols, and surge action plans.
Strategy 2: Use data to guide decisions
COVID-19 highlighted the importance of accurate, timely data collection and analysis to manage response efforts better and keep schools open for in-person learning. Having access to and understanding of the most up-to-date information allows schools to better understand risk and make data-driven decisions. We recommend partnering with local public health departments for data integration, exchange, and analysis. Schools play a critical role in capturing and sharing relevant data on contact tracing, case investigation, and vaccine administration. Partnerships that promote data sharing and timely analysis will help ensure data is timely, complete, consistent, and accurate.
Strategy 3: Invest in workforce development
Contact tracers played a critical role in the country’s initial COVID-19 response, but as case counts are declining, so are these positions. State, local, and school-based contact tracing roles should pivot to other public health positions such as community health workers.
These and similar jobs can play a critical role in health promotion throughout schools. Some responsibilities can include but are not limited to coordinating vaccine campaigns and events, answering health-related questions from families, and connecting students to additional resources. While this transition requires additional training of the workforce, these public health professionals carry lived, on-the-ground public health expertise. They can leverage these experiences to support school communities by promoting healthy lifestyles and health education and running initiatives beyond childhood vaccinations to include nutrition, vision, hearing test requirements, and more.
Strategy 4: Increase resource coordination
Having the ability to coordinate between public health and community health services is an important responsibility for schools. Maximus Public Health recommends hiring and training new community health workers within schools to act as these liaisons. This role ensures that health resources are equitably distributed and efficiently coordinated. They can support timely data collection, reporting, referrals to health services, and up-to-date, culturally appropriate information sharing with the community. Community health workers are trusted members of the school community, which allows them to connect with students and families beyond school-related challenges. They can initiate addressing social determinants of health and health equity concerns with families. These non-traditional school roles can help families collect emergency food supplies, access income supplement programs for rent or utilities, and overall improve the coordination needed to keep schools healthy. Having a holistic support system across public health departments and schools, including communication plans, actionable data and supportive staff, will ensure schools continue to respond to the evolving pandemic and are prepared for whatever is next.
This month is a tribute to a close personal friend, co-worker, and technical genius who died tragically in April of 2003. I am writing this to honor his memory as well as show how one person can and has changed the technology we all use today. Laurence (Larry) P. Schermer began as a second shift customer engineer at Univac in 1977 after technical school and then completed a degree in math while working. In 1979, Larry moved to Cray Research, a highly innovative company that sought out equally innovative employees. Larry’s first job at Cray Research was to develop software for the unannounced Cray-1S I/O subsystem (IOS), so that it could read and write IBM 9-track tapes. Larry was part of a team that worked day and night for two straight years to develop software for:
- Booting the IOS
- Communicating with the Cray-1S system
- Disk drivers
- Most importantly, tape drivers and IBM tape emulation channels

This development was critical for Cray Research’s strategy of selling to oil companies for seismic research and reservoir modeling, a market dominated by IBM because of tapes, not CPU performance. Without tape support on Cray systems, I believe that much of the oil found by these companies in the 1980s and early 1990s would not have been discovered, or the cost of finding it would have been significantly higher. Oil prices would likely be higher, which could have affected the entire economy. By the end of the 1980s, almost every major oil company in the world was using a Cray Research system.

UNIX and High Performance Computing

By the mid-1980s, Cray had decided to move away from batch operating systems to a UNIX-based operating system. Many changes needed to be made to UNIX to support supercomputer functions.
Larry helped develop functions such as:
- Memory scheduling of a real memory machine as compared to a virtual memory machine
- A caching mechanism to take advantage of the Cray’s SSD hardware
- The listio(3RT) system call, which conceptually was developed for the older Cray COS operating system

Each of these areas required Larry to make major modifications to the original UNIX source from AT&T. Some of these Larry did by himself, while others were done with the help of co-workers. A major achievement for that time was a new file system that required:
- High performance I/O
- Round-robin allocation
- Separation of data and metadata
- Different allocations for large and small files
- Inodes that were 4K and could store data to reduce seek overhead

Every high performance file system today has either all or almost all of these features, and the same is especially true for the shared file systems. These features reduced the number of disk head seeks, allocation time, and data fragmentation, and also significantly improved performance. All of these innovations were developed largely by a single person for the Cray nc1fs in the late 1980s and are still used today.

Storage Area Networking Ahead of Its Time

Larry left Cray Research in 1992 and took a few years off as, like many of us who work in this industry have been at one time or another, he was burnt out. In 1994, he joined a company called NetStar (later purchased by Ascend, which was in turn purchased by Lucent) that had an idea for a new router. At the time, HiPPI (High Performance Parallel Interface) had become the predominant network interface in the HPC community for high speed networking, given its peak rate of 100 MB/sec. Larry worked with a hardware engineer to build a HiPPI to ATM converter. NetStar’s goal was to link local HiPPI networking to WAN ATM connectivity.
At the same time, Cray Research and other companies such as MaxStrat (a RAID company later purchased by Sun) were producing or planning to produce HiPPI-based storage products. Cray had also developed a shared file system. Larry and I often discussed how nice it would be to access file system data without having to be local to the machine. The product was for the most part a market failure, primarily because HiPPI was not a commodity product and ATM WAN lines were very expensive at the time. The key point, though, is that today we have commodity Fibre Channel directors from McData, Brocade, and Cisco that support WAN blades. We also have many products from Nishan, LightSan, and others that support Fibre Channel to WAN connections. This concept was originally developed in 1994 at NetStar.

In 1995, Larry again walked away from computing and took another year off. Early in 1996, Larry worked on a project to develop a HiPPI device driver for a new Fujitsu supercomputer, the VPP700. Larry had never worked on that operating system, but as usual he dove in head first. Very early on, he realized the performance of the HiPPI channel (~80 MB/sec data rate) would not be achievable from any user applications, given the implementation of the MaxStrat Gen-5 RAID controller. Remember, this was 1996 — very early in hardware RAID’s life. The Gen-5 had a very small cache (24 MB) and allocated 64KB per device. Given the backend structure of the Gen-5, the best performance from a LUN would come from a 5+1 RAID-5 internally striped with another 5+1; 8+1 RAID-5 was not supported. Therefore, the stripe value would be 640KB (10*64KB). As almost all file systems at the time allocated in powers of 2, and HPC applications very often read and wrote in powers of 2, Larry immediately saw a huge problem.
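The clash Larry spotted can be shown with a little arithmetic. The sketch below is only an illustration built from the numbers in this article (a 640KB full stripe made of 10 x 64KB chunks), not code from any of these systems; it counts how many stripes a write touches only partially, since each partial stripe forces the controller into a read-modify-write cycle.

```python
# Toy illustration: why power-of-2 I/O requests clash with a
# 640 KB (10 * 64 KB) RAID-5 stripe such as the Gen-5's.
STRIPE = 10 * 64 * 1024          # 640 KB full-stripe size of the LUN

def partial_stripe_writes(offset: int, length: int) -> int:
    """Count stripes touched only partially by a write; each one
    costs a read-modify-write cycle in the RAID controller."""
    first, last = offset // STRIPE, (offset + length - 1) // STRIPE
    partial = 0
    for s in range(first, last + 1):
        s_start, s_end = s * STRIPE, (s + 1) * STRIPE
        # A stripe is partial unless the write covers all of it.
        if offset > s_start or (offset + length) < s_end:
            partial += 1
    return partial
```

A full-stripe 640KB write touches no partial stripes, while a 1MB power-of-2 write starting at offset 0 still leaves one stripe partial, which is exactly the mismatch the driver cache described next was built to absorb.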
In his mind, the solution was simple — add to the device driver a cache that would:
- Read/write on 640K boundaries to the RAID
- Readahead/writebehind in powers of 2 for the user requests and file system allocations
- Provide for syncing functions at shutdown or system crash

Adding this feature allowed the MaxStrat RAID to run at full rate for almost any sequential I/O access, reading or writing. It eliminated almost all read-modify-write requests in the RAID and made better use of the very limited RAID cache, which was not a multiple of 640KB.

New File System and New Ideas

Because of the success of the MaxStrat HiPPI driver for Fujitsu, in 1997 Larry and I were asked to develop a new file system for the Fujitsu VPP5000. A number of innovations came from this project, many of which have since appeared in other file systems. Here are some of the features of this file system, which was delivered in March of 1998 — just 7 months after we started:
- Separate data and metadata, with separate caches for each file system and different allocation sizes for various RAID alignments. With this feature you could align data for large RAID allocations with RAID-5 and align metadata for small allocations on RAID-1
- Round-robin allocation for metadata. In cases where customers had a huge number of small files (especially common in some parts of the weather industry), the ability to have multiple metadata slices and different allocation sizes for metadata significantly improved performance for this type of environment
- Variable length allocations up to 128MB (not requiring powers of 2), for RAIDs that did not support power-of-2 stripes
- Metadata size could be up to 1MB. This was done so that, for example, a /usr file system could be made with a metadata allocation equal to the size of the largest command, letting a command be read in without having to read the inode first and then seek separately to its data.
This eliminated one disk read and its associated head seek and missed revolution, significantly improving interactive response
- A new allocation method that used allocation buckets and best fit instead of bitmaps and first fit and/or btrees and first fit. This reduced data fragmentation to almost immeasurable levels
- A new fsck methodology. Given the overhead of logging file systems, in which metadata must be written twice, Larry believed the real requirement was a fast fsck after reboot, not logging. Instead of reading the file system meta blocks one at a time, Larry wrote an fsck that read the entire metadata device(s) and then processed it in memory

During our acceptance testing in 1998 with FW-SCSI RAID, we had to meet a requirement of being able to fsck 1 million files in a file system with 7 directory levels within 20 seconds after a crash. We beat that by eight seconds, surpassing the acceptance criteria. I am unaware of any file system that uses this method even today.

In 2000, Larry worked at SRC, a company developing a hybrid machine using Intel processors and ASICs. In 2003, Larry took his final contract at a startup called Scale8. I am sure both of those companies benefited from the innovation, work, and, most importantly, mentoring that Larry provided to those co-workers just starting out.

There are a number of unsung geniuses who have developed innovative technology we all use today. I have been lucky enough to know a few of them, and the industry just lost one of the best. I was fortunate to know him and even more fortunate to call him my friend. Thanks for reading this. I hope you enjoyed it.
VMware is best known for developing software packages that support virtualization. Using the company’s solutions, your business can adopt a multi-cloud environment. It also works to modernize applications and support security initiatives. You can even create an anywhere workspace for your team. One of the products that will help you accomplish these things is known as vSphere. vSphere is a suite of products. This means you can download and use components of the suite individually. However, many new users of the vSphere lineup may find themselves confused. Since each component serves a different function, it’s crucial to distinguish between them. To help you make the most of vSphere, let’s dive into the two key products and explain how they differ. - What Is vSphere? - What Is a Hypervisor? - What Is ESXi? - What Is vCenter? - How Do All of These Work Within VMware? What Is vSphere? vSphere is a suite of products. This means you cannot download and use vSphere itself. Rather, you can download each of its components. These include ESXi and vCenter Server. Together, ESXi and vCenter Server provide a complete virtualization solution. A virtualization service aims to reduce IT costs by partitioning hardware from software. Partitioning allows for a single server to host hundreds of independent virtual machines. As a result, a company can maintain a minimal hardware footprint. Meanwhile, they can accomplish key tasks, like providing separate computing environments for development purposes. The two components that make up vSphere serve two very different purposes. But, they complement each other excellently. As a quick overview, ESXi is the virtualization component. Meanwhile, vCenter is the server management component. In other words, you can use ESXi to create virtual machines and use vCenter to manage them. Of course, real-world applications are far more complex and beneficial than that. 
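The partitioning idea behind all of this can be pictured with a short sketch. This is a toy model, not VMware code: it simply refuses to place a new virtual machine on a host once the host's physical CPUs or memory would be oversubscribed (real hypervisors such as ESXi also support controlled overcommitment, which this sketch ignores).

```python
# Toy model of hypervisor-style resource partitioning on one host.
class Host:
    def __init__(self, cpus: int, mem_gb: int):
        self.cpus, self.mem_gb = cpus, mem_gb
        self.vms = {}                      # name -> (vcpus, mem_gb)

    def used(self):
        """Return (vCPUs, GB) currently allocated to VMs."""
        cpus = sum(v[0] for v in self.vms.values())
        mem = sum(v[1] for v in self.vms.values())
        return cpus, mem

    def create_vm(self, name: str, vcpus: int, mem_gb: int) -> bool:
        """Admit the VM only if spare physical capacity remains."""
        used_cpu, used_mem = self.used()
        if used_cpu + vcpus > self.cpus or used_mem + mem_gb > self.mem_gb:
            return False
        self.vms[name] = (vcpus, mem_gb)
        return True

host = Host(cpus=64, mem_gb=512)
host.create_vm("linux-dev", 8, 64)        # admitted: fits easily
host.create_vm("windows-build", 16, 128)  # admitted: still within capacity
```

Note that the two example VMs run entirely different guest operating systems on the same physical host, which is the point of the partition.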
If you’re not yet familiar with the concept of a hypervisor, it’s essential to understand this technical term, so here is a definition before going further into ESXi and vCenter’s capabilities.

What Is a Hypervisor?

Hypervisors are also known as virtual machine monitors (VMMs). They allow you to create multiple virtual machines using limited hardware. A hypervisor separates physical resources from virtual resources by creating a partition between the software and hardware on a computer. This allows you to “virtualize” the machine. Hypervisors that are installed directly on the physical hardware of a machine are called “bare-metal” or type 1 hypervisors; that is what ESXi is. Through virtualization, software resources begin acting as hardware resources. This creates an entirely virtual computer system (i.e., a virtual machine). Using a virtualization service like ESXi, you can turn one machine into multiple virtual systems. Each system will then have its own operating system and applications. The primary advantage of virtualization is that you can maximize resources. By splitting up a single server, you can minimize your hardware footprint.

While examining the details of a hypervisor, it’s also worth giving a quick overview of virtual machines alongside containers. Although similar in concept, containers partition off just enough resources to handle a set of specific processes. Meanwhile, each virtual machine (VM) contains a complete operating system. This means a VM is capable of completing many complex processes simultaneously. Both virtual machines and containers may be useful to your company, depending on the specifics of your use case. However, it’s also worth noting that virtual machines can be more than just an operating system. You can create a virtual machine that emulates a desktop, database, network, or entire server. Meanwhile, each VM only uses a set amount of the total physical resources. Here’s another key distinction.
When using a hypervisor to partition a server to create virtual machines, you can run any operating system of your choosing. For instance, one virtual machine might run Linux while another virtual machine on the same server runs Windows. Meanwhile, a container on a Linux server would have to use the Linux OS. With all of these details in mind, let’s look at the ESXi hypervisor, one of the most important components of the vSphere product suite.

What Is ESXi?

Of the many components that make up vSphere, ESXi is easily the most notable. This component is a bare-metal hypervisor, better described as a virtualization service. ESXi sets the bar for support, performance, and reliability, and it’s one of the best virtualization solutions on the market. Whether or not your team has experience with hypervisors, the features and flexibility of ESXi will prove plentiful for most use cases. Here are some of the top reasons to choose vSphere and use ESXi.
- Small and Efficient: As a bare-metal hypervisor, ESXi is installed directly on the physical machine, separating the hardware from the operating systems running on it. A hypervisor embedded this close to the hardware must have a small footprint, and ESXi takes up only 150MB of space on the machine.
- Highly Flexible: Flexibility is paramount when choosing a virtualization service, and ESXi delivers it without compromising on reliability. You won’t run into configuration limits no matter how large the applications you are working with: you can configure multiple machines, each with up to 6 TB of RAM, 128 CPUs, and 120 devices.
- Excellent Security: You can rest assured that all information on your ESXi virtual machines is secure. ESXi uses logging, auditing, role-based access control, and powerful encryption, and there are tools for in-depth analysis when threats arise.
- Intuitive Interface: Administration of your virtual machines is made easier with ESXi’s highly intuitive design.
Developers can use REST-based APIs or tap into the vSphere Command-Line Interface to automate operations. Standing up new virtual machines without taking on a lot of extra hardware is challenging, and ESXi might be the ticket. However, it’s important to consider the type of underlying hardware (e.g., servers) that you need to set up before installing ESXi; this will depend on your company’s technical requirements. If you don’t already have a server available that you plan to use for virtualization, you need to plan ahead and consider the costs of different server configurations along with their technical specifications.

Once a server is picked out, ESXi is relatively easy to set up. If you have a paid vSphere edition, you can use ESXi (ESX) right away, or you can get started with the free version of vSphere Hypervisor. The free version is a good start but is better suited to light use: it does not offer load balancing or official support. If you only want to test out ESXi, the free version will suffice, but most businesses require the premium version, which supports high availability and integration with vCenter. Next is a closer look at how vCenter works to support virtualization efforts.

What Is vCenter?

Putting ESXi aside, another key component of vSphere is vCenter, software that facilitates virtual server management. vCenter provides centralized tools capable of controlling environments throughout your cloud infrastructure. For instance, the servers on which you run ESXi, and any virtual machines you have created, could all be managed using vCenter. vCenter brings all of your virtual infrastructure into a centralized management tool, improving visibility into your organization’s systems and resources. The main purpose of vCenter is to create efficiency: it gives your administrative team a single console to handle day-to-day management and routine operations.
This will save them a great deal of time, especially in larger operations. vCenter also lets you manage the configuration of various components and check on the health status of your environments. You don’t have to use vCenter to manage your ESXi virtual machines, but it is the perfect match to simplify your infrastructure. Using them together, you can operate hundreds of workloads effectively, all while maximizing hardware and minimizing the time involved. The biggest advantages of vCenter include the following.
- Effortless Security: vCenter Server now runs on VMware’s own Photon OS. This is a native solution, so you won’t have to worry about upgrades or patches from third parties.
- Better Visibility: Using the search tool, you can quickly access any asset. You can instantly search for any network, datastore, host, or virtual machine in your environment.
- Highly Extensible: Scale vCenter to meet your needs, even for the largest operations. You can add up to 2,000 hosts and an astonishing 35,000 virtual machines to a single vCenter Server instance.
- Easy Access: With single sign-on (SSO) enabled, your entire team can access your vCenter instances without worrying about difficult authentication processes.
- Notifications: Stay on top of what’s important by creating custom alerts and notifications. When a workflow fails to start, or another issue arises, the appropriate team members can automatically be notified so they can step in to help.
- Fast Deployment: Using vCenter’s handy features, you can automatically capture the network, storage, and security settings of a given host and apply them to other hosts of your choosing with a few clicks. This saves time and ensures consistent performance.

These are far from the only features you can expect when using vCenter Server. With so many features, vCenter is one of the best solutions on the market, and the primary advantage of implementing it is improved efficiency.
By centralizing your expanding virtual infrastructure into one easy-to-use interface, vCenter improves productivity across departments. If you feel that your environments are disjointed or lack transparency, vCenter is the tool you need.

How Do All of These Work Within VMware?

Now you understand the key components that make up the vSphere virtualization platform, so it’s worth detailing how these two components complement and support one another. A few other tools in the suite will help you accomplish your goals as well. Once you have set up ESXi on your servers, you can largely step away from it and go into vCenter for practically all other tasks. Both components were designed to work together flawlessly: you can use vCenter for provisioning resources for new virtual machines, managing infrastructure, and configuring access controls. After you start using vCenter, you’ll find that it is highly intuitive and extremely powerful. vSphere is bound to simplify and create efficiency in your operations, with both components genuinely coming together to form a complete virtualization solution.

You can also set up easy remote access to your server instance. With vSphere Client, you can access your vCenter instance using any Windows operating system, or you can choose vSphere Web Client, a browser-based interface that lets you access the instance from any operating system. The flexibility of vSphere does not stop there. vSphere is one of the more feature-rich virtualization platforms on the market, making it an excellent choice for operations of all sizes. Your developers and administrators will enjoy the thoughtful, intuitive interactions, plus they have the tools to automate routine operations. In addition to features, vSphere is also backed by great support.
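As a concrete taste of the REST-based automation mentioned earlier, the hedged sketch below builds (but does not send) the authentication request for the vSphere Automation API. The host name and credentials are placeholders, and the session endpoint path has changed across releases ("/rest/com/vmware/cis/session" in older versions, "/api/session" from vSphere 7), so check the API reference for your version before relying on it.

```python
# Sketch: constructing the vCenter REST login request with the standard
# library.  Nothing is sent; this only shows the shape of the call.
import base64
import urllib.request

def build_session_request(host: str, user: str, password: str) -> urllib.request.Request:
    # The session endpoint uses HTTP Basic authentication.
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    return urllib.request.Request(
        url=f"https://{host}/api/session",   # vSphere 7+ path (assumption for older setups)
        method="POST",
        headers={"Authorization": f"Basic {token}"},
    )

req = build_session_request("vcenter.example.com",
                            "administrator@vsphere.local", "secret")
# urllib.request.urlopen(req) would return a session token, which later
# calls (e.g. GET /api/vcenter/vm) pass in the vmware-api-session-id header.
```

The placeholder host and credentials would of course be replaced with your own, and production code should also handle TLS certificate validation for the vCenter appliance.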
You know what they say: Good things come in threes. After all, there are the three little pigs, the three French hens and the three ingredients in a BLT. The rule of threes applies to your IT solutions, too. The security protections employed by organizations to protect technological assets fall into three distinct categories: network security, cybersecurity and information security. To the uninitiated, those terms might seem like three ways to describe the same thing, but they’re actually quite different. A high-performing security infrastructure should be multifaceted, which means various devices and systems must work together to address various aspects of your business’ digital security. Here’s what each area encompasses and how they all work together to create the three dimensions of business security. Information is at its most vulnerable when in transit between devices and networks, as it’s at this point that hackers will try to intercept unprotected transfers, steal data, and install malware and viruses. Network security fortifies networks against these kinds of malicious intrusions while implementing access controls so unauthorized users can’t take up your bandwidth. Popular controls under the network security umbrella include firewalls, virtual private networks (VPNs), anti-malware software, and intrusion detection and prevention systems (IDS/IPS). You’re probably already familiar with anti-malware software, but let’s quickly run through the others. Firewalls are software or firmware products that dictate the type of traffic allowed on your network. Using a predetermined set of rules, firewalls examine incoming and outgoing data packets to prevent suspicious activity. A firewall won’t keep you safe when you log into networks from other locations, however—that’s where VPNs come in. VPNs create a secure, private connection when users access networks remotely, thereby ensuring that anyone attempting access via a public Wi-Fi hot spot can’t spy on your data. 
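The "predetermined set of rules" a firewall applies can be sketched in a few lines. This toy example (not any real firewall product; the ports and rules are made up) checks packets against an ordered rule list with a default-deny fallback, which is the posture most firewalls are configured with.

```python
# Toy firewall rule matcher: first matching rule wins, default deny.
RULES = [
    {"direction": "in",  "port": 443, "action": "allow"},   # HTTPS in
    {"direction": "out", "port": 443, "action": "allow"},   # HTTPS out
    {"direction": "in",  "port": 23,  "action": "deny"},    # telnet blocked
]

def check_packet(direction: str, port: int) -> str:
    """Return 'allow' or 'deny' for a packet's direction and port."""
    for rule in RULES:
        if rule["direction"] == direction and rule["port"] == port:
            return rule["action"]
    return "deny"   # anything not explicitly allowed is dropped
```

Real firewalls match on far more than direction and port (source and destination addresses, protocol, connection state), but the first-match-wins evaluation shown here is the common pattern.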
And that leads us to IDS/IPS. In some ways, IPS is very similar to a firewall, albeit a bit more sophisticated. Both follow a set of rules governing when to allow data inside the network, but IPS rules may be much more advanced. For instance, an IPS may look at how current data packets correlate over time. Although IDS is often rolled in with IPS, it actually has its own distinct function. The main job of an IDS device is to monitor network activity and report abnormalities. An IDS examines traffic at many points on the network to detect malware infections, disallowed applications, data leakage and unauthorized clients. Thorough IT solutions will use some combination of these tools to make sure all aspects of network security are addressed. Next in the security trifecta is cybersecurity, which includes the policies, training and tools human users have at their disposal to prevent malicious attacks. For instance, hosting an informational IT support session to raise your team’s awareness of the dangers of phishing emails would be part of your organization’s cybersecurity strategy. Although it’s tempting to think of security as the exclusive domain of machines and the IT support technicians who configure them, you probably have a lot more control over your cybersecurity than you think. In fact, individuals play a huge role in preventing malicious activity—hackers have found that human error is one of the most reliable ways into networks and other assets. Uneducated, unthinking or naive employees are behind some of the largest hacks in history. What’s more, cybercrime is only getting more sophisticated. Today’s hackers use complex social engineering tactics to trick human actors into doing their bidding, whether that involves giving up their network password or divulging intimate details about a business’ inner workings. Educating your team members and keeping them up to date on the latest hacking techniques lets users pick up where technology leaves off. 
Information security is pretty much what it sounds like: The IT solutions used to protect the physical and digital data that resides on your servers, in the cloud and inside devices. A robust information security approach safeguards against improper data access by both external actors and unauthorized users inside your organization. It also maintains data integrity, essentially preventing unsanctioned modifications and deletions. The last function simply ensures that data is available when authorized users need it, and that necessary updates and maintenance are performed in a timely manner. These three areas are often referred to by the abbreviation CIA, which stands for confidentiality, integrity and availability. One of the most popular information controls is encryption, a technique that keeps data under lock and key—literally. Data encoded with encryption techniques can only be viewed with the correct encryption key, which protects it while in transit as well as while sitting on servers and clouds. This is one of the most important steps you can take to guard against ransomware attacks, since stolen encrypted data is not as easily exploited. Other security IT solutions that fall under information security include developing a response plan to react to security events and completing risk management assessments to identify and address technology vulnerabilities. As you can see, it takes more than one layer to build a complete security strategy, and that means you have your work cut out when it comes to securing your company’s tech. Luckily, you’re not alone. Our technology resources center is chock-full of downloadable e-books, infographics and other IT support to quickly get you up to speed and help you address lingering security issues. With MyITpros, ensuring your company’s IT security can be as easy as 1-2-3!
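Of the three CIA properties, integrity is the easiest to illustrate in code. The sketch below is illustrative only and uses Python's standard library: it tags data with an HMAC so that any unsanctioned modification is detected at verification time. HMAC is one common integrity mechanism, chosen here for brevity; it complements, rather than replaces, the encryption discussed above, and the key shown is a placeholder that would normally live in a secrets store.

```python
# Sketch: detecting unsanctioned data modification with an HMAC tag.
import hashlib
import hmac

SECRET_KEY = b"placeholder-key-kept-out-of-source-control"

def tag(data: bytes) -> str:
    """Compute an integrity tag for the data."""
    return hmac.new(SECRET_KEY, data, hashlib.sha256).hexdigest()

def verify(data: bytes, expected_tag: str) -> bool:
    """True only if the data is unchanged since it was tagged."""
    return hmac.compare_digest(tag(data), expected_tag)

record = b"invoice #1042: $250.00"
t = tag(record)
assert verify(record, t)                          # intact data passes
assert not verify(b"invoice #1042: $950.00", t)   # tampering is detected
```

Note the use of `hmac.compare_digest`, which compares tags in constant time so an attacker cannot learn the correct tag byte by byte from timing differences.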
An increase in data breaches, data exposures, and data leaks has been an ongoing trend for the past few years. This is a result of improperly secured networks and applications, and of hackers finding new ways to break into systems. In the first half of 2021, more than 118 million people were impacted by data breaches, exposures, and data leaks. Defining penetration testing scope plays an essential role in avoiding data breaches and securing a company’s data.

Introduction to Penetration Testing

Penetration Testing is a process for finding security flaws or weaknesses within an existing software program or computer network in order to make it more secure. It is often conducted by a third party that is not affiliated with the software company or network provider. The purpose of a Penetration Testing service is to find the vulnerabilities within the IT infrastructure. Penetration testers can perform Vulnerability Assessment and Penetration Testing manually or by using automated software tools that scan the system.

What is Penetration Testing Scope?

Penetration Testing scope is the combined list of everything that a penetration testing team will examine, or has agreed not to examine, in a pentest. Pentesting scope is never a single variable or a sole component; the scope of an engagement is the sum of all variable factors to be tested or excluded. Nor is it limited to just a list of items: it includes the enumeration of all variable factors surrounding the engagement, including its policies and any external factors that may affect the test. Penetration testing scope is different from a test plan, which lists all the items to be tested. An organization’s project manager usually provides its scope.
It’s usually written as a list of variable factors; the enumeration of the scope is usually a paragraph or a set of paragraphs.

Why should you define Pentesting scope?

Scope is an essential part of a successful penetration test. A penetration test is a fantastic way to learn about your organization’s risk posture, but it needs to be appropriately scoped to ensure that the organization gets the most value. If a penetration test is too narrow, the organization may miss an opportunity to better protect itself from an actual attack. If a penetration test is too broad, the organization may waste time and resources that could have been put to better use. Before you begin any penetration test, it is essential to understand why you need to define its scope; this is especially helpful when companies or customers request a test without clearly communicating its objective.

Understanding OOS: Out of Scope

If you’re wondering what exactly “out of scope” means in a penetration testing scope, it’s a term used by many companies to describe a specific area or a set of conditions that a penetration test cannot include. Defining OOS in the pentesting scope is a way for a company to ensure a clear understanding of what to expect from a penetration test and what isn’t going to be included in it. Some common assets that are usually out of scope:
- Production applications
- Support platforms
- Customer-specific subdomains
- Subdomains managed by third parties such as Shopify and Zendesk

How is Vulnerability Assessment different from Penetration Testing?

A vulnerability assessment is a method used to determine a system’s (or network’s) risk of being exploited by a threat source. It is also used to identify weaknesses and vulnerabilities in a system and assess their impact.
Vulnerability assessments are performed to identify and address weaknesses in systems and networks before potential threats can exploit them. Penetration testing, on the other hand, is a security testing technique used to evaluate the security mechanisms in place in a system or network. It tests the strength of the defenses by actively attempting to identify and exploit possible weaknesses, breaching security to determine the extent of possible damage that could be done. Penetration testing usually involves performing a series of actions and analyzing how the system reacts to them, and it is an essential process for assessing risk and improving overall security. Penetration testers can work in several ways: in a real-world environment to identify vulnerabilities, or in a test lab set up to simulate a real-world environment, where vulnerabilities and security breaches can be identified in a controlled manner.

What is inside a Penetration Testing Scope?

The scope document is one of the essential parts of the process. It defines what you are doing: what you are testing for vulnerabilities, what you are not testing, and what you are checking to ensure that the correct procedures are being followed. A Penetration Testing Scope document usually contains the following details:
- Assets in scope
- Assets out of scope
- Vulnerabilities in scope
- Vulnerabilities out of scope

The scope is part of the contract between the customer and the security assessor. It defines what will be tested, in what manner, and in what time frame, and it should be in writing (electronic or paper) so the customer and the security assessor communicate clearly.

How is Penetration Testing performed?

Penetration Testing is performed in 4 steps.
Let’s understand them in depth:

Step 1: Information Gathering and Scoping

Information gathering is the process of collecting data about a target using a variety of methods. The goal is to collect as much data as possible about a given target: its network configuration, operating systems, services, users, and so on. Information gathering is the first step in any penetration testing engagement, and it is also a significant step in incident response engagements. The data gathered here is used in the following phases and helps decide which of them to carry out.

Step 2: Vulnerability Analysis

Vulnerability analysis is a methodology used in the penetration testing process to test for vulnerabilities in a web application. It involves using a variety of tools and techniques to determine the security risks in a system. It is a way of examining a system with a completely open mind and curiosity about what you will find. Vulnerability analysis focuses on the security of a system: whether it is secure enough, given its role and the conditions of its use. You can use vulnerability analysis to help determine what security controls you need to implement.

Step 3: Exploitation

After the vulnerability analysis phase, the team of penetration testers starts gathering public exploits for the vulnerabilities that were found. The exploitation phase is not limited to public exploits; the team tries every possible exploit, including handcrafted ones, to exploit each vulnerability.

Step 4: Reporting and Remediation

Reporting and remediating the vulnerabilities found in the application is always a challenging part. It is important to document each vulnerability and its risk level, along with a detailed explanation of how to fix it. This helps developers understand the severity of each issue and fix it.
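Much of the early information-gathering work can be automated. As a rough illustration (not any vendor's tooling; the host and port list below are placeholders), a basic TCP connect scan in Python looks like this:

```python
import socket

def scan_ports(host, ports, timeout=0.5):
    """Return the subset of `ports` that accept a TCP connection on `host`."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            # connect_ex returns 0 on success instead of raising an exception
            if s.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports

# Placeholder target: only ever scan hosts that are in scope and authorized
print(scan_ports("127.0.0.1", [22, 80, 443]))
```

Real engagements use purpose-built scanners, and any scan must stay strictly within the agreed pentesting scope.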
Check out Astra’s sample VAPT report.

Why should you know about Astra’s pentesting methodology?

When it comes to penetration testing, there are many different types of customers. Some are small businesses that need to protect their websites; some are large enterprises with particular security needs. One of the biggest challenges penetration testers face is how to satisfy different customers with different needs. At Astra, we use our proprietary pentesting methodology to help our clients understand what penetration testing is all about and how it will help them keep their infrastructure secure. Astra’s Pentest solution starts with an initial call, known as the penetration testing scoping call, as part of our onboarding process. The main motive of this call is to understand the customer’s requirements so we can serve them in the best possible way. Still not sure? Check out why you should choose Astra as your VAPT service provider.

The scope is everything. It is what separates security professionals from the wannabes. The penetration testing scope defines how you go about conducting a comprehensive vulnerability assessment and penetration test for your website or network assets. At Astra, we understand the need for a well-defined scope; classifying assets into in-scope and out-of-scope is the first and foremost step.

1. How is penetration testing performed?

Penetration testing consists of four key steps. Security experts use various methods to gather information about the target’s network configuration, operating systems, services, and so on; the pentest scope is defined during this step. In the next step, a combination of tools and techniques is used to test the target for security vulnerabilities. Then the security team tries all possible ways to exploit the vulnerabilities found in the earlier step. Finally, each vulnerability is reported along with recommendations for remediation.

2. How much does penetration testing cost?
The cost of penetration testing ranges between $349 and $1,499 per scan for websites, and between $700 and $4,999 per scan for SaaS or web applications, depending on your requirements. Learn more.

3. How does Astra help with penetration testing?

The security engineers at Astra perform extensive manual pentests on top of machine-learning-driven automated scans. The vulnerability reports appear on your dashboard with detailed remediation guides, and you have access to a team of 2 to 10 security experts to help you with the fixes.
Smart devices, mobile phones and tablets are becoming the control hub for the myriad of connected devices and the Internet of Things (IoT) – as are bring-your-own-device (BYOD) policies in many organizations today. But this is creating unique and heightened security risks for IT professionals, such as higher exposure to data breaches, malware and compliance violations. BYOD is no longer an industry trend; it’s a fact of life for enterprise-wide operations and the IT organizations that must support them.

Safe K-12 schools rely on the adoption of successful risk mitigation and management strategies to create secure and open learning environments. Meeting the moment requires a multi-faceted approach to the convergence of security technology, policy, personnel, and mental health measures. What Every District Leader Should Know About Implementing School Safety and Security is a two-hour digital forum designed specifically for school board members, trustees, superintendents and administrators. In this session you will hear from a panel of renowned and trusted experts in public education advocacy, physical security and mental health to gain an understanding of the best practices and guidelines for:
Creating a smart environment is impossible without a thorough, real-time awareness of conditions and fluctuations within a building, and sensors are a key component in gathering the required real-time intelligence. After the enduring impacts of the global COVID-19 pandemic, commercial building management systems are held to new standards as many companies transition back to the office. For example, occupancy sensing is important for designing smaller, more flexible workspaces for hybrid environments. But beyond occupancy sensors, smart buildings leverage a range of other sensor types that support Heating, Ventilation, and Air Conditioning (HVAC), air quality, maintenance, and flood prevention, to name a few. More qualitative and quantitative data translates to greater control over a building, and to more profitability.

Incentives for Commercial Building Managers to Invest in Smart Technologies

At the end of the day, property owners have one main reason to deploy smart building sensors: monetary gain. Indeed, Return on Investment (ROI) can come from a number of channels. However, other motivations stem from public safety concerns and compliance.

- Energy Savings: Investing in smart building technologies helps identify sources of energy waste. As an example, occupancy sensors can ensure that lights are turned off in unoccupied rooms, spaces, and units.
- Higher Property Value: A property that possesses greater space utilization capabilities and environmental care is more attractive than properties that do not. According to the European Commission (EC) report The Macroeconomic and Other Benefits of Energy Efficiency, smart buildings have about 11.8% greater lease value and up to 35% higher sales value. Moreover, a Massachusetts Institute of Technology (MIT) study concluded that smart building owners collect, on average, 37% higher rent than other property owners.
- More Flexibility: The shift back to in-office work environments also comes with a need for smaller and more adaptable shared spaces. These work conditions require greater control over resources.
- Health and Wellbeing: The COVID-19 pandemic brought the health liability of a building environment to the forefront of corporate issues. As a result, there is a growing need for smart sensors that can measure air quality, an individual’s temperature, and the number of people in a space, among other factors.
- Environmental Reporting: Both commercial and residential buildings face pressure from local and national authorities to track environmental impact. This, of course, requires sensors to be installed in smart buildings as a means to collect data. Using these data, building managers can detect inefficiencies and find ways to improve performance and value. Additionally, Environmental, Social, and Governance (ESG) reporting is seen as a way to increase the value of the property. For example, a smart apartment can pull in more residents because the technologies provide greater convenience and comfort.

Clearly, there are substantial reasons to make a building “smarter.” So, let’s explore several types of sensors that make it all possible and can be used to strengthen building management systems.

Heating and Cooling on Autopilot

People want to feel comfortable, whether it’s in an office building, an apartment unit, a museum, a hotel, or somewhere else. Above all, automating a building’s heating and cooling is key to delivering comfort. HVAC systems are made smarter when they can detect temperature fluctuations in specific rooms or zones and automatically adjust accordingly. Temperature sensor use cases can even extend to detecting hot desks in an office setting, to enhance resource planning and space utilization.

How Do Humidity Sensors Work?

Humidity sensors are key to smart HVAC control, as humidity goes hand-in-hand with air quality management.
The three types of humidity sensors are:
- Capacitive Sensors: Capacitive sensors rely on water vapor and have an electric insulator at the center, surrounded by two electrodes. When vapor reaches the electrodes, a voltage change is recorded.
- Resistive Sensors: While not quite as sensitive as capacitive sensors, resistive sensors operate similarly. The main difference is that resistive sensors use ions in salts, instead of water vapor, to measure a change in humidity.
- Thermal Sensors: A thermal sensor is dual-sensor-based. While one sensor is exposed to the air and measures ambient quality in an area of a building, the other is enclosed in a nitrogen-filled chamber. The difference between the two readings yields an accurate measurement of humidity.

Extending Equipment and System Life Span with Maintenance Sensors

As any building manager will attest, maintaining equipment can be time-consuming and expensive. Fortunately, maintenance sensors, such as Infineon’s Internet of Things (IoT)-enabled XENSIV sensors, can provide predictive maintenance and condition monitoring for an abundance of equipment and systems. Maintenance sensing extends to monitoring smart building components like:
- HVAC equipment

(Chart source: ABI Research)

The Many Technologies Enabling Occupancy Sensing and Presence Control in Smart Buildings

Occupancy sensing and presence control technology shipments will grow at a Compound Annual Growth Rate (CAGR) of 47% between 2019 and 2030, with nearly 80 million shipments expected in 2030. That’s a far cry from the 1 million shipments in 2019. The underlying motivations for adopting these technologies come from occupancy safety limits, energy management, and emerging concerns like air quality and space allocation. Below is a further breakdown of occupancy sensing and presence control technologies for building management.
- High- or Low-Resolution Cameras: Offer more functionality than other sensors and also enable real-time visual confirmation of occupancy status, individual tracking, and object recognition. On the downside, this deployment requires high upfront costs and can be seen as an invasion of privacy.
- Passive Infrared (PIR) Motion Sensors: Usually the least expensive sensor for monitoring movement and occupancy. When the pyroelectric sensor detects heat energy in its Line of Sight (LoS), it triggers an action such as setting an alarm or turning on a light.
- Pressure Sensors: Installed at the threshold of an area, these sensors provide data on foot traffic, occupancy, and where clear entry/exit points exist.
- LiDAR/60 Gigahertz (GHz) Radar/Ultrasonic: Light Detection and Ranging (LiDAR) applications are greatly beneficial for hygiene and social distancing mandates because the technology can scrutinize individuals and objects to a great degree. Moreover, vendors like TDK InvenSense utilize Micro-Electro-Mechanical Systems (MEMS)-based ultrasonic time-of-flight sensing to offer improved accuracy and detail in perceiving slight motions.
- Microphones: On one hand, voice detection is another avenue for occupancy monitoring. On the other hand, it is an unlikely deployment because eavesdropping on conversations introduces a host of privacy concerns.

Avoiding Water Leakage with Flood Sensors

The cost of water damage restoration is US$3.75 per square foot, according to National Flood Services. And that is just for category 1 clean water; for category 2 gray water and category 3 black water, restoration costs are US$4.50 and US$7 per square foot, respectively. Building managers can avoid these pricey repairs by deploying flood sensors, which provide real-time notifications to circumvent costly damage in vulnerable areas or where expensive equipment and machinery are located.
Some of these sensors let users adjust water detection sensitivity to avoid false alarms produced by high humidity levels. Other products can sense when the room temperature approaches the freezing point, which contributes to pipe bursting. That way, facility managers can take the necessary steps to prevent failures before they materialize.

Reducing Operational Costs by Keeping Tabs on the Building’s Electrical Current

Electrical Current Transformer (CT) sensors measure the current flowing through an electrical conductor, sensing the building’s energy flow via the magnetic field around a wire. Electrical CTs take large, often dangerous currents and scale them down into smaller, more manageable outputs that are proportional between the primary and secondary circuits. The four main types of CT sensors are:
- Split Core: Can be opened and molded around a conductor, which makes it well suited for existing setups.
- Hall Effect/Direct-Current (DC): Monitors both Alternating Current (AC) and DC, and can be either open or closed loop.
- Rogowski Coil: Flexible and requires minimal effort to install. It measures AC currents with a thin coil wrapped around the conductor and closed shut.
- Solid Core: Highly accurate and ideal for new installations. These are complete loops that cannot be pried open, preventing debris from negatively affecting the accuracy of a meter.

Smart Building Management Enters a New Stage

The world today emphasizes energy and environmental control more than ever. A wide variety of sensors and enabling technologies can support smart building management by measuring desired criteria in enormous detail. These technologies provide far greater autonomy than in the past, making them pivotal to improving building performance (e.g., climate action, comfort, energy consumption). Vendors can expect fierce competition in this growing market. Technologies like PIR, LiDAR, and ultrasonic will all be popular enablers of building intelligence.
At the same time, many potential customers will be hesitant to adopt brand-new solutions and will wait to see how others implement sensing technologies. While keeping an eye on the future with new innovative products, vendors should also strive for technological breakthroughs with existing building sensors, such as more precise insights, multi-functionality, and easier deployment.
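To illustrate the CT measurement principle described above, the sketch below converts voltage samples taken across a CT's burden resistor into an estimate of the primary current. The turns ratio and burden value here are hypothetical; a real meter uses the figures from the sensor's datasheet.

```python
import math

def primary_current_rms(samples_v, burden_ohms=33.0, turns_ratio=2000):
    """Estimate RMS primary current from voltage samples across a CT burden resistor.

    Hypothetical setup: a 2000:1 CT whose secondary drives a 33-ohm burden;
    `samples_v` should cover a whole number of AC cycles.
    """
    rms_v = math.sqrt(sum(v * v for v in samples_v) / len(samples_v))
    secondary_rms = rms_v / burden_ohms  # Ohm's law across the burden resistor
    return secondary_rms * turns_ratio   # scale back up to the primary side

# 50 full cycles of a 0.33 V-peak sine across the burden -> about 14.1 A primary
samples = [0.33 * math.sin(2 * math.pi * 50 * t / 1000) for t in range(1000)]
print(round(primary_current_rms(samples), 1))
```

This captures the proportionality between primary and secondary circuits that makes CTs safe to meter with low-voltage electronics.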
Email is increasingly an integral part of global life, but business email compromise (BEC) attacks could place these communications at risk. Research by The Radicati Group found that 2.9 billion people worldwide will be using email portals by 2019. Each business user will send 126 messages daily by that time, compared to 122 emails sent and received per user every day in 2015. As email is increasingly used for notifications and interpersonal connections in company and consumer settings, it will be essential to evaluate its security capabilities and protect it appropriately. Receiving spam mail is nothing new, but new threats have taken on a completely new look to fool users into revealing sensitive information or downloading malicious links. BEC attacks in particular have become more popular to target unsuspecting employees. Let’s take a closer look into BEC threats and how hackers have improved this attack method.
According to a study, 91% of cyber-attacks start with an email. Scammers hack email accounts so that they can send messages from a trusted email address in hopes of getting the recipients to take action. Their main goal is to get these email contacts to send money, reveal personal information, or click on a link that installs malware, spyware, or a virus.

• A hacked email account can put you and your email contacts at risk of identity theft and other security and privacy intrusions, affecting finances and reputation.
• Your email account can act as a gateway into other accounts. The hacker can simply click “forgot password” at login and have a password reset link sent to your email inbox, which they now control.
• If a business email account is hacked, it is a data breach, which can lead to revenue loss, a damaged brand image, and more.

HOW TO KNOW IF YOUR EMAIL HAS BEEN HACKED
• Your contacts received suspicious emails from you which you did not send.
• You may have trouble logging into your email account and get an error message that your username or password is incorrect.
• Your sent messages folder may contain strange emails that you did not compose.
• Strange messages appear on your social media accounts.

WHAT TO DO IF YOUR EMAIL ACCOUNT IS HACKED
• Take back control of your account. If you have been locked out of your account, contact your email service provider.
• Change your username and password. Use long, unique, and complex passwords or passphrases for different accounts.
• Change your security questions. Avoid choosing questions with answers that can easily be guessed or found online.
• Turn on two-step verification. Also known as multifactor authentication, this extra security measure will require you to enter your username, password, and a one-time passcode each time you log in.
• Perform a security scan. Run a FULL scan of your computer with your UPDATED anti-malware software - don’t just run a quick scan.
• Warn your contacts. Inform your colleagues, friends, and family in your email contact list that your email has been hacked. Ask them to delete any suspicious messages that come from your account, and advise them not to open attachments, click on links, share any information, or send money.
• Check your email settings. Hackers may have made changes to ensure they have access to your emails even after you have taken back control.
1. Make sure your emails are not being auto-forwarded to someone else.
2. Hackers may also change your “reply to” email address to one that looks similar to yours, so when someone replies to your email, it goes to the hacker’s account instead.
3. Check your email signature to make sure it does not contain any unfamiliar links.
4. Check to make sure the hackers have not turned on an autoresponder, turning your out-of-office notification into a spam machine.
• Beware of phishing scams. Never respond to an unexpected email or website that asks for your personal information or login details, no matter how professional it looks.
• Use up-to-date internet security software. That includes anti-spyware, and do keep it UPDATED! Spyware hides itself on your computer and collects and passes on personal details without your knowledge.
• Use secure and private networks. Never use unsecured or public wi-fi to log in to any of your online accounts.
• Do not use public computers. There is a high chance of spyware on untrusted computers, and they may have keylogging programs which monitor and record what you type.
• Maintain strong passwords. Never share your passwords, and change your passwords every 3 to 6 months. Use a different password for each site or account.
• Beware what you share. Limit the amount of information shared on social media and with the public. Hackers and identity thieves are quick to gather personal information on social media, so be careful and keep personal details private.
• Bookmark your trusted websites. This will prevent you from accidentally landing on the wrong website, where hackers could slip in malicious code or phishing links.
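The one-time passcode used in two-step verification is commonly a time-based one-time password (TOTP). A minimal sketch of the standard RFC 6238 construction (the secret below is a demo value; never hard-code a real one):

```python
import base64, hashlib, hmac, struct, time

def totp(secret_b32, period=30, digits=6, now=None):
    """One-time code from a base32 secret (RFC 6238: HMAC-SHA1, 30-second steps)."""
    key = base64.b32decode(secret_b32.upper())
    counter = int((now if now is not None else time.time()) // period)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                     # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Demo secret only -- never hard-code a real one
print(totp("JBSWY3DPEHPK3PXP"))
```

Because the code is derived from the current time step, it expires within seconds, which is what makes a stolen password alone insufficient.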
Corporate phishing attempts are becoming more and more sophisticated, threatening the integrity of sensitive data. Here is all you need to know to make sure you are well protected!

Phishing remains one of the most frequent methods of cyberattack. It represents one of the main cybersecurity challenges for companies, as the modus operandi for stealing personal information or corporate funds has been perfected over time. In order to properly protect your organization, it is necessary to go beyond the popular image of the phishing email: a poorly written message with grotesque promises. Hackers have been improving their medium and technique, which makes their phishing attempts more and more difficult to spot. It is important to know them in order to train your employees and limit the risks.

The word “phishing” is a portmanteau of “fishing” and “phreak”, which is itself a portmanteau of “phone” and “freak” and historically designates people who elaborated phone calling tricks to avoid paying charges during the twentieth century. The National Cyber Security Centre defines phishing as follows: “Phishing is when attackers attempt to trick users into doing 'the wrong thing', such as clicking a bad link that will download malware, or direct them to a dodgy website. [...] it could be the first step in a targeted attack against your company, where the aim could be something much more specific, like the theft of sensitive data.”

The three components that qualify a cyberattack as “phishing” are:

Protecting yourself from cyber attacks first of all implies knowing how to recognise them. A phishing attempt can take various forms. You could receive a call from your bank, a message on social networks, a text message, or an email. Those messages might seem to come from a recently visited ecommerce site, your phone company, a public service, or your energy company. In any case, the hacker is using logos you already know to gain your trust.
Usually, phishing scams are based on two main types of content:

There are also other trends in phishing emails:

Fortunately, a phishing attempt can be identified through a few criteria which are recurrent in this kind of email:
1. The offer seems too tempting, or the demand too urgent. Public services always leave their users several opportunities to regularise their procedures; there is no need to pay urgently.
2. The subject of the email is rather vague, or is the same as your email address username. On some occasions, the message is not directly addressed to the recipient.
3. Spelling, grammar or syntax errors.
4. Shortened or misspelled links, due to spoofing of legitimate websites. Mouse over the links to check them, without ever clicking on them.
5. Attachments you were not expecting.
6. Any request concerning the confirmation or communication of your personal data and sensitive information.
7. A request issued by a company that is not one of your suppliers, or with which you have no specific interaction.
8. An unusual website address (domain name); always check the address of the organisation that is supposed to be contacting you.

Finally, what is the difference between spam and phishing? The distinction lies in the intent of the senders. Spammers flood you with unwanted advertisements, but without any other harmful consequence. Phishers, on the other hand, use fraudulent techniques to steal sensitive data from you.

In France, a 2020 study conducted by the Experts Club on Digital and Information Security (CESIN) found that phishing is the most frequent vector of the cyber attacks that CISOs at French corporate companies have to deal with. All companies are therefore directly concerned by fraud attempts, be they SMEs or large groups listed on the stock exchange. However, large groups are better prepared for them than SMEs.
The latter tend to think they are less targeted than the former and, as a result, suffer attacks more often. The European Union Agency for Cybersecurity (ENISA) indeed reports that 70% of European SMEs use only basic security controls. Furthermore, phishing constitutes the most common method of cyberattack against these companies (41% of the reported incidents). Additionally, European banking information is often targeted by cybercriminals, and processes introduced by the European PSD2 regulation often constitute a known way in.

Phishing mainly jeopardises the security of a company’s confidential data, as well as its finances. From a legal perspective, phishing falls under different types of offenses:

Regardless of the medium, the phishing process remains the same. It is about conveying a message that relies on social engineering to convince the victim to click on a link or an attachment. The end goal is gaining access to their personal data. This strategy is based on different media, but fortunately you can learn how to recognise the various techniques that might be used. Phishers most often send emails, but it is not their only means of getting to you:

Social media phishing involves hacking into your accounts to send malicious links to your contacts. This method also relies on creating fake user accounts. This type of phishing particularly affects companies for which “social engineering” is key to business. In this case, phishers collect information about the company to better plan their future cyber attacks. There are also phishing attempts through LinkedIn accounts. Hackers send links to their victims, who provide their usernames and passwords. The criminals then use them to take control of the LinkedIn account and send fraudulent messages to their contacts. As detailed above, hackers use different methods to pass as reliable interlocutors.
When they target companies, they use more specific methods:

Among the phishing attacks that everyone has been talking about, one may note that the databases of partner hotels of Booking.com have been particularly affected. The users of this website have all been the target of fraudulent attacks via WhatsApp or text messages. Much has also been said about the phishing attack experienced by Target stores in 2013, which resulted in a data breach affecting 110 million consumers. That event has had a lasting impact on the reputation of this American brand.

The first step in protecting your business from phishing scams is to train your employees on the communication situations they should be wary of. You should also invest in the right anti-phishing software. Once an attack has been carried out, however, there are a few good practices that can help mitigate negative consequences.

Here are some essential preventive measures against phishing attempts that you may want to communicate to your staff:

If the above recommendations lead your employees to spot one or more phishing attempts during their professional activities, here are a few steps you should follow:

In the event of a successful phishing cyber attack, by which hackers have obtained confidential data or money from you:

In the case of bank phishing, will your business be compensated for its financial losses? The courts do not have a universal position on that matter. Possible compensation depends on the appearance of the fraudulent message: considering whether the email looked illegitimate, whether the demanded amount of money was delusional, or whether the request seemed illogical, your behavior could be deemed negligent or not. This is what happened in a legal case between Crédit Mutuel Nord Europe and one of its clients. The local jurisdiction had recognised the fraud, arguing that the hackers had stolen her bank details.
The highest French court (Cour de Cassation) preferred to side with the bank, suspecting the client of gross negligence in handling her bank details.

Phishing is a form of cyber attack that grants the hacker access to confidential data: personal files or banking credentials. The hacker usually resorts to emails, text messages, or phone calls, pretending to be a legitimate interlocutor. If you come across a questionable email, mouse over the URLs to make sure they correspond to legitimate websites. If in doubt, end the communication (text, call, email) and contact the official sender yourself to make sure that the message was indeed sent on their behalf. You can report spam and other phishing emails to Google via certain extensions.
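The advice to mouse over links before clicking can be partly automated. A rough sketch (the shortener list and trusted domains are illustrative, not exhaustive) that flags URLs in a message whose domain is a known shortener or is not on an allow-list:

```python
import re

SHORTENERS = {"bit.ly", "tinyurl.com", "t.co", "goo.gl"}  # illustrative list

def suspicious_urls(text, trusted_domains):
    """Return URLs whose domain is a known shortener or not in `trusted_domains`."""
    flagged = []
    for url in re.findall(r"https?://[^\s\"'<>]+", text):
        domain = url.split("/")[2].lower()  # the host part of scheme://host/path
        if domain in SHORTENERS or domain not in trusted_domains:
            flagged.append(url)
    return flagged

msg = "Please verify your account at https://paypa1-secure.example.com/login now!"
print(suspicious_urls(msg, trusted_domains={"paypal.com"}))
```

A real filter would also normalise lookalike characters and handle subdomains; this only illustrates the hover-and-check idea in code.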
2020 provided a glimpse of just how much AI is beginning to penetrate everyday life. It seems likely that in the next few years we’ll regularly (and unknowingly) see AI-generated text in our social media feeds, advertisements, and news outlets. The implications of AI being used in the real world raise important questions about the ethical use of AI as well.

Copyright by www.insidebigdata.com

So as we look forward to 2021, it is worth taking a moment to look back at the biggest stories in AI over the past year.

GPT-3: AI-Generated Text

Perhaps the biggest splash of 2020 was made by OpenAI’s GPT-3 model. GPT-3 (Generative Pretrained Transformer 3) is an AI capable of understanding and generating text. The abilities of this AI are impressive: early users have coaxed the AI to answer trivia questions, create fiction and poetry, and generate simple webpages from written instructions. Perhaps most impressively, humans cannot distinguish between articles written by GPT-3 and those written by humans. Although GPT-3 is not yet approaching the technological singularity, this model and others like it will prove incredibly useful in the coming years. Companies and individuals can request access to the model outputs through an API (currently in private beta testing). Microsoft now owns the license to GPT-3, and other groups are working to create similar results. I expect we’ll soon see a proliferation of new capabilities related to AIs that understand language.

AlphaFold: Protein Folding

Outside of natural language processing, 2020 also saw important progress in biotechnology. Starting early in the year, we got the rapid and timely advancement of mRNA vaccines; throughout the year, clinical trials proved these to be highly effective. As the year came to a close, another bombshell: DeepMind’s AlphaFold appears to be a giant step forward, this time in the area of protein folding.
This fall, the latest version of AlphaFold competed against other state-of-the-art methods in CASP, a biennial protein structure prediction contest (the Critical Assessment of protein Structure Prediction). In this contest, algorithms were tasked with converting amino acid sequences into protein structures and were judged based on the fraction of amino acid positions the model predicted correctly within a certain margin. In the most challenging Free-Modeling category, AlphaFold was able to predict the structure of unseen proteins with a median score of 88.1. The next closest predictor in this year’s contest scored 32.4. This is an astonishing leap forward. Going forward, scientists can use models like AlphaFold to accelerate their research on disease and genetics. Perhaps at the end of 2021, we’ll be celebrating the technologies that work like this enabled.

Democratizing Deep Learning

As highlighted above, deep learning — the primary method underlying many state-of-the-art AIs — is proving useful in domains as disparate as biology and natural language. Efforts to make deep learning more accessible to domain experts and practitioners are accelerating the adoption of AI in many fields. Anyone with an internet connection can now generate a realistic but completely fake photograph of a human face. Similar technology has already been used to create more realistic — and more difficult to detect — fake social media accounts in disinformation campaigns, including some leading up to the 2020 U.S. election. And OpenAI is planning to make the capabilities of GPT-3 available to vetted users through a comparatively easy-to-use API. There is genuine concern that as deep-learning-enabled technology becomes more accessible, it also becomes easier to weaponize. But pairing AIs with human domain experts can also be leveraged for good. Domain experts can steer the AIs towards impactful, solvable problems and diagnose when the AIs are biased or have reached incorrect conclusions.
The AIs provide the ability to rapidly process enormous volumes of data (sometimes with higher accuracy than humans), making analyses cheaper and faster, and unlocking insights that might otherwise be out of reach. User-friendly tools, APIs, and libraries facilitate the adoption of AI, especially in fields that can leverage already well-established techniques such as image classification. One of the interesting consequences of AI and ML systems becoming more readily accessible has been the resulting shift of priorities in the field of AI Ethics. […]
https://swisscognitive.ch/2021/01/09/biggest-stories-in-ai/
A bug in Mac OS X was uncovered five months ago and is now of renewed concern. It gives hackers almost unlimited access to files when they alter the clock and user timestamp settings. All versions of OS X from 10.7 through 10.8.4 are at risk of being hacked. Ars Technica reports that this bug is now a real concern because of a new testing module in Metasploit, which can make it easier for hackers to exploit Mac OS X vulnerabilities. Metasploit is an open-source framework that makes it easier for security researchers to penetrate and test networks. Although it’s a useful tool for locating and correcting flaws, it can also be used by cybercriminals to exploit a Unix component called sudo. Sudo requires a password before “Superuser” privileges are granted. The flaw is in the authentication process: the clock in Mac OS X can be set back to Jan 1, 1970 (the beginning of time for the machine), which alters the sudo user timestamp settings. As a result, hackers can obtain root access without using a password. A post on the Common Vulnerabilities and Exposures (CVE) page explains further: “sudo 1.6.0 through 1.7.10p6 and sudo 1.8.0 through 1.8.6p6 allows local users or physically-proximate attackers to bypass intended time restrictions and retain privileges without re-authenticating by setting the system clock and sudo user timestamp to the epoch.” The good news is that for hackers to exploit this flaw, they must have administrator privileges and have previously run sudo. They also need physical or remote access to the device. According to HD Moore, the founder of Metasploit: “The bug is significant because it allows any user-level compromise to become root, which in turn exposes things like clear-text passwords from Keychain and makes it possible for the intruder to install a permanent rootkit.”
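The logic of the flaw can be sketched in a few lines of Python. This is a simplified model for illustration only, not the actual sudo source: sudo treats a user timestamp within a grace period of the current clock as valid, so resetting both the clock and the timestamp to the epoch makes a stale credential look fresh.

```python
# Simplified model of the sudo timestamp check (CVE-2013-1775).
# Not real sudo code -- just an illustration of why setting the
# clock and the user timestamp to the epoch bypasses re-authentication.

GRACE_PERIOD = 5 * 60  # sudo's default credential lifetime, in seconds

def needs_password(now: int, user_timestamp: int) -> bool:
    """Return True if sudo should prompt for a password again."""
    return not (0 <= now - user_timestamp <= GRACE_PERIOD)

# Normal case: the credential is a day old, so sudo re-prompts.
assert needs_password(now=86_400, user_timestamp=0)

# The attack: set the system clock AND the timestamp to the epoch
# (Jan 1, 1970). The check passes and no password is required.
assert not needs_password(now=0, user_timestamp=0)
```

The fix in later sudo versions was to stop trusting a monotonically resettable wall clock as the sole freshness check.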
https://integrisit.com/alert-a-mac-bug-is-cause-for-concern/
Maxtrain.com - [email protected] - 513-322-8888 - 866-595-6863

Your training and experience using Microsoft® Office Access® has given you basic database management skills, such as creating tables, designing forms and reports, and building queries. In this course, you will expand your knowledge of relational database design; promote quality input from users; improve database efficiency and promote data integrity; and implement advanced features in tables, queries, forms, and reports. Extending your knowledge of Access will result in a robust, functional database for your users. This course is the second part of a three-course series that covers the skills needed to perform database design and development in Access 2019. In this course, you will optimize an Access 2019 database.

Topic A: Restrict Data Input Through Field Validation
Topic B: Restrict Data Input Through Forms and Record Validation

Topic A: Data Normalization
Topic B: Associate Unrelated Tables
Topic C: Enforce Referential Integrity

Topic A: Create Lookups Within a Table
Topic B: Work with Subdatasheets

Topic A: Create Query Joins
Topic B: Create Subqueries
Topic C: Summarize Data

Topic A: Apply Conditional Formatting
Topic B: Create Tab Pages with Subforms and Other Controls

Topic A: Apply Advanced Formatting to a Report
Topic B: Add a Calculated Field to a Report
Topic C: Control Pagination and Print Quality
Topic D: Add a Chart to a Report

To ensure your success in this course, it is recommended you have completed Microsoft® Office Access® 2019: Part 1 or possess equivalent knowledge. It is also suggested that you have end-user skills with any current version of Windows, including being able to start programs, switch between programs, locate saved files, close programs, and use a browser to access websites.
You can obtain this level of skills and knowledge by taking either of the following Logical Operations courses, or any similar courses in general Microsoft Windows skills: This course is designed for students wishing to gain intermediate-level skills or individuals whose job responsibilities include constructing relational databases and developing tables, queries, forms, and reports in Microsoft Office Access 2019. 1 Day Course
https://maxtrain.com/product/microsoft-access-part-2-data-integrity-and-advanced-forms-queries-and-reports/
The hardware-based random number generators used in most modern IoT devices have a serious fundamental weakness that undermines the security of the encryption keys they generate for communications: The RNGs don’t really generate random numbers. That’s a fairly serious issue when the whole function of the RNG is to generate random numbers, which are then used as seeds for encryption keys. Researchers from Bishop Fox found that the overwhelming majority of the tens of billions of IoT devices in use today have flawed RNGs, a vulnerability that is not limited to a group of vendors or specific IoT operating systems. It’s a widespread problem that security researchers have been discussing in various forms for some time, and there’s no simple way to address it. The root of the problem is the fact that the hardware RNGs in these devices throw errors relatively often, and the software usually doesn’t check those error codes. “When an IoT device requires a random number, it makes a call to the dedicated hardware RNG either through the device’s SDK or increasingly through an IoT operating system. What the function call is named varies, of course, but it takes place in the hardware abstraction layer (HAL). This is an API created by the device manufacturer and is designed so you can more easily interface with the hardware through C code and not need to mess around with setting and checking specific registers unique to the device,” Allan Cecil and Dan Petro of Bishop Fox wrote in a post detailing their research, which they discussed in a talk at DEF CON 29 last weekend. The errors generated by the RNG HAL function can vary, but the important ones in the current context are when the RNG becomes overwhelmed by too many requests for random numbers, which causes those requests to fail at some point. Depending on the implementation, that could result in the use of partial entropy for a random number, a random number of zero, or uninitialized memory. 
Partial entropy can be problematic because it weakens the key strength, and the RNG just choosing zero isn’t ideal either, but the creation of uninitialized memory is much worse. “It can always be worse. No matter how bad you think it is, it can always be worse. What could be worse than encryption keys of zero? Uninitialized memory,” Petro said. A contributing factor to this issue is that IoT devices by design are generally fairly bare-bones when it comes to software and so they don’t have the safety net of an operating system to help handle serious errors. “There’s no OS to smooth over errors you might make. You’re just running C or C++ on bare metal,” Petro said. “As it happens no one actually checks the return error codes on the RNG.” Addressing the weakness in the RNGs on IoT devices will not be either simple or easy, and Cecil and Petro said the most straightforward and effective way to do it would be for designers of IoT operating systems to implement a cryptographically secure RNG (CSPRNG) in their OS. CSPRNG systems differ from RNGs in that they’re not hardware-based and are designed to be resistant to attacks. CSPRNGs also don’t have the failure states that other RNGs do. “When an application needs a cryptographically secure random number on a Linux server, it doesn’t read from the hardware RNG directly or make a call to some HAL function and fight with error codes. No, it just reads from /dev/urandom. This is a cryptographically secure pseudo-random number generator (CSPRNG) subsystem made available to applications as an API. There are similar subsystems on every major operating system, too: Windows, iOS, MacOS, Android, BSD, you name it,” the Bishop Fox post says. “Importantly, calls to /dev/urandom never fail and never block program execution. The CSPRNG subsystem can produce an endless sequence of strong random numbers immediately.
This squarely solves the problem of HAL functions that either block program execution or fail.” Cecil and Petro stressed that the problem is not due to errors by the various developers writing software for IoT devices. It’s a systemic issue with the way these RNGs work. “This is a case of a missing feature and one across a very heterogeneous ecosystem. The IoT world needs a CSPRNG implementation,” Cecil said.
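The CSPRNG interface the researchers recommend is exactly what general-purpose operating systems already expose. In Python, for example (a sketch of the consumer side on a desktop OS, not of an IoT HAL):

```python
import os
import secrets

# Reading from the OS CSPRNG subsystem (backed by /dev/urandom on
# Linux/macOS). Unlike a raw hardware-RNG HAL call, this never blocks
# and never returns an error code that the caller might ignore.
key = os.urandom(32)          # 256 bits of key material
assert len(key) == 32

# The secrets module wraps the same subsystem for common tasks
# like generating tokens and session identifiers.
token = secrets.token_hex(16)
assert len(token) == 32       # 16 bytes -> 32 hex characters
```

The point of the example is the API shape: a CSPRNG call has no failure path for application code to mishandle, which is the property the Bishop Fox researchers argue IoT operating systems need to provide.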
https://duo.com/decipher/fundamental-flaw-in-rngs-affects-many-iot-devices
To be a real “full-stack” data scientist, or what many bloggers and employers call a “unicorn,” you have to master every step of the data science process — all the way from storing your data, to putting your finished product (typically a predictive model) in production. But the bulk of data science training focuses on machine/deep learning techniques; data management knowledge is often treated as an afterthought. Data science students usually learn modeling skills with processed and cleaned data in text files stored on their laptop, ignoring how the data sausage is made. Students often don’t realize that in industry settings, getting the raw data from various sources to be ready for modeling is usually 80% of the work. And because enterprise projects usually involve a massive amount of data that their local machine is not equipped to handle, the entire modeling process often takes place in the cloud, with most of the applications and databases hosted on servers in data centers elsewhere. Even after the student lands a job as a data scientist, data management often becomes something that a separate data engineering team takes care of. As a result, too many data scientists know too little about data storage and infrastructure, often to the detriment of their ability to make the right decisions at their jobs. The goal of this article is to provide a roadmap of what a data scientist in 2019 should know about data management — from types of databases, where and how data is stored and processed, to the current commercial options — so the aspiring “unicorns” could dive deeper on their own, or at least learn enough to sound like one at interviews and cocktail parties.

The Rise of Unstructured Data & Big Data Tools

IBM 305 RAMAC (Source: WikiCommons).

The story of data science is really the story of data storage. In the pre-digital age, data was stored in our heads, on clay tablets, or on paper, which made aggregating and analyzing data extremely time-consuming.
In 1956, IBM introduced the first commercial computer with a magnetic hard drive, the 305 RAMAC. The entire unit required 30 ft x 50 ft of physical space, weighed over a ton, and for $3,200 a month, companies could lease the unit to store up to 5 MB of data. In the 60 years since, the price per gigabyte of DRAM has dropped from a whopping $2.64 billion in 1965 to $4.90 in 2017. Besides being magnitudes cheaper, data storage also became much denser/smaller in size. A disk platter in the 305 RAMAC stored a hundred bits per square inch, compared to over a trillion bits per square inch in a typical disk platter today. This combination of dramatically reduced cost and size in data storage is what makes today’s big data analytics possible. With ultra-low storage costs, building the data science infrastructure to collect and extract insights from huge amounts of data became a profitable approach for businesses. And with the profusion of IoT devices that constantly generate and transmit users’ data, businesses are collecting data on an ever-increasing number of activities, creating a massive amount of high-volume, high-velocity, and high-variety information assets (or the “three Vs of big data”). Most of these activities (e.g. emails, videos, audio, chat messages, social media posts) generate unstructured data, which accounts for almost 80% of total enterprise data today and has grown twice as fast as structured data over the past decade.

125 Exabytes of enterprise data was stored in 2017; 80% was unstructured data. (Source: Credit Suisse).

This massive data growth dramatically transformed the way data is stored and analyzed, as the traditional tools and approaches were not equipped to handle the “three Vs of big data.” New technologies were developed with the ability to handle the ever-increasing volume and variety of data, and at a faster speed and lower cost.
These new tools also have profound effects on how data scientists do their job — allowing them to monetize the massive data volume by performing analytics and building new applications that were not possible before. Below are the major big data management innovations that we think every data scientist should know about.

Relational Databases & NoSQL

Relational Database Management Systems (RDBMS) emerged in the 1970s to store data as tables with rows and columns, using Structured Query Language (SQL) statements to query and maintain the database. A relational database is basically a collection of tables, each with a schema that rigidly defines the attributes and types of data that they store, as well as keys that identify specific columns or rows to facilitate access. The RDBMS landscape was once ruled by Oracle and IBM, but today many open source options, like MySQL, SQLite, and PostgreSQL, are just as popular.

RDBMS ranked by popularity (Source: DB-Engines).

Relational databases found a home in the business world due to some very appealing properties. Data integrity is absolutely paramount in relational databases. RDBMS satisfy the requirements of Atomicity, Consistency, Isolation, and Durability (i.e., they are ACID-compliant) by imposing a number of constraints to ensure that the stored data is reliable and accurate, making them ideal for tracking and storing things like account numbers, orders, and payments. But these constraints come with costly tradeoffs. Because of the schema and type constraints, RDBMS are terrible at storing unstructured or semi-structured data. The rigid schema also makes RDBMS more expensive to set up, maintain, and grow. Setting up an RDBMS requires users to have specific use cases in advance; any changes to the schema are usually difficult and time-consuming. In addition, traditional RDBMS were designed to run on a single computer node, which means their speed is significantly slower when processing large volumes of data.
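The rigid schema and ACID transaction behavior described above are easy to see with Python’s built-in sqlite3 module (SQLite is a minimal single-file RDBMS, used here only for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# The schema rigidly defines attributes, types, and keys up front.
conn.execute("""CREATE TABLE accounts (
    id      INTEGER PRIMARY KEY,
    balance REAL NOT NULL CHECK (balance >= 0))""")

conn.execute("INSERT INTO accounts VALUES (1, 100.0)")
conn.commit()

# Atomicity: if any statement in the transaction fails (here, the
# CHECK constraint), the whole transaction rolls back, keeping the
# stored data consistent.
try:
    with conn:
        conn.execute("UPDATE accounts SET balance = 50 WHERE id = 1")
        conn.execute("UPDATE accounts SET balance = -10 WHERE id = 1")  # violates CHECK
except sqlite3.IntegrityError:
    pass

balance = conn.execute("SELECT balance FROM accounts WHERE id = 1").fetchone()[0]
assert balance == 100.0  # the partial update was rolled back
```

The same constraints that guarantee integrity here are what make a relational schema painful to evolve and ill-suited to unstructured data.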
Sharding RDBMS in order to scale horizontally while maintaining ACID compliance is also extremely challenging. All these attributes make traditional RDBMS ill-equipped to handle modern big data. By the mid-2000s, the existing RDBMS could no longer handle the changing needs and exponential growth of a few very successful online businesses, and many non-relational (or NoSQL) databases were developed as a result (here’s a story on how Facebook dealt with the limitations of MySQL when their data volume started to grow). Without any known solutions at the time, these online businesses invented new approaches and tools to handle the massive amount of unstructured data they collected: Google created GFS, MapReduce, and BigTable; Amazon created DynamoDB; Yahoo created Hadoop; Facebook created Cassandra and Hive; LinkedIn created Kafka. Some of these businesses open-sourced their work; some published research papers detailing their designs, resulting in a proliferation of databases with the new technologies, and NoSQL databases emerged as a major player in the industry.

An explosion of database options since the 2000s. Source: Korflatis et al. (2016).

NoSQL databases are schema-agnostic and provide the flexibility needed to store and manipulate large volumes of unstructured and semi-structured data. Users don’t need to know what types of data will be stored during set-up, and the system can accommodate changes in data types and schema. Designed to distribute data across different nodes, NoSQL databases are generally more horizontally scalable and fault-tolerant. However, these performance benefits also come with a cost — NoSQL databases are not ACID compliant, and data consistency is not guaranteed. They instead provide “eventual consistency”: when old data is getting overwritten, they’d return results that are a little wrong temporarily.
For example, Google’s search engine index can’t overwrite its data while people are simultaneously searching a given term, so it doesn’t give us the most up-to-date results when we search, but it gives us the latest, best answer it can. While this setup won’t work in situations where data consistency is absolutely necessary (such as financial transactions), it’s just fine for tasks that require speed rather than pinpoint accuracy. There are now several different categories of NoSQL, each serving some specific purposes. Key-Value Stores, such as Redis, DynamoDB, and Cosmos DB, store only key-value pairs and provide basic functionality for retrieving the value associated with a known key. They work best with a simple database schema and when speed is important. Wide Column Stores, such as Cassandra, Scylla, and HBase, store data in column families or tables and are built to manage petabytes of data across a massive, distributed system. Document Stores, such as MongoDB and Couchbase, store data in XML or JSON format, with the document name as key and the contents of the document as value. The documents can contain many different value types and can be nested, making them particularly well-suited to manage semi-structured data across distributed systems. Graph Databases, such as Neo4J and Amazon Neptune, represent data as a network of related nodes or objects in order to facilitate data visualizations and graph analytics. Graph databases are particularly useful for analyzing the relationships between heterogeneous data points, such as in fraud prevention or Facebook’s friends graph. MongoDB is currently the most popular NoSQL database and has delivered substantial values for some businesses that have been struggling to handle their unstructured data with the traditional RDBMS approach. 
Here are two industry examples: after MetLife spent years trying to build a centralized customer database on an RDBMS that could handle all its insurance products, someone at an internal hackathon built one with MongoDB within hours, which went to production in 90 days. YouGov, a market research firm that collects 5 gigabits of data an hour, saved 70 percent of the storage capacity it formerly used by migrating from RDBMS to MongoDB.

Data Warehouse, Data Lake, & Data Swamp

As data sources continue to grow, performing data analytics with multiple databases became inefficient and costly. One solution, the Data Warehouse, emerged in the 1980s; it centralizes an enterprise’s data from all of its databases. A Data Warehouse supports the flow of data from operational systems to analytics/decision systems by creating a single repository of data from various sources (both internal and external). In most cases, a Data Warehouse is a relational database that stores processed data that is optimized for gathering business insights. It collects data with predetermined structure and schema coming from transactional systems and business applications, and the data is typically used for operational reporting and analysis. But data that goes into a data warehouse needs to be processed before it is stored, and with today’s massive amounts of unstructured data, that can take significant time and resources. In response, businesses started maintaining Data Lakes in the 2010s, which store all of an enterprise’s structured and unstructured data at any scale. Data Lakes store raw data, and can be set up without having to first define the data structure and schema.
Data Lakes allow users to run analytics without having to move the data to a separate analytics system, enabling businesses to gain insights from new sources of data that were not available for analysis before, for instance by building machine learning models using data from log files, click-streams, social media, and IoT devices. By making all of the enterprise data readily available for analysis, data scientists could answer a new set of business questions, or tackle old questions with new data.

Data Warehouse and Data Lake Comparisons (Source: AWS).

A common challenge with the Data Lake architecture is that without the appropriate data quality and governance framework in place, when terabytes of structured and unstructured data flow into the Data Lakes, it often becomes extremely difficult to sort through their content. The Data Lakes could turn into Data Swamps as the stored data becomes too messy to be usable. Many organizations are now calling for more data governance and metadata management practices to prevent Data Swamps from forming.

Distributed & Parallel Processing: Hadoop, Spark, & MPP

While storage and computing needs grew by leaps and bounds in the last several decades, traditional hardware has not advanced enough to keep up. Enterprise data no longer fits neatly in standard storage, and the computation required to handle most big data analytics tasks might take weeks or months, or might simply be impossible to complete on a standard computer. To overcome this deficiency, many new technologies have evolved to include multiple computers working together, distributing the database to thousands of commodity servers. When a network of computers is connected and works together to accomplish the same task, the computers form a cluster. A cluster can be thought of as a single computer, but one that can dramatically improve the performance, availability, and scalability over a single, more powerful machine, at a lower cost by using commodity hardware.
Apache Hadoop is an example of a distributed data infrastructure that leverages clusters to store and process massive amounts of data, and it is what enables the Data Lake architecture.

Evolution of database technologies (Source: Business Analytic 3.0).

When you think Hadoop, think “distribution.” Hadoop consists of three main components: Hadoop Distributed File System (HDFS), a way to store and keep track of your data across multiple (distributed) physical hard drives; MapReduce, a framework for processing data across distributed processors; and Yet Another Resource Negotiator (YARN), a cluster management framework that orchestrates the distribution of things such as CPU usage, memory, and network bandwidth allocation across distributed computers. Hadoop’s processing layer is an especially notable innovation: MapReduce is a two-step computational approach for processing large (multi-terabyte or greater) data sets distributed across large clusters of commodity hardware in a reliable, fault-tolerant way. The first step is to distribute your data across multiple computers (Map), with each performing a computation on its slice of the data in parallel. The next step is to combine those results in a pair-wise manner (Reduce). Google published a paper on MapReduce in 2004, which got picked up by Yahoo programmers who implemented it in the open-source Apache environment in 2006, providing every business the capability to store an unprecedented volume of data using commodity hardware. Even though there are many open-source implementations of the idea, the Google brand name MapReduce has stuck around, kind of like Jacuzzi or Kleenex. Hadoop is built for iterative computations, scanning massive amounts of data in a single operation from disk, distributing the processing across multiple nodes, and storing the results back on disk.
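The two-step Map/Reduce pattern can be sketched in plain Python (a single-machine toy; Hadoop’s value is running this same shape of computation across thousands of disks and CPUs):

```python
from collections import Counter
from functools import reduce

# Each chunk stands in for a slice of the data living on one node.
chunks = ["the quick brown fox", "the lazy dog", "the fox"]

# Map: each node counts the words in its own slice, in parallel.
mapped = [Counter(chunk.split()) for chunk in chunks]

# Reduce: combine the partial counts pair-wise into the final answer.
totals = reduce(lambda a, b: a + b, mapped)

assert totals["the"] == 3
assert totals["fox"] == 2
```

The classic word count is the canonical MapReduce example precisely because both steps are trivially parallel: no map task needs to see another task’s slice of the data.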
Querying zettabytes of indexed data that would take 4 hours to run in a traditional data warehouse environment could be completed in 10–12 seconds with Hadoop and HBase. Hadoop is typically used to generate complex analytics models or high volume data storage applications such as retrospective and predictive analytics, machine learning and pattern matching, customer segmentation and churn analysis, and active archives. But MapReduce processes data in batches and is therefore not suitable for processing real-time data. Apache Spark was built in 2012 to fill that gap. Spark is a parallel data processing tool that is optimized for speed and efficiency by processing data in-memory. It operates under the same MapReduce principle but runs much faster by completing most of the computation in memory and only writing to disk when memory is full or the computation is complete. This in-memory computation allows Spark to “run programs up to 100x faster than Hadoop MapReduce in memory, or 10x faster on disk.” However, when the data set is so large that insufficient RAM becomes an issue (usually hundreds of gigabytes or more), Hadoop MapReduce might outperform Spark. Spark also has an extensive set of data analytics libraries covering a wide range of functions: Spark SQL for SQL and structured data; MLlib for machine learning; Spark Streaming for stream processing; and GraphX for graph analytics. Since Spark’s focus is on computation, it does not come with its own storage system and instead runs on a variety of storage systems such as Amazon S3, Azure Storage, and Hadoop’s HDFS.

In an MPP system, all the nodes are interconnected, and data can be exchanged across the network (Source: IBM).

Hadoop and Spark are not the only technologies that leverage clusters to process large volumes of data. Another popular computational approach to distributed query processing is called Massively Parallel Processing (MPP).
Similar to MapReduce, MPP distributes data processing across multiple nodes, and the nodes process the data in parallel for faster speed. But unlike Hadoop, MPP is used in RDBMS and utilizes a “share-nothing” architecture — each node processes its own slice of the data with multi-core processors, making them many times faster than traditional RDBMS. Some MPP databases, like Pivotal Greenplum, have mature machine learning libraries that allow for in-database analytics. However, as with traditional RDBMS, most MPP databases do not support unstructured data, and even structured data will require some processing to fit the MPP infrastructure; therefore, it takes additional time and resources to set up the data pipeline for an MPP database. Since MPP databases are ACID-compliant and deliver much faster speed than traditional RDBMS, they are usually employed in high-end enterprise data warehousing solutions such as Amazon Redshift, Pivotal Greenplum, and Snowflake. As an industry example, the New York Stock Exchange receives four to five terabytes of data daily and conducts complex analytics, market surveillance, capacity planning and monitoring. The company had been using a traditional database that couldn’t handle the workload, which took hours to load and had poor query speed. Moving to an MPP database reduced their daily analysis run time by eight hours. Another innovation that completely transformed enterprise big data analytics capabilities is the rise of cloud services. In the bad old days before cloud services were available, businesses had to buy on-premises data storage and analytics solutions from software and hardware vendors, usually paying upfront perpetual software license fees and annual hardware maintenance and service fees. On top of those are the costs of power, cooling, security, disaster protection, IT staff, etc, for building and maintaining the on-premises infrastructure. 
Even when it was technically possible to store and process big data, most businesses found it cost-prohibitive to do so at scale. Scaling with on-premises infrastructure also requires an extensive design and procurement process, which takes a long time to implement and requires substantial upfront capital. Many potentially valuable data collection and analytics possibilities were ignored as a result.

“As a Service” providers: e.g. Infrastructure as a Service (IaaS) and Storage as a Service (STaaS) (Source: IMELGRAT.ME).

The on-premises model began to lose market share quickly when cloud services were introduced in the late 2000s — the global cloud services market has been growing 15% annually in the past decade. Cloud service platforms provide subscriptions to a variety of services (from virtual computing to storage infrastructure to databases), delivered over the internet on a pay-as-you-go basis, offering customers rapid access to flexible and low-cost storage and virtual computing resources. Cloud service providers are responsible for all of their hardware and software purchases and maintenance, and usually have a vast network of servers and support staff to provide reliable services. Many businesses discovered that they could significantly reduce costs and improve operational efficiencies with cloud services, and are able to develop and productionize their products more quickly with the out-of-the-box cloud resources and their built-in scalability. By removing the upfront costs and time commitment to build on-premises infrastructure, cloud services also lower the barriers to adopt big data tools and effectively democratized big data analytics for small and mid-size businesses. There are several cloud services models, with public clouds being the most common. In a public cloud, all hardware, software, and other supporting infrastructure are owned and managed by the cloud service provider.
Customers share the cloud infrastructure with other “cloud tenants” and access their services through a web browser. A private cloud is often used by organizations with special security needs such as government agencies and financial institutions. In a private cloud, the services and infrastructure are dedicated solely to one organization and are maintained on a private network. The private cloud can be on-premises or hosted by a third-party service provider elsewhere. Hybrid clouds combine private clouds with public clouds, allowing organizations to reap the advantages of both. In a hybrid cloud, data and applications can move between private and public clouds for greater flexibility: e.g., the public cloud could be used for high-volume, lower-security data, and the private cloud for sensitive, business-critical data like financial reporting. The multi-cloud model involves multiple cloud platforms, and each delivers a specific application service. A multi-cloud can be a combination of public, private, and hybrid clouds to achieve the organization’s goals. Organizations often choose multi-cloud to suit their particular business, locations, and timing needs, and to avoid vendor lock-in.

Case Study: Building the End-to-End Data Science Infrastructure for a Recommendation App Startup

Machine learning packages for different types of data environments (Source: Kosyakov (2016)).

Building out a viable data science product involves much more than just building a machine learning model with scikit-learn, pickling it, and loading it on a server. It requires an understanding of how all the parts of the enterprise’s ecosystem work together, starting with where/how the data flows into the data team, the environment where the data is processed/transformed, the enterprise’s conventions for visualizing/presenting data, and how the model output will be converted into input for some other enterprise applications.
The main goals involve building a process that will be easy to maintain, where models can be iterated on and the performance is reproducible, and the model’s output can be easily understood and visualized for other stakeholders so that they may make better-informed business decisions. Achieving those goals requires selecting the right tools, as well as an understanding of what others in the industry are doing and the best practices. Let’s illustrate with a scenario: suppose you just got hired as the lead data scientist for a vacation recommendation app startup that is expected to collect hundreds of gigabytes of both structured (customer profiles, temperatures, prices, and transaction records) and unstructured (customers’ posts/comments and image files) data from users daily. Your predictive models would need to be retrained with new data weekly and make recommendations instantaneously on demand. Since you expect your app to be a huge hit, your data collection, storage, and analytics capacity would have to be extremely scalable. How would you design your data science process and productionize your models? What are the tools that you’d need to get the job done? Since this is a startup and you are the lead — and perhaps the only — data scientist, it’s on you to make these decisions. First, you’d have to figure out how to set up the data pipeline that takes in the raw data from data sources, processes the data, and feeds the processed data to databases. 
The ideal data pipeline has low event latency (ability to query data as soon as it’s been collected); scalability (able to handle massive amounts of data as your product scales); interactive querying (support both batch queries and smaller interactive queries that allow data scientists to explore the tables and schemas); versioning (ability to make changes to the pipeline without bringing down the pipeline and losing data); monitoring (the pipeline should generate alerts when data stops coming in); and testing (ability to test the pipeline without interruptions). Perhaps most importantly, it had better not interfere with daily business operations — e.g., heads will roll if the new model you’re testing causes your operational database to grind to a halt. Building and maintaining the data pipeline is usually the responsibility of a data engineer (for more details, this article has an excellent overview on building the data pipeline for startups), but a data scientist should at least be familiar with the process, its limitations, and the tools needed to access the processed data for analysis.

Next, you’d have to decide if you want to set up on-premises infrastructure or use cloud services. For a startup, the top priority is to scale data collection without scaling operational resources. As mentioned earlier, on-premises infrastructure requires huge upfront and maintenance costs, so cloud services tend to be a better option for startups. Cloud services allow scaling to match demand and require minimal maintenance efforts so that your small team of staff could focus on the product and analytics instead of infrastructure management.

Examples of vendors that provide Hadoop-based solutions (Source: WikiCommons).

In order to choose a cloud service provider, you’d have to first establish the data that you’d need for analytics, and the databases and analytics infrastructure most suitable for those data types.
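Returning to the pipeline properties listed earlier: two of them, monitoring and testability, can be sketched in a few lines of Python. This is a deliberately minimal illustration, not a production design; the class and method names are invented for the example, and the clock is injected so the staleness check can be exercised without waiting in real time.

```python
import time

class MonitoredPipeline:
    """Toy pipeline: ingest raw events, transform them, and flag silence."""

    def __init__(self, max_silence_seconds, clock=time.time):
        self.max_silence = max_silence_seconds
        self.clock = clock                  # injected so tests can fake time
        self.last_event_at = clock()
        self.store = []

    def ingest(self, raw_event):
        self.last_event_at = self.clock()   # record arrival for monitoring
        self.store.append(self.transform(raw_event))

    @staticmethod
    def transform(raw_event):
        # Stand-in for real processing: normalize keys to lowercase.
        return {k.lower(): v for k, v in raw_event.items()}

    def is_stalled(self):
        """Monitoring hook: True if no data arrived within the window."""
        return self.clock() - self.last_event_at > self.max_silence

pipe = MonitoredPipeline(max_silence_seconds=600)
pipe.ingest({"Destination": "Lisbon", "Price": 120})
print(pipe.store)  # [{'destination': 'Lisbon', 'price': 120}]
```

A real deployment would wire `is_stalled()` into an alerting system and replace `store` with a database writer, but the shape of the check is the same.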
Since there would be both structured and unstructured data in your analytics pipeline, you might want to set up both a Data Warehouse and a Data Lake. An important thing to consider for data scientists is whether the storage layer supports the big data tools that are needed to build the models and if the database provides effective in-database analytics. For example, some ML libraries such as Spark’s MLlib cannot be used effectively with databases as the main interface for data — the data would have to be unloaded from the database before it can be operated on, which could be extremely time-consuming as data volume grows and might become a bottleneck when you have to retrain your models regularly (thus causing another “heads-rolling” situation). For data science in the cloud, most cloud providers are working hard to develop their native machine learning capabilities that allow data scientists to build and deploy machine learning models easily with data stored in their own platform (Amazon has SageMaker, Google has BigQuery ML, Microsoft has Azure Machine Learning). But the toolsets are still developing and often incomplete: for example, BigQuery ML currently only supports linear regression, binary and multiclass logistic regression, K-means clustering, and TensorFlow model importing. If you decide to use these tools, you’d have to test their capabilities thoroughly to make sure they do what you need them to do. Another major thing to consider when choosing a cloud provider is vendor lock-in. If you choose a proprietary cloud database solution, you most likely won’t be able to access the software or the data in your local environment, and switching vendors would require migrating to a different database, which could be costly. One way to address this problem is to choose vendors that support open-source technologies (here’s Netflix explaining why they use open-source software).
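Once the data is in hand, the retrain-and-serve cycle itself should be cheap. Below is a toy stand-in for the pickle-based workflow mentioned earlier: the “model” is just per-destination mean prices rather than a real scikit-learn estimator, and all names are invented for the example, but the retrain, serialize, and load-for-serving steps have the same shape.

```python
import pickle
import tempfile

class MeanPriceModel:
    """Toy model: predicts a destination's price as the mean of observed prices."""

    def fit(self, prices_by_destination):
        self.means = {dest: sum(prices) / len(prices)
                      for dest, prices in prices_by_destination.items()}
        return self

    def predict(self, destination):
        return self.means[destination]

def retrain_and_publish(fresh_data, path):
    model = MeanPriceModel().fit(fresh_data)  # the weekly retrain step
    with open(path, "wb") as f:
        pickle.dump(model, f)                 # artifact for the serving layer
    return path

def load_for_serving(path):
    with open(path, "rb") as f:
        return pickle.load(f)

artifact = tempfile.NamedTemporaryFile(suffix=".pkl", delete=False).name
retrain_and_publish({"lisbon": [80, 120], "kyoto": [200]}, artifact)
print(load_for_serving(artifact).predict("lisbon"))  # 100.0
```

The expensive part in practice is everything upstream of `fit`: if the training data must first be unloaded from the database, that export, not the model code, is what dominates the weekly retrain.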
Another advantage of using open source technologies is that they tend to attract a larger community of users, meaning it’d be easier for you to hire someone who has the experience and skills to work within your infrastructure. Another way to address the problem is to choose third-party vendors (such as Pivotal Greenplum and Snowflake) that provide cloud database solutions using other major cloud providers as storage backend, which also allows you to store your data in multiple clouds if that fits your startup’s needs. Finally, since you expect the company to grow, you’d have to put in place a robust cloud management practice to secure your cloud and prevent data loss and leakages — such as managing data access and securing interfaces and APIs. You’d also want to implement data governance best practices to maintain data quality and ensure your Data Lake won’t turn into a Data Swamp. As you can see, there’s so much more in an enterprise data science project than tuning the hyperparameters in your machine learning models! We hope this high-level overview has gotten you excited to learn more about data management, and maybe pick up a few things to impress the data engineers at the water cooler. Co-author: Robert Bennett
What is a Functional Approach?

Organizations (companies) are complex systems that must somehow be decomposed in order to be manageable. A common way to decompose a company is to divide it hierarchically into functional departments (e.g. sales and production). Such an approach is “functional” (Figure 1). In the case of a functional approach, a company is actually hierarchically divided into “sub-companies”, each performing a specific function (e.g. sales and production). This offers several benefits, since it divides a big system into smaller systems that are specialized and easier to manage (since they are less complex). The major drawback of a functional approach is that a company needs to perform as a whole when producing a specific outcome, which means that different functional departments have to communicate and collaborate in an efficient and effective way. However, since each organizational department is usually managed vertically (top-down), responsibilities will be non-transparently divided amongst separate functional units. Consequently, problems that occur at the interfaces between departments are often given less priority than the short-term goals of the departments. This leads to little or no improvement for the customer, as actions are usually focused on the departmental functions, rather than overall benefit to the organization. In addition, end customers and their requirements are not always visible to all departments (i.e. sales has contact with customers where production does not).

What is a Process Approach?

In contrast to a functional approach, a process approach does not divide a company ‘top-down’ into smaller units, but defines the ways (i.e. processes) in which particular services or products are developed. This means that a process approach interrelates different organizational functions to produce a specific outcome.
Graphically, a process approach is most commonly represented as a horizontal cross-section of organizational functions (Figure 2). Each organization runs many processes, which are commonly divided into managerial, production and supporting processes. The application of a system of organizational processes, together with the identification and interactions of these processes and their management, can be referred to as a “process approach”. The processes are managed as a “system”, by creating and understanding a network of processes and their interactions. The consistent operation of this network is commonly referred to as a “system approach” to management. A process approach is a common way of improving the performance of an organization.

What Benefits Does a Process Approach Provide?

A process approach offers several benefits when compared to the traditional, functional approach:
- It focuses on integrating, aligning and linking processes and organizational functions effectively to achieve planned goals and objectives.
- It allows an organization to focus on improving its effectiveness and efficiency by focusing on end-products and customers.
- It enables and facilitates consistent performance through well-defined workflows, which in turn provide assurance to customers about the organization’s quality and capability.
- It promotes the smooth and transparent flow of operations and information within the organization.
- It treats processes as valuable assets and focuses on continual improvement of process execution and process outcomes.
- It contributes to lower costs and shorter cycle times, through continual improvement and the effective use of resources.
- It facilitates the involvement and empowerment of people, the clarification of their responsibilities and minimizes the risk of potential conflicts.
The Role of BPMN in a Process Approach

BPMN plays a central role in a process approach, because it enables us to visually represent business processes in a standardized way. First, it links different types of organizational assets into a flow of activities that fulfill a common objective – usually a product or a service for a customer. Secondly, many BPMN tools support the simulation and analysis of processes from BPMN diagrams, which can be used to improve processes and organizational performance. Finally, BPMN is executable, meaning that defined business processes can be executed on process engines, enabling the automation of organizational processes.

This article explained the term “process approach” and compared it to the more traditional, ‘functional’ approach for organizational management. Both approaches offer several benefits for organizational management. Whilst a process approach is more customer- and product-oriented, a functional approach is based on the well-proven principle of “divide and conquer” used to decompose complex systems (i.e. organizations, companies) into simpler ones. An organization can decide to implement a functional approach, a process approach, or a combination of both.
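The idea of an executable process can be made concrete with a drastically simplified sketch of what a process engine does: follow sequence flows from a start event, executing each activity in turn. Real BPMN engines consume BPMN 2.0 XML and support gateways, events, and parallel branches; the dictionary-based definition and the order-handling process below are purely illustrative.

```python
def run_process(definition, handlers, context):
    """Tiny 'engine': walk the sequence flows, running each activity's handler."""
    node = definition["start"]
    while node != "end":
        context = handlers[node](context)   # execute the activity
        node = definition["flows"][node]    # follow the sequence flow
    return context

# A hypothetical three-activity process: receive an order, check stock, ship.
order_process = {
    "start": "receive_order",
    "flows": {"receive_order": "check_stock", "check_stock": "ship", "ship": "end"},
}

handlers = {
    "receive_order": lambda ctx: {**ctx, "received": True},
    "check_stock":   lambda ctx: {**ctx, "in_stock": ctx["qty"] <= 10},
    "ship":          lambda ctx: {**ctx, "shipped": ctx["in_stock"]},
}

print(run_process(order_process, handlers, {"qty": 3}))
# {'qty': 3, 'received': True, 'in_stock': True, 'shipped': True}
```

The point of the sketch is the separation it shows: the process definition (who follows whom) is data that analysts can model and change, while the handlers carry the implementation, which is exactly the division of labor BPMN enables between process designers and developers.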
The official answer is no one, but it is a half-truth that few swallow. If all nations are equal online, the US is more equal than others. Not that it is an easy issue to define. The internet is, essentially, a group of protocols by which computers communicate, and innumerable servers and cables, most of which are in private hands. However, in terms of influence, the overwhelming balance of power lies with the Internet Corporation for Assigned Names and Numbers, based in Marina Del Rey, California. ICANN is a not-for-profit organisation that regulates online addresses, known as domain names, and their suffixes, such as ".com" and ".org". Since ICANN reports to the US government's Department of Commerce, the domain name process is effectively overseen by the US government. China, Russia and Europe have all expressed concern at this situation because it means the US has leverage over the global coordination of the internet. "It has a role that is different from the role of all other governments," says Massimiliano Minisci, a regional manager at ICANN. "That's a concern around the world." It is not hard to see why. Take, for instance, a scenario in which a country wants to change certain aspects of its domain names. Any changes carried out at the "top" level - adding new country-level suffixes, for example - have to be checked by the US Department of Commerce, which verifies that proper procedure has been followed. Once that check has been done, the actual implementation of the change is carried out by Verisign, a US-based private company that manages the root name database, which contains the full official list of recognised suffixes. "The US government could block a modification to this database," Minisci says. "So the US can, theoretically, decide who is on the internet and who isn't." This unsatisfactory situation will be up for discussion by the parties involved in September 2009. So what can be done? One drastic option would be to break the internet into chunks. 
A more realistic idea is for more governments to get involved with ICANN, while the most straightforward option would be for the US government to release its grip on ICANN. Milton Mueller, an internet governance expert at Syracuse University in New York, considers this outcome unlikely. "No one wants to let go. The thinking is that, at the crudest level, it's something under our control, so why mess with it?"
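Minisci's point about the root database can be made concrete with a toy model. This is not how DNS resolvers actually work internally, and the entries below are only illustrative; it simply shows that resolving any name ultimately bottoms out in the root's list of recognised suffixes, so whoever controls modifications to that list controls what can exist online.

```python
# Hypothetical miniature of the root name database: suffix -> a name server
# said to be authoritative for it (entries are illustrative only).
root_zone = {"com": "a.gtld-servers.net", "uk": "dns1.nic.uk"}

def resolvable(domain, root=root_zone):
    """A name can only be resolved if its suffix appears in the root."""
    suffix = domain.rsplit(".", 1)[-1]
    return suffix in root

print(resolvable("example.com"))  # True
print(resolvable("example.zz"))   # False: a suffix never added to the root
```

Blocking a modification to the real root database has the same effect as the missing "zz" entry here: everything under that suffix is simply unreachable by ordinary resolution.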
The International Traffic in Arms Regulations (ITAR) and the Export Administration Regulations (EAR) are two important United States export control laws that affect the manufacturing, sales and distribution of technology. The legislation seeks to control access to specific types of technology and the associated data. In simpler words, it means two things: 1. US intellectual property has to be checked if it can be shared with non-US citizens. 2. Arms regulation requires highly sensitive state information to be protected from access by other countries’ intelligence services.
Safety First Mindset

Safety is not a concern to be put aside or addressed merely for compliance. Safety in data centers should be a continuous commitment, transparent, and an integral part of day-to-day operations. A data center must prioritize creating a culture where safety is always at the top of the list. Due to the high power availability in a data center, the safety leader should have an electrical engineering background. This leader should be able to:
- Establish And Implement Safety and Health Policies
- Develop Training Programs
- Investigate and Acquire High-Quality Safety Equipment
- Conduct Periodic Audits And Inspections
- Continuously Monitor And Actively Comply With Safety Standards

Understand The Relevant Standards

Since a data center is mostly run by IT professionals rather than electrical engineers, the IT administrators may not always be familiar with the dangers posed by certain high-power electrical systems in the data center. As a result, data center operators must be aware of and follow the important safety standards for facilities allowing workers to work on electrical equipment. The relevant standards are:
− NFPA 70 edition 2002: National Electrical Code.
− NFPA 70E edition 2000: Standard for Electrical Safety Requirements for Employee Workplaces.
− IEEE Standard 1584-2002: Guide for Performing Arc Flash Hazard Calculations.
− OSHA 29 Code of Federal Regulations (CFR) Part 1910 Subpart S.
Among other things, these standards require operators to:
− Provide and be able to demonstrate a safety program with defined responsibilities.
− Calculate the degree of arc flash hazard.
− Use correct personal protective equipment (PPE) for workers.
− Train workers on the hazards of arc flash.
− Use appropriate tools for safe working.
− Provide warning labels on equipment.

Two portions of the OSHA standard, commonly referred to as lockout/tagout procedures, should be of special relevance to data center operators.
OSHA 29 Code of Federal Regulations Part 1910.147, The Control of Hazardous Energy (Lockout/Tagout), establishes explicit principles and procedures for disabling machinery or equipment to prevent the release of hazardous energy while employees perform service and maintenance activities. The standard covers the control of hazardous energies from electrical, mechanical, hydraulic, pneumatic, chemical, thermal, and other energy sources. In addition, OSHA’s Title 29 Code of Federal Regulations (CFR) Part 1910.147 lays out the rules for employees who work with electric circuits and equipment. Workers must follow safe work practices, such as lockout and tag-out procedures, according to this provision. These restrictions apply when personnel are working on, near, or with conductors or systems that use electric energy and are exposed to electrical risks. These criteria must be understood and followed by data center operators. Compliance with these rules, according to OSHA, prevents 120 deaths and 50,000 injuries each year.

Observe Safe Working Practices

Safe working practices should begin even before the data center is built. An arc flash hazard study for the facility’s electrical distribution system is a key aspect of complying with electrical safety requirements. This is an engineering study usually composed of:
- Short Circuit Analysis
- Protective Device Time-Current Coordination Study
- Arc Flash Hazard Analysis

This study estimates the potential energy that would be released in the event of an arcing fault inside the equipment at each location in the system. The analysis can be used to estimate the level of hazard and the proper personal protective equipment (PPE) that personnel should wear.

Have The Right Personal Protective Equipment (PPE) For The Job

The last line of defense against arc flash injury or death is personal protective equipment (PPE). This might include the following:
- Flame-Resistant (FR) Clothes.
- Voltage-Rated Gloves.
- Hard Hats With Full Face Shields.
- Full-Coverage Flash Suits.
- Insulated Blankets.

PPE is required whenever a worker crosses the flash protection boundary, although the type and amount of PPE required varies depending on the hazard. The proper level of PPE is determined by the risk that is expected. A worker who wears insufficient PPE is at risk of serious injury or death. Too much PPE can be cumbersome and hinder vision and movement, lengthening work time and increasing the risk of an accident. In all circumstances, NFPA 70E Table 130.7(C)(1)(a) describes the proper PPE to employ.

Use Proper Warning Labels

All equipment must be labeled with an appropriate warning sign. It is the responsibility of the data center operator, not the manufacturer, supplier, or installer, to label the equipment. For maximum protection, arc flash warning labels should provide additional information beyond what the law requires. A sticker that reads only “Danger: High Voltage” complies with regulations, but it is too broad and likely to be overlooked. Workers will have the information they need to make informed decisions about safety procedures if more information is provided on the sticker. Many companies opt to label equipment with the flash hazard values found during the analysis. Current standards do not require these values to be labeled on equipment; however, it is a good practice for workers’ safety.

Safe Working Area Compliance

When an individual works on or near exposed energized components in a data center, arc flash boundaries are necessary around switchboards, panelboards, industrial control panels, motor control centers, and similar equipment. Examining, adjusting, repairing, maintaining, or troubleshooting equipment are all activities that are subject to arc flash boundaries. Specific boundaries include:
- Flash Protection Boundary. The distance within which a worker could incur a second-degree burn if an electric arc flash occurred.
Workers entering the flash protection boundary must wear the proper personal protective equipment (PPE).
- Limited Approach Boundary. A distance from an exposed live part within which a shock hazard exists. A person who crosses this line must be qualified to do the job or task.
- Restricted Approach Boundary. A distance from an exposed live part within which workers face an elevated risk of shock from an electrical arc combined with unintended movement. Crossing this boundary requires a documented work plan that has been approved by authorized management, as well as the proper PPE for the job being done and the voltage and energy levels involved.
- Prohibited Approach Boundary. A distance from an exposed live part within which work is treated as though direct contact were made with the live part. Anybody entering this zone must have the necessary training to work on energized conductors or live parts, and tools must be rated for direct contact at the voltage and energy levels involved.

The Danger Of Electrocution

Electrical shock or electrocution is the most evident and well-understood threat to data center workers. To get shocked, the worker must touch an energized surface, such as a terminal or bus bar, and become a part of the electrical path. The current travels through the body, injuring or killing people. The keys to avoiding electrocution are simple to understand. Employees must wear the proper safety gear and unplug or otherwise de-energize live equipment before working on or near it, according to OSHA requirements.
To further reduce the risk of electrical shock, today’s data centers are increasingly using low- and medium-voltage touch-safe panelboards, breakers, switches, and other devices that are doubly insulated and help avoid worker exposure to live parts. This reduces the risk of electrocution and avoids the need to de-energize vast areas of the data center in order to maintain them.

Consider Possible Arc Flash Accidents

Arc flashes are possibly the most dangerous threat to data center workers. Because data centers operate with multiple megawatts of electricity, they are vulnerable to arc flash incidents, which can result in significant damage, fire, injury, or death. The United States averages 5 to 10 arc flash incidents a day, with 20% of them occurring in motor control centers and switchgear and another 18% in custom control panels. The risks of the electric arc have only become properly understood in the last 20 to 30 years. Arc flashes were first mentioned in a paper published in the IEEE Transactions on Industrial Applications in 1985. The National Fire Protection Association (NFPA) defined the phenomenon as a dangerous condition associated with the release of energy caused by an electric arc, with the hazard to workers depending on the available short-circuit current in the circuit and the arc duration. An arc forms when electricity travels between two or more separated, energized conducting surfaces. When an arc is formed, the surrounding atmosphere is rapidly heated to temperatures of up to 20,000 K, resulting in the formation of plasma. Conductors evaporate; metal transitions from a solid to a gaseous state. This expansion also generates a high-pressure wave that can reach 500 pounds per square inch, strong enough to rip out 3/8″ bolts and blast entire substations apart. It can even crush a person’s chest and kill them without burning them. Electric arc burns account for a large percentage of electrical-related injuries.
Arcs can produce temperatures as high as 35,000°F, approximately four times that of the sun’s surface. At a distance of up to five feet, the outcome can be fatal burns, and at a distance of up to ten feet, the result can be serious burns. The arc flash creates a 5,000-foot-per-second fireball that contains metal droplets and copper fumes that behave as shrapnel, injuring not only the worker but also anyone nearby. People can be temporarily or permanently blinded or have their eyes damaged by the strong light created by the arc. Sound levels can reach a startling 160 decibels, compared with roughly 115 decibels at a loud concert, and cause permanent hearing loss. Arc flashes can ruin equipment, creating significant downtime and necessitating costly replacement and repair, in addition to the possibility of injury and death. They can also ignite flammable materials in the vicinity, resulting in secondary fires that can destroy entire buildings.

Protection Against Arc Flash Is A Must

The massive amounts of electricity available at a data center’s switchgear and motor control centers make arc flashes a distinct possibility. Today’s data centers are increasingly adopting technologies that protect against arc flash incidents in one of two ways: by confining the event’s energy or by swiftly cutting off the arc’s source. Products that provide passive arc flash protection seek to confine the arc flash to the area where it occurs, reducing possible damage to equipment and humans. Arc-resistant switchgear is being installed in many data centers to lessen the risk of employee harm and equipment damage. When an arc fault occurs, this equipment directs the expanding hot gases away from employees at the front, back, and sides of the switchgear through a system of vents and flaps. The switchgear doors, walls, and panels are reinforced and sealed so that they survive the transient pressure spike until the relief vents and flaps activate.
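An aside on the sound levels cited above: decibel figures understate the difference because the scale is logarithmic, with each 10 dB step corresponding to a tenfold increase in sound intensity. A quick calculation makes the gap concrete:

```python
def intensity_ratio(db_a, db_b):
    """Ratio of sound intensities for two decibel levels: 10^(difference/10)."""
    return 10 ** ((db_a - db_b) / 10)

# A 160 dB arc blast vs. a 115 dB concert: ~31,623 times the intensity.
print(round(intensity_ratio(160, 115)))  # 31623
```

So the jump from a loud concert to an arc blast is not "45 decibels louder" in any linear sense; it is tens of thousands of times more intense, which is why permanent hearing damage is an immediate risk.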
Damage is contained in the compartment where the fault originated, rather than spreading to adjacent compartments, thanks to this design. Products that provide active protection against arc flashes aim to minimize or eliminate the energy released by the arc flash itself. These technologies are designed to swiftly detect the massive and practically instantaneous rise in light intensity that accompanies an arc flash fault—up to several thousand times typical ambient lighting levels—and then act to reduce the arcing period. Arc flash detection solutions can send out a trip signal in as little as 2.5 milliseconds after the arcing fault has started. A low-voltage detection and relay system, for example, uses a fiber optic sensor to detect the flash, quickly analyses the signal, and sends a trip or disconnect signal to terminate the arc. Another form of the product combines numerous components, such as detection and release electronics and related primary switching elements, to launch a parallel three-phase short-circuit to earth in the event of an arcing fault. The primary switching element’s exceptionally fast switching time, often less than 1.5 milliseconds, along with the rapid and reliable detection of overcurrent and light, ensures that an arc fault is extinguished practically instantly. Switchgear may accomplish the highest level of protection for both people and equipment with this method. Remote monitoring products are also being used to guard against arc flash. Many organizations use products that monitor and diagnose electrical equipment remotely, which decreases the risk of an arc flash occurrence by limiting direct employee exposure to equipment. These monitoring solutions range from small, equipment-specific devices, such as remote temperature sensors for power buses, to centralized systems such as AKCPro Server: Centralized Monitoring Software, which monitors almost all data center processes.
In the latter case, interlocks or lockouts prevent personnel from operating a potentially dangerous device, or alternate control techniques assure safe operation while preserving uptime.

Don’t Get Comfortable

There are no regulations, standards, or safety czars that can substitute for common sense when it comes to safeguarding the safety and health of data center employees. There is always an inherent risk when people work with electrical equipment. This is the reason for having the standards, and why they must be strictly adhered to. Don’t get too comfortable. A false sense of security could be the biggest threat to employees in a data center. It can lead to carelessness and should always be avoided. Always keep in mind that every task involving electricity is risky. Always be vigilant and practice double-checking. And never deviate from the rules!

AKCP Monitoring Solutions

For data center monitoring, including uninterruptible power supply monitoring, AKCPro Server is a must-have remote monitoring software solution. We provide remote battery, generator, and UPS monitoring and management in real time. AKCPro Server acts as the eyes and ears of your physical spaces, sending SMS and app alerts as soon as something happens. If any of the following occurs, designated personnel are notified:
- Extreme Temperature
- Water Leak
- Humidity Fluctuations
- VOC Gas Presence
- Unauthorized Entry
- Power or Equipment Failure
- Loss of Connectivity
- Airflow Disruption

An independent cellular gateway powers up to 30 individual environmental sensors. It records trends and notifies designated employees when equipment status changes or environmental threats are detected. Proprietary predictive analytics analyze conditions and track routines to enhance operating efficiency, safety, and health in the data center.
The following data center and server room equipment are monitored by our data center monitoring solution: UPS, generator, switchgear, communications gear, other mission-critical backup battery systems, and TCP/IP devices.
- Scales from data closets to full data centers.
- Ensures that employee productivity is not harmed as a result of switch failures.
- Prevents disruptions to IT infrastructure, unplanned outages, and downtime.
- Prevents disaster with E-mail, SNMP Trap, SMS, and phone-call warnings. When your sensors are in a critical condition, you'll know it.
You can ensure staff safety and health in the data center, quickly identify occurrences that could cause service interruptions, and protect your investments by using AKCPro Server for data center battery monitoring and UPS monitoring.
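The alert-on-threshold model described above can be sketched in a few lines. This is a simplified illustration, not AKCP's actual software; the sensor names and thresholds are hypothetical.

```python
# Minimal sketch of threshold-based environmental alerting
# (illustrative only -- not AKCP's actual implementation).

# Hypothetical acceptable (low, high) ranges per sensor type.
THRESHOLDS = {
    "temperature_c": (10.0, 35.0),
    "humidity_pct": (20.0, 80.0),
}

def evaluate(readings: dict) -> list[str]:
    """Return alert messages for any out-of-range readings."""
    alerts = []
    for sensor, value in readings.items():
        low, high = THRESHOLDS.get(sensor, (float("-inf"), float("inf")))
        if not (low <= value <= high):
            alerts.append(f"ALERT: {sensor}={value} outside [{low}, {high}]")
    return alerts

def notify(alerts: list[str]) -> None:
    # A real system would fan out to SMS, e-mail, SNMP traps, etc.
    for message in alerts:
        print(message)

notify(evaluate({"temperature_c": 41.2, "humidity_pct": 55.0}))
```

A production system would add trend recording and escalation, but the core decision is this simple comparison loop.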
Let's take a look at the impact of SIP. Aside from the pure technology play, SIP represents a fundamental change in the economics of the telecommunications market. The carriage of telecommunications has been in transition, with a steady migration from distance-sensitive to usage-sensitive pricing. Historically, there were three components to the cost of a telephone call: origination, interexchange (e.g., inter-LATA) carriage, and termination. The US telecommunications market has been moving toward a consolidation of service providers: the local exchange carrier (LEC) and competitive local exchange carrier (CLEC) market is becoming as consolidated as the inter-exchange carrier (IXC) market. Where do we draw the line on enhanced service providers? Generally, throughout the rest of the planet, telecommunications services are still owned and operated by government monopolies. Rural telecommunications for much of the planet is predicated on the payment of termination fees. If your favorite telephone company wants to connect a call to your parents in another country, it must pay a termination fee to the phone company in that country. This is not unlike the model of a US-based LEC paying the IXC, which paid a fee to terminate your phone call in another LEC. A complex price model and tariff structure exists even with the current "bucket of minutes" concepts borrowed from the wireless carriers. At issue here is the impact of SIP phone calls made through the internet, both public and private. With the growing acceptance of SIP trunking and the development of E.164, internet alternative carriage is also pressuring the move from TDM to VoIP. Electronic Number Mapping (ENUM) provides users with what marketing people would call "experiential compatibility". Being able to dial phone numbers in the manner that users have become comfortable with is absolutely essential to the success of the migration to, and adoption of, VoIP solutions. SIP and ENUM work together to accomplish this.
The impact on the economics of telephony services is dramatic, both at the infrastructure level and at the usage level. The cost of building packet-switched networks, like the cost of building out wireless networks, requires significantly less capital investment. The cost to users will certainly not be distance-based, but access- and bandwidth-based. SIP also provides for the increased use of multimedia communications solutions, also a bandwidth-intensive application.
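The ENUM mapping that makes dialing familiar numbers possible is mechanical enough to sketch: per RFC 6116, strip the non-digits from an E.164 number, reverse the digits, and join them under the e164.arpa domain. The function below is an illustrative sketch, not a production resolver.

```python
def e164_to_enum(number: str) -> str:
    """Convert an E.164 number (e.g. +15551234567) to its ENUM domain
    name, per the scheme defined in RFC 6116."""
    digits = [c for c in number if c.isdigit()]
    # Reverse the digits and join with dots under e164.arpa.
    return ".".join(reversed(digits)) + ".e164.arpa"

# "+1 555 123 4567" maps to "7.6.5.4.3.2.1.5.5.5.1.e164.arpa"
```

A resolver would then look up NAPTR records at that domain to discover, for example, a SIP URI for the number.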
The terms Internet of Things (IoT) and Internet of Everything (IoE) may sound similar, but they are two different, though closely related, concepts. While IoT is all about interconnected devices or physical objects that can be accessed through the internet, IoE refers to the connection of people, business processes, things or devices, and data with the internet. IoE is therefore a broader concept than IoT, and it encompasses people, processes, and things. There are basically four pillars of IoE: people, data, processes, and things. IoT, on the other hand, is mainly concerned with things. IoT refers to connecting devices and objects to the internet, and also to each other, to establish machine-to-machine (M2M) communications for better and more intelligent decision-making. The concept of IoE emerged at Cisco, which describes it as “the intelligent connection of people, process, data, and things.” According to Cisco, “IoE is bringing together people, process, data, and things to make networked connections more relevant and valuable than ever before-turning information into actions that create new capabilities, richer experiences, and unprecedented economic opportunity for businesses, individuals, and countries.” The basis of this concept is the idea that in the future, it will be possible to extend internet connections beyond machines. With greater access to data and opportunities for expanded networking, machines are likely to become more intelligent. In the future, IoE is likely to revitalize and transform industries and businesses at multiple levels, such as the business model, business processes, and the business moment. IoE is expected to boost the emergence of new business models that will change the way a particular business is carried out. For example, we all know that smart cars are going to revolutionize the automobile industry.
By developing autonomous or driverless cars, IT companies like Google will not only enter the automobile industry but will also disrupt its current business practices. Likewise, IoE will also make business processes more efficient. With IoE, it will be possible to gain greater insights for better decision-making. Moreover, with the expansion of data sources, new types of information will be available to enterprises, which they can use to generate deeper insights on business processes. The term ‘business moment’ refers to the requirement to compete with speed and agility to keep pace with the speed of data generation. With millions of new objects and sensors, IoE will lead to the creation of huge volumes of data. Therefore, businesses have to speed up and reinforce the process of collecting, analysing, storing, and leveraging data in order to maintain their agility and competitiveness. To sum up, IoE will present huge opportunities for growth. But businesses will have to equip themselves to handle, store, and analyse a huge amount of data, and for this, they will have to reinvent themselves by investing more and more in R&D and technology.
Threat intelligence provides organizations with actionable cybersecurity insights. You can use threat intelligence to improve your overall security posture or determine specific threats that put your network and users at immediate risk. In this article, you will learn what threat intelligence is, what types of threat intelligence exist, and best practices for integrating threat intelligence.
What Is Threat Intelligence?
Threat intelligence is information related to security threats, including vulnerabilities, vulnerability exploits, malware, threat actors, attack methods, and indicators of compromise or attack (IoC / IoA). It differs from threat data in that it is actionable and requires analysis; it is not simply facts about threats. Threat intelligence is used to improve security tooling and inform security professionals about the methods, capabilities, motives, and infrastructures used by potential attackers. As the number of cybercrimes has increased, threat intelligence has become a significant industry, with data collected by many security professionals. Two major contributors to threat intelligence are commercial providers and government agencies. On a smaller scale, organizations can also collect and create threat intelligence. Commercial providers, such as security vendors, often manage security research teams and may sell intelligence in the form of data feeds or as integration in products. Government agencies also host research teams and either make information publicly available or use information privately to increase national security.
Types of Threat Intelligence
There are four main types of threat intelligence that you can implement:
- Strategic—includes analyses of threat trends and risks. It is used to create a non-technical overview of existing threats. Strategic intelligence is often distributed in policy documents, whitepapers, and industry publications.
- Tactical—includes specific details of tactics, techniques, and procedures (TTP) used by attackers. This type of intelligence is more technical in nature and is often distributed via security research documents, feeds, or security forums.
- Technical—includes indicators of threats, such as malicious URLs or malware hashes. This type of intelligence is incorporated into many security tools but must be constantly updated to remain effective.
- Operational—includes details of specific attacks, including attack intent, timing, and responsible parties. Data for this type of threat intelligence is often directly collected during incident investigations by incident responders.
The Threat Intelligence Lifecycle
When threat intelligence is created, organizations and researchers use a structured process. This process is modeled after intelligence processes developed by military or governmental intelligence agencies. It involves six stages. The direction stage is effectively the planning and reconnaissance stage of the intelligence creation cycle. In this stage, the intelligence creator must determine what assets need to be protected and what information is needed to accomplish that protection. During the collection stage, threat data is collected from a wide variety of sources, both internal and external. Common sources include:
- Log data from previous attacks
- Vulnerability databases and datasets
- Public threat data feeds
- Interviews with cybercrime experts
- Publicly available news and security research
- Dark web forums and inside sources
The processing stage involves transforming collected data into a usable format that can be analyzed and correlated. For qualitative data, this could mean ranking and categorizing data. For quantitative data, this could mean cleaning and reformatting data. The analysis stage involves taking the processed data and forming actionable conclusions. For example, creating files that can be ingested by security tooling to detect IoCs.
Or, creating clear reports with risk evaluations that organizations can use to accurately budget for defensive security strategies. In the dissemination stage, the created reports or files are delivered to security tooling or end-users. From there, intelligence can be applied to improving the security of systems, procedures, and solutions. In the final stage, intelligence consumers provide feedback to the providers. This feedback might include how helpful intelligence is for detecting or preventing attacks, whether it can be implemented successfully via tooling, or how it does or doesn’t support response decisions. Intelligence providers can then use this feedback to improve future intelligence.
Best Practices for Integrating Threat Intelligence
Incorporating threat intelligence into your security strategies and processes can help you ensure that your system remains as secure as possible. Consider starting with the following best practices.
Integrate with security solutions
Threat intelligence is meant to be incorporated into existing security procedures and solutions. In particular, threat intelligence can help direct automated systems. These systems can correlate data with intelligence faster than humans can, making more efficient use of intelligence. For example, solutions such as Security Information and Event Management (SIEM) combine well with intelligence. SIEM solutions provide centralized collection and monitoring of system event data. With intelligence information, SIEMs can more accurately identify suspicious events and can alert security teams earlier.
Use to reduce “alert fatigue”
Security teams can quickly get overwhelmed with alerts from security solutions, resulting in “alert fatigue”. This can lead to alerts being overlooked or purposely ignored, both of which increase an organization’s risk. Incorporating threat intelligence can help you refine the thresholds for your alerts, ensuring that the most important ones are prioritized.
Implement threat hunting
Threat hunting involves proactively searching for threats and threat evidence. It is useful for detecting threats that have bypassed your protective measures, as well as for internal threats. Threat hunting often requires in-depth threat intelligence to identify evidence that has otherwise been overlooked. It can also be a good source of data for future intelligence.
Focus on prevention
Intelligence is most helpful when you can use it to improve your security policies and patch your vulnerabilities before an attack. You can use intelligence as a guideline for how to restrict permissions, limit system access, or identify patchable vulnerabilities. Even if an attacker breaches your systems, preventative steps can help you limit any damage caused. For example, you can use intelligence to help tools automatically classify threat priority levels or to perform automated responses. This can help you halt attackers early on.
Threat intelligence is incredibly useful for providing teams with actionable data, typically categorized into strategic, tactical, technical, and operational intelligence. The threat intelligence lifecycle is divided into six stages: direction, collection, processing, analysis, dissemination, and feedback. This working model is adjustable, but recommended for use by many organizations and entities. When introducing threat intelligence into your ecosystem, check for native solutions that fit your current SIEM or other cybersecurity solution. Once you are properly connected with viable data sources and controls, you will be able to reduce alert fatigue, implement threat hunting, and prioritize threats.
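Technical intelligence, such as malware hashes, is typically consumed by matching observed artifacts against an indicator feed. A minimal sketch of that matching step follows; the feed entry here is a made-up stand-in, not a real indicator.

```python
import hashlib

# Hypothetical indicator feed: SHA-256 hashes of known-bad content.
# In practice this set would be populated from a threat data feed.
IOC_HASHES = {
    hashlib.sha256(b"malicious-sample").hexdigest(),  # stand-in entry
}

def file_is_flagged(data: bytes) -> bool:
    """Return True if the content's SHA-256 appears in the IoC feed."""
    return hashlib.sha256(data).hexdigest() in IOC_HASHES
```

As the article notes, such indicator sets must be refreshed constantly to remain effective; a stale feed silently stops flagging current threats.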
In computing, the Post Office Protocol (POP) is an application-layer Internet standard protocol used by local e-mail clients to retrieve e-mail from a remote server over a TCP/IP connection. POP and IMAP (Internet Message Access Protocol) are the two most prevalent Internet standard protocols for e-mail retrieval. Virtually all modern e-mail clients and servers support both. The POP protocol has been developed through several versions, with version 3 (POP3) being the current standard.
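POP3 is a simple line-oriented protocol: the client sends commands such as USER, PASS, STAT, RETR, and QUIT, and the server answers with +OK or -ERR status lines. As a small sketch of that text-based exchange, here is a parser for the STAT response ("+OK <count> <octets>"); a real client's session handling is more involved.

```python
def parse_stat(line: str) -> tuple[int, int]:
    """Parse a POP3 STAT response such as '+OK 2 320' into
    (message_count, mailbox_size_in_octets)."""
    parts = line.strip().split()
    if len(parts) < 3 or parts[0] != "+OK":
        raise ValueError(f"unexpected STAT response: {line!r}")
    return int(parts[1]), int(parts[2])
```

In practice you would use a full client, such as Python's standard-library poplib module, rather than hand-rolling the protocol.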
- Includes Employee X, Employee Y, and Employee Z
- Has access to Folder 3, Folder 4, and Folder 5
- Includes Employee A, Employee B, and Employee C
- Has access to Folder 1, Folder 2, and Folder 3
You can see how quickly this could become extremely complex and difficult to manage. Fine-grained access control, including attribute-based access control, is a more elegant and granular way of controlling access to data and resources, and a much more powerful alternative.
Read More: RBAC vs. ABAC Explained
What is Fine-Grained Access Control?
Fine-grained access control is a method of controlling who can access certain data. Compared to generalized data access control, also known as coarse-grained access control, fine-grained access control uses more nuanced and variable methods for allowing access. Most often used in cloud computing, where large numbers of data sources are stored together, fine-grained access control gives each item of data its own specified policy for access. These criteria can be based on a number of specific factors, including the role of the person requesting access and the intended action upon the data. For example, one individual may be given access to edit and make changes to a piece of data, while another might be given access only to read the data without making any changes.
Why is Fine-Grained Access Control Important?
In cloud computing, the ability to store large amounts of information together is a substantial competitive advantage. However, this data can vary in terms of type, source, and security level — particularly when taking into account data security compliance laws and regulations relating to customer data or financial information.
Coarse-grained access control may work when data types can be stored separately and access to specific data types can simply be assigned based on storage location (e.g., Tim can access X folder, Natalie can access Y folder, etc.), as in on-premises environments. But when data is stored together in the cloud, fine-grained access control is essential since it allows data with different access requirements to ‘live’ in the same storage space without running into security or compliance issues.
How is Fine-Grained Access Control Used?
Here are some of the most common use cases for fine-grained access control:
Use Case 1: Multiple Data Sources Stored Together
In the cloud, large batches of diverse data types are stored in one place. You can’t simply grant wholesale access to these storage segments based on roles — there may be certain data types that can be accessed by a certain role and others that should not be. Fine-grained access control is therefore essential because it sets access parameters for specific data types, even when stored together.
Use Case 2: Varying Degrees of Access, Based on Roles
One of the most significant benefits of fine-grained access control is that it allows for varying degrees of access, rather than a pass/fail approach based on the user, their role, or the organization to which they belong. In coarse-grained systems, data may simply fall into one of two categories — permitted or forbidden — based on who is attempting to access it. But with fine-grained access control, there’s room for a bit more subtlety and variation. For example, imagine three employees with different roles and levels of access. For a certain piece of data, you might set parameters so that one of the employees can access the file, make changes to it, and even move its location. The second employee may be allowed to see the file and move it but not access it. The third employee might only be given permission to read the file.
This level of specificity can help your company avoid the inconvenience and frustration that comes with someone needing to view data but being unable to because their permissions are fully restricted.
Use Case 3: Securing Mobile Access
More and more companies are offering support for accessing data remotely through mobile devices such as smartphones. Meanwhile, the standard workday is being extended as people work from home or at differing hours. With this in mind, companies may need to implement data access controls that are based not just on role or identity but also on factors such as time or location. Fine-grained access control allows for this. For example, you may be able to limit access permissions to a specific location so that employees can’t access data from third-party wireless networks that could be exposed to breaches.
Use Case 4: Third-Party Access
In many cases, a B2B business may want to give a third party access to some of its assets stored in the cloud, without risking accidental changes to data or compromised security. Fine-grained access control can allow these companies to grant third parties read-only access, keeping their data properly secured.
What are the Elements of Fine-Grained Access Control?
There are generally considered to be three primary forms of access control solutions:
- Role-based access control
- Attribute-based access control
- Purpose-based access control
Role-based solutions are considered coarse-grained because they organize users into ‘roles’ and grant or deny access rights based only on these roles, while ignoring other factors. This means they may be overly broad or restrictive, and not able to scale efficiently. In fact, an independent study found that Apache Ranger’s RBAC approach required 75x more policy changes than Immuta’s attribute-based method. When it comes to fine-grained access control, the two primary approaches are attribute-based access control and purpose-based access control.
Attribute-Based Access Control
Attribute-based access control assigns ‘attributes’ to specific users and data, then determines access based on those attributes. These attributes could include a user’s position or role, but may also include their location, the time of day, and other factors. Data attributes might include the type of data, creation date, or storage location, among others.
Purpose-Based Access Control
The most flexible form of access control authorization, purpose-based access control combines a range of roles and attributes using logical connections that are flexible and evolving. It is considered a fine-grained access control solution because it uses multiple attributes to determine whether data can or cannot be accessed, and to what extent. This blog on enabling Zero Trust with Databricks and Immuta walks through purpose-based access control in action.
Choosing a Data Access Control Tool
Looking for a data access control tool that will provide fine-grained access control and much more? Immuta enables self-service data access with automated, attribute- and purpose-based controls that are dynamically applied at query time. These data access controls are complemented by a suite of other features that enhance data access control and universal cloud compatibility and security, including:
- Sensitive Data Discovery and Classification
- Dynamic Data Masking
- Data Policy Enforcement and Auditing
With Immuta’s fine-grained, dynamic access control capabilities, data engineering and operations teams have decreased their number of roles by 100x and reduced self-service data access from months to mere seconds. Ready to learn more? Request an Immuta demo today.
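The attribute-based model described above boils down to evaluating predicates over user, resource, and context attributes at request time. The toy decision function below illustrates the idea; the attribute names and the policy itself are invented for this example and are not any vendor's actual rules.

```python
def abac_allow(user: dict, resource: dict, action: str, context: dict) -> bool:
    """Toy attribute-based access decision: permit 'read' on
    non-sensitive data, and 'write' only for analysts during
    business hours on the office network (hypothetical policy)."""
    if action == "read" and not resource.get("sensitive", True):
        return True
    if (action == "write"
            and user.get("role") == "analyst"
            and 9 <= context.get("hour", -1) < 17
            and context.get("network") == "office"):
        return True
    return False
```

Note how the same user can be granted or denied depending on time and location, which is exactly what a coarse role-only check cannot express.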
New Training: Explain the Principles of SD-WAN
In this 8-video skill, CBT Nuggets trainer Jeff Kish teaches you about the principles of Cisco’s Software-Defined Wide Area Networks (SD-WAN). Gain an understanding of SD-WAN architecture, topology options, programmability tools, and more. Watch this new Cisco training.
Watch the full course: Cisco CCNP Enterprise Core
This training includes: 40 minutes of training
You’ll learn these topics in this skill:
- Traditional WAN Challenges
- Benefits of SD-WAN
- The Cisco SD-WAN Solution
- Cisco SD-WAN Management
- Cisco SD-WAN Topologies
- Cisco SD-WAN Programmability
- Review and Quiz
What is SD-WAN?
A Software-Defined Wide Area Network (SD-WAN) is a WAN that uses software to control management, connectivity and services between locations, such as data centers, branches and cloud instances. By using an SD-WAN, you can separate your control plane from the data plane. SD-WANs can include a variety of devices, such as routers, switches and vCPE (virtualized customer premise equipment). These devices run software that provides features that manage policy, security and networking functions. SD-WANs can also manage the connections between your transports, such as MPLS, broadband and LTE. They can further segment, partition and secure your traffic across the WAN, to protect it from vulnerabilities in other parts of your organization. One of the driving forces behind many organizations turning to SD-WANs was the need to provide secure remote access to cloud applications. With an SD-WAN, you can set up secure zones and then direct traffic from them based on policies that you define.
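Policy-driven traffic steering, the core of the SD-WAN idea described above, can be sketched as an ordered rule lookup that maps traffic attributes to a transport. The rules and transport names below are hypothetical, not Cisco's actual policy syntax.

```python
# Hypothetical steering policy: predicates evaluated in order, first match wins.
# Transport names mirror the transports mentioned above (MPLS, broadband, LTE).
POLICY = [
    (lambda flow: flow["app"] == "voip", "mpls"),       # latency-sensitive traffic
    (lambda flow: flow["app"] == "saas", "broadband"),  # cloud apps go direct
    (lambda flow: True, "lte"),                         # catch-all backup path
]

def select_transport(flow: dict) -> str:
    """Return the transport chosen by the first matching policy rule."""
    for predicate, transport in POLICY:
        if predicate(flow):
            return transport
    raise LookupError("no matching policy rule")  # unreachable with a catch-all
```

In a real SD-WAN, the controller distributes such policies to edge devices, which apply them per flow; the separation of that control logic from packet forwarding is the control-plane/data-plane split the text describes.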
So far, 2020 has been a rough year for everyone, not just the cybersecurity space. Various threats and vulnerabilities had already led to cyberattack incidents during the first quarter. Now, with the coronavirus pandemic, users also have to be wary of both physical and digital risks. Malicious actors are using COVID-19 to launch email spam, business email compromise (BEC), and malware attacks on individual users and organizations. Hackers often exploit disasters by releasing themed malware into the wild. Fake websites and applications that supposedly provide mapping and tracking information on coronavirus cases have been stealthily installing malware on target devices. Some of these sites have been identified as installing a variant of the AZORult malware, designed to steal sensitive data and cryptocurrencies and propagate its spread across networks. For organizations, this increase in threats only adds to the already rampant problem of malware attacks. Threat groups have been using automated mechanisms to constantly probe networks and infrastructure and deploy malware. Considering the complexity of modern malware, it is crucial for companies to have tools that can capably mitigate these kinds of risks. Malware disarm firm odix looks to provide enterprise-grade security to organizations through its ecosystem of services.
A destructive threat
Modern malware can perform a variety of destructive actions. Viruses can delete work and system files to render endpoints useless. Trojans and spyware can be used to steal sensitive information from users, resulting in massive data breaches. Remote access tools can provide advanced persistent threats (APTs) and threat groups the means to access and hijack infected devices and endpoints over the internet. Ransomware, which has been the bane of many enterprises over recent years, is designed to encrypt files on target computers to force users to pay a ransom in exchange for the means to regain access to their documents.
The WannaCry outbreak of 2017 took down critical operations of large governmental and commercial organizations worldwide. Newer ransomware variants are now even coded to exfiltrate private data before running encryption, allowing hackers to already gain from stolen data even if the victims opt not to pay the ransom. Malware is often used as part of more complex attacks by APTs. They often use social engineering attacks such as phishing or spoofing in order to trick users into downloading malware into their computers. Once malware enters a device or network, it can then execute its designed purpose and wreak havoc across an organization’s entire infrastructure. For organizations, malware attacks are among the most expensive ones to deal with, costing upwards of $2.6 million to deal with annually. Conventionally, users can protect their devices using antiviruses and antimalware solutions. Many of these tools rely on signature-based detection which compares a file against a database of known malware samples. If a suspicious file matches a signature in an antivirus database, the tool would then remove the file either through deletion or quarantine. Unfortunately, malware designers have become quite creative in trying to circumvent these measures by using polymorphic code which automatically and continuously alters the malware’s form and signature. As such, these conventional tools are inadequate to deal with these new threats. Organizations need to implement more capable solutions. Odix uses content disarm and reconstruction (CDR) as its key approach to disarming malware. Through CDR, a potentially malicious file or document is scanned for any traces of malicious code. The suspect code is then removed, and the file is reconstructed to regain its usability. Odix’s TrueCDR even retains the file type during the sanitization process to improve its recovery of the infected file. 
Files are also scanned at a binary level in order to weed out malware that uses polymorphic code to disguise itself. A key advantage of CDR is that it can detect and disarm newer variants of malware, unlike signature-based solutions that require a malware’s signature to be in their databases first. Odix can be deployed at various parts of an organization’s infrastructure. The solution can work with network file applications, allowing it to scan and sanitize files as they move through the network. It can be deployed on email servers to ensure that malicious attachments are disarmed even before they reach a recipient’s inbox. Odix also has an application programming interface (API) which can be used to integrate CDR with other enterprise applications. In addition, the company offers a Kiosk solution which serves as a standalone workstation on which users can perform CDR on removable storage devices like USB drives and portable disks. This is useful for industrial facilities that keep parts of their infrastructure isolated from networks but require that their data be moved around using these devices.
Stronger posture for organizations
Falling victim to a cyberattack can have grave consequences for any enterprise. Considering how malware has become a constant threat even with other attack methods, companies would do well to improve their malware protection. Solutions such as odix provide the necessary capabilities to deal with the complexities of modern malware. The company is also working toward making its solutions accessible to smaller firms. Recently, the firm was awarded a €2 million grant by the European Commission to make its solutions available to small to medium-sized enterprises (SMEs). This is potentially a game changer, as SMEs are frequently targeted by hackers due to their lack of robust protection. With the availability of such solutions, organizations can work toward developing a stronger security posture.
By capably dealing with the malware threat, they can minimize their risk of being victimized by catastrophic cyberattacks and keep their own data and their stakeholders’ information safe.
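As a rough, toy illustration of the disarm-and-reconstruct idea (emphatically not odix's actual engine, which works at the binary level across many file types), the function below strips embedded script elements from an HTML-like document while leaving the rest of the content intact and usable:

```python
import re

# Toy "disarm and reconstruct": remove active <script> content from an
# HTML-like document, returning a still-usable file without the payload.
SCRIPT_RE = re.compile(r"<script\b.*?</script\s*>", re.IGNORECASE | re.DOTALL)

def disarm_html(document: str) -> str:
    """Remove embedded script blocks; everything else is preserved."""
    return SCRIPT_RE.sub("", document)
```

The essential property this toy shares with real CDR is that the output is a reconstructed, functional document rather than a quarantined or deleted file.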
It would be fair to say that the Covid-19 coronavirus pandemic has witnessed the greatest shift in society this century, with most nations currently in varying states of lockdown. Covid-19 first appeared in China in December 2019; the UK’s first case was recorded a month later. The UK went into lockdown on 23 March, and this is still ongoing at the time of writing. The legal basis for the lockdown, the Coronavirus Act 2020, was introduced on 19 March and passed all remaining stages of consideration in the House of Commons on 23 March. It passed without a vote, before passing through the House of Lords and gaining royal assent two days later. “Through this bill, we are implementing at least a dystopian society. Some will call it totalitarian, which is not quite fair, but it is at least dystopian. The bill implements a command society under the imperative of saving hundreds of thousands of lives and millions of jobs, and it is worth doing,” commented MP Steve Baker during the reading of the bill. “This is the right thing to do but, my goodness, we ought not to allow this situation to endure one moment longer than is absolutely necessary.” While the Covid-19 infection rate has dropped since the lockdown came into effect, there are concerns that a second wave of infections could occur if lockdown and social distancing measures are lifted too early. However, there are also concerns that a prolonged, economically damaging lockdown could be just as harmful. A report by the Tony Blair Institute for Global Change, titled A price worth paying, noted that most countries were torn between three possible outcomes of the pandemic: an overwhelmed health system, an economic crash, or increased surveillance. It states: “If they try to keep their economies open for business, then there will be significant loss of life as health systems are overwhelmed.
Alternatively, if they impose strict lockdowns to suppress the spread of the virus, then the resulting economic damage – counted in both statistical lives and jobs and prosperity – may ultimately be worse than the disease itself."

Contact tracing: the story so far

Google and Apple are releasing an application programming interface (API) that will enable Bluetooth technology to be used as a tool for contact tracing. This API will allow interoperability between Android and iOS devices for apps from public health authorities. In the coming months, this functionality will be built into these platforms to offer a more robust solution than the initial API.

"We have been notified by Google and Apple regarding their work to support contact-tracing initiatives. During these unprecedented times, data protection law can work flexibly to protect lives and data," said a spokesperson for the Information Commissioner's Office (ICO). "We will be reviewing the technical details and engaging with Google and Apple as their work continues, to ensure privacy issues are considered, while also taking into account the compelling public interest in the current emergency."

NHSX has been developing a contact-tracing app, which is intended to form part of the government's response to lifting the quarantine measures. Contact tracing is intended to act as a reporting mechanism, warning smartphone owners if they have come into contact with someone who has started displaying symptoms of Covid-19, or with someone who has tested positive for the virus. The UK's contact-tracing app will use a different model to the method proposed by Apple and Google, despite concerns raised about the privacy and performance of the NHSX app. 
How it works

There are several methods by which contact-tracing apps could collect contact data and track users' movements, including:
- Location-tracking through a device's GPS;
- Identifying proximity contacts through Bluetooth or Bluetooth LE;
- Using mobile phone towers to determine approximate location.

For contact tracing to be successful in the UK, at least half of all smartphone users will need to install the app. If this is not achieved, as with Singapore's TraceTogether app (which was downloaded by less than 20% of the population), there would be insufficient data gathered for it to be effective. A further challenge, just as with Singapore, is that a fifth of the UK population do not own a smartphone, including half the population over 50. Unfortunately, it is this demographic who are most at risk, and therefore in greatest need of contact tracing. What we have learned from other use cases of contact tracing is that it can play a valuable role in easing restrictions, but is insufficient by itself.

There are broadly two methodologies by which contact-tracing apps determine who has potentially been exposed to Covid-19:
- Centralised – proximity contacts are uploaded to a central server, which then notifies people who have been exposed to the virus.
- Decentralised – the server notifies all devices of contact IDs that potentially carry the virus, and each device determines whether its owner has been in contact with any of them within the relevant time period. This is the method used by the Apple and Google API.

Once the app has been informed that a device owner is displaying symptoms of Covid-19, those who have been in contact with that person will receive an amber warning and be told to self-isolate. If the device owner then tests positive for Covid-19, a red warning will be sent.

The decentralised model is more secure, because contact matching is performed on the device. However, it is also harder to update the multiple versions of the app, due to platform differences. 
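To make the decentralised model concrete, the matching step can be sketched in a few lines of code. This is an illustration only – the identifier format, 14-day window and function names here are invented for the example, not taken from the NHSX app or the Apple/Google protocol:

```python
from datetime import datetime, timedelta

# Contacts this device has observed locally: pseudonymous ID -> time last seen.
# In a real app these IDs rotate frequently to prevent tracking.
local_contacts = {
    "a3f1": datetime(2020, 5, 1, 9, 30),
    "b7c2": datetime(2020, 5, 10, 14, 0),
}

def check_exposure(infected_ids, local_contacts, now, window_days=14):
    """Return locally-seen IDs that appear on the server's infected list and
    were encountered within the look-back window. The comparison happens on
    the device, so the server never learns who met whom (decentralised model)."""
    cutoff = now - timedelta(days=window_days)
    return sorted(
        cid for cid, seen in local_contacts.items()
        if cid in infected_ids and seen >= cutoff
    )

# Server broadcasts the infected ID list; the device checks it locally.
print(check_exposure({"b7c2", "e9d4"}, local_contacts, datetime(2020, 5, 12)))  # ['b7c2']
```

In the centralised model the same comparison runs on the server instead, which is why the server must hold everyone's contact graph.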
Furthermore, using the centralised model offers the NHS an invaluable pool of data for research and logistics planning.

Although contact tracing is sound in theory, each method carries challenges that need to be overcome. GPS struggles when devices are inside buildings and it cannot discern changes in elevation. Using mobile phone towers for tracking purposes is insufficiently precise to determine an accurate location, but could be used to confirm that people are observing lockdown measures. This method also cannot detect who an individual comes into contact with. Conversely, while Bluetooth can record which devices a user comes into contact with, it is not intended to passively operate in the background for long periods. It can also drain the battery and is subject to interference from other Bluetooth and location-tracking apps.

The impact of the number of false positives in suspected cases of Covid-19 will need to be considered. After a series of false positives, some people may be disinclined to continue following the warnings. This will be compounded when Bluetooth flags people who have been in close proximity in buildings but separated by walls and floors: these barriers block the transmission of the virus, but not Bluetooth signals.

One of the key concerns, especially with the centralised methodology, is that the server will contain all the location and connections data of everyone using the app. While this data would be valuable to the NHS, it could be exploited by malicious actors and stalkers. It could be argued that this data would be anonymised and encrypted, but these measures are not foolproof for protecting such personal data, which would be held in a centralised location and could be accessed remotely. There is also the likelihood of spoofed versions of contact-tracing apps being distributed. 
These could install spyware or malware on a subject's phone, as well as reducing the number of users of the official apps. Engagement with Apple and Google may mitigate this, but will not eliminate the problem.

Given our current understanding of Covid-19, infection can occur up to two weeks before a person becomes symptomatic. Contact data would therefore need to be deleted after the recommended period.

One of the primary concerns around contact tracing is how the information generated by such apps could be used by other agencies. While the current pandemic makes personal tracking apps a necessity, such apps – which would record where we go and who we speak to – would never be acceptable in normal times. From a legislative perspective, rather than attempting to justify the increased surveillance required for contact tracing through existing legislation, such as the Data Protection Act 2018 and the Investigatory Powers Act 2016, this has been granted through the emergency public health powers of the Coronavirus Act. However, concerns over mission creep remain. For example, contact tracing would provide useful data for security agencies.

Trials of the NHSX app are now getting underway, using Bluetooth LE to detect potential contacts, with contact tracing performed on a centralised server. Just as the Coronavirus Act needs to be reviewed, so too should the contact-tracing app automatically be deleted once the pandemic has passed. This should include all contact and location data.

Contact-tracing development needs to be conducted transparently and as open source for external oversight. Doing so should reassure the population that data would be collected responsibly and stored securely. Building the app with a privacy-first methodology will also go some way to reassuring people that their data is being respected and protected. Combined, this should persuade a sufficient proportion of the population to use the app. 
However, this would not address the fact that the most vulnerable members of the population, who would benefit most from contact tracing, are also least likely to own a smartphone.

Ultimately, we are living in extreme times, which unfortunately require extreme measures. Under normal circumstances, such invasive contact tracing would be untenable, but a temporary increase in surveillance is a small price to pay for preserving our health and livelihoods.

Read more about contact-tracing apps
- UK scientific experts address doubts over the contact-tracing app's effectiveness, particularly regarding data privacy, but admit that its nature may have to evolve as the roll-out scales up.
- Legal experts have told Parliament's Human Rights Committee that legislation is desirable to ensure public trust in the data security of the Covid-19 coronavirus contact-tracing app.
- BCS and Cass Business School call for the proposed UK contact-tracing app not to be launched without alignment to testing.
- Leading UK scientific experts assure MPs that the much-anticipated contact-tracing app will be ready for launch in May, but concede that challenges exist in achieving meaningful uptake.
- Singapore's Government Technology Agency contributes the source code of the BlueTrace protocol that powers its contact-tracing app.
- The Australian government launches the Coronavirus Australia app to keep people updated on the latest developments in the fight against the pandemic.
- Telecoms will play a key role in controlling the coronavirus through contact-tracing apps that can mitigate its spread, but only if people actually use the apps – and that's not guaranteed.
What Is a DDoS Attack and How to Stay Safe from Malicious Traffic Schemes

Imagine you're driving down a highway to get to work. There are other cars on the road, but by and large everyone is moving smoothly at a crisp, legal speed limit. Then, as you approach an entry ramp, more cars join. And then more, and more, and more until all of a sudden traffic has slowed to a crawl.

This illustrates a DDoS attack. DDoS stands for Distributed Denial of Service, and it's a method where cybercriminals flood a network with so much malicious traffic that it cannot operate or communicate as it normally would. This causes the site's normal traffic, also known as legitimate packets, to come to a halt. DDoS is a simple, effective and powerful technique that's fueled by insecure devices and poor digital habits. Luckily, with a few easy tweaks to your everyday habits, you can safeguard your personal devices against DDoS attacks.

DDoS Attacks Are on the Rise

The expansion of 5G, the proliferation of IoT and smart devices, and the shift of more industries moving their operations online have presented new opportunities for DDoS attacks. Cybercriminals are taking advantage, and 2020 saw two of the largest DDoS offensives ever recorded, with ambitious attacks launched on Amazon and Google. There is no target too big for cybercriminals.

DDoS attacks are one of the more troubling areas in cybersecurity, because they're incredibly difficult to prevent and mitigate. Preventing these attacks is particularly difficult because malicious traffic isn't coming from a single source. There are an estimated 12.5 million devices that are vulnerable to being recruited by a DDoS attacker.

Personal Devices Become DDoS Attack Soldiers

DDoS attacks are fairly simple to create. All it takes are two devices that coordinate to send fake traffic to a server or website. That's it. 
Your laptop and your phone, for example, could be programmed to form their own DDoS network (sometimes referred to as a botnet, more below). However, even if two devices dedicate all of their processing power to an attack, it still isn't enough to take down a website or server. Hundreds and thousands of coordinated devices are required to take down an entire service provider.

To amass a network of that size, cybercriminals create what's known as a "botnet," a network of compromised devices that coordinate to achieve a particular task. Botnets don't always have to be used in a DDoS attack, nor does a DDoS attack need a botnet to work, but more often than not they go together like Bonnie and Clyde. Cybercriminals create botnets through fairly typical means: tricking people into downloading malicious files and spreading malware.

But malware isn't the only means of recruiting devices. Because many companies and consumers practice poor password habits, malicious actors can scan the internet for connected devices with known factory credentials or easy-to-guess passwords ("password," for example). Once logged in, cybercriminals can easily infect and recruit the device into their cyber army.

Why DDoS Launches Are Often Successful

These recruited cyber armies can lie dormant until they're given orders. This is where a specialized server called a command and control server (typically abbreviated as a "C2") comes into play. When instructed, cybercriminals will order a C2 server to issue instructions to compromised devices. Those devices will then use a portion of their processing power to send fake traffic to a targeted server or website and, voila! That's how a DDoS attack is launched.

DDoS attacks are usually successful because of their distributed nature, and the difficulty of discerning between legitimate users and fake traffic. They do not, however, constitute a breach. This is because DDoS attacks overwhelm a target to knock it offline – not to steal from it. 
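Because defenders usually cannot tell an individual bot request from a legitimate one, mitigation often starts with per-source rate limiting: any single source that exceeds a sensible request budget gets throttled. A minimal sliding-window rate limiter might look like the sketch below – the window size and threshold are arbitrary illustrations, and real services tune them per endpoint:

```python
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 10
MAX_REQUESTS = 100  # illustrative per-window budget, tuned per service in practice

recent = defaultdict(deque)  # source IP -> timestamps of its recent requests

def allow_request(ip, now=None):
    """Sliding-window rate limiter: reject a source once it exceeds the
    per-window budget. This throttles individual flooders; a distributed
    attack spread across thousands of IPs also needs upstream filtering."""
    now = time.monotonic() if now is None else now
    q = recent[ip]
    # Drop timestamps that have aged out of the window.
    while q and now - q[0] > WINDOW_SECONDS:
        q.popleft()
    if len(q) >= MAX_REQUESTS:
        return False  # over budget: drop or challenge this request
    q.append(now)
    return True
```

The limitation the article points out applies directly: one flooding IP is easy to cut off this way, but a botnet of thousands of devices each staying under the budget still adds up to overwhelming aggregate traffic.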
Usually DDoS attacks will be deployed as a means of retaliation against a company or service, often for political reasons. Sometimes, however, cybercriminals will use DDoS attacks as a smokescreen for more serious compromises that may eventually lead to a full-blown breach.

3 Ways to Prevent Your Devices from Being Recruited

DDoS attacks are only possible because devices can be easily compromised. Here are three ways you can prevent your devices from participating in a DDoS attack:
- Secure your router: Your Wi-Fi router is the gateway to your network. Secure it by changing the default password. If you've already thrown out the instructions for your router and aren't sure how to do this, consult the internet for instructions for your specific make and model, or call the manufacturer. And remember, protection can start within your router, too. Solutions such as McAfee Secure Home Platform, which is embedded within select routers, help you easily manage and protect your network.
- Change default passwords on IoT devices: Many Internet of Things (IoT) devices, smart objects that connect to the internet for increased functionality and efficiency, come with default usernames and passwords. The very first thing you should do after taking your IoT device out of the box is change those default credentials. If you're unsure how to change the default setting on your IoT device, refer to the setup instructions or do a bit of research online.
- Use comprehensive security: Many botnets are coordinated on devices without any built-in security. Comprehensive security solutions, like McAfee Total Protection, can help secure your most important digital devices from known malware variants. If you don't have a security suite protecting your devices, take the time to do your research and commit to a solution you trust.

Now that you know what a DDoS attack is and how to protect against it, you're better equipped to keep your personal devices safe and secure. 
Follow us to stay updated on all things McAfee and on top of the latest consumer and mobile security threats.
As seasons change, so too does the weather. Hurricanes, lengthy storms, catastrophic flooding, uncontrollable wildfires and more – all cause devastation and destruction. Unfortunately, when victims of disasters need help the most, cybercriminals and scammers are very quick to step in and take advantage of both the vulnerable and the compassionate. Here are some ways they exploit unfortunate situations:

Disaster fraud is when cybercriminals and scammers target victims of a regional disaster by posing as aid provided by government agencies or reputable relief organizations. They contact victims directly via email, phone or even text messaging, asking for personal information such as social security numbers, credit card details, names and addresses. Once obtained, the scammers can use the information for identity theft or credit card fraud.

Tech support scams are common in the aftermath of disasters, especially when essential infrastructure such as phone lines or water and electrical equipment need repairing. A tech support scammer targets victims of a disaster with the promise of getting them back on track and online as quickly as possible. Always be wary of unsolicited tech support calls – especially in dire circumstances like a natural disaster. They are almost always phishing – never fixing.

Charity phishing occurs when cybercriminals and scammers create fake charities that allegedly raise money for victims of a disaster. They attempt to tug at heartstrings with emotion and feelings of helplessness when people are at their most vulnerable. Charity phishing scams target more than just victims of disaster – and before you can report them to law enforcement, they've disappeared. Before committing to a donation, always verify the legitimacy of a charity by using search engines and charity scam watchdogs. 
Four things you must know to protect yourself from disaster fraud:
- Government relief agencies will not contact you directly via email, phone, or text to ask for your financial or social security information.
- Government relief agencies will never pressure you over the phone to accept or do anything hastily. If the call sounds off, hang up and report it to local law enforcement.
- Government officials never ask for money in exchange for the relief services they are obligated to provide to citizens.
- Never enter your email address and password into forms that appear to be from government agencies without verifying their legitimacy first. Always use a unique password and, if possible, a unique username to protect your other accounts.
Cyber security is no longer just a concern for the IT departments in large corporations. Over the last decade, the frequency and variety of cyber attacks have increased drastically. The rise of the dark web and the increased sophistication of tools used to launch attacks have made the job of managing cyber risk more difficult than ever. Organizations today need a robust cyber security risk management strategy in order to stay on top of all the threats and to respond effectively.

What is Cyber Security Risk Management?

Cybersecurity risk management is the practice of identifying, evaluating and addressing an organization's cybersecurity threats. It involves developing a strategy to evaluate the potential impact of cyber threats to the organization and prioritizing them to ensure that the most critical threats are handled effectively. While it may not be a foolproof method to prevent all cyber attacks, it does help organizations tackle the most critical flaws, threat trends and attacks and mitigate the eventual damage that may be inflicted.

Why is Cyber Security Important?

Many organizations still treat cybersecurity as an afterthought, with policies and processes to be updated in the event of an attack. This can be fatal: without an ongoing review process that assesses the risk of cyber attacks, organizations can be caught off-guard by new and evolving threats. With businesses increasingly moving their operations to the cloud, there is also a need to think about how they can manage the security of their assets in the cloud. Additionally, new regulations are being introduced and changed to keep up with the changing threat landscape, as government bodies and authorities become more aware of how these attacks can impact the economy and the nation's reputation.

What is a Cyber Threat? 
While cyber threats are commonly associated with malicious actors brute-forcing their way into a network, they can also include other scenarios that compromise security, known as "attack vectors". Cyber threats can include any vector that is susceptible to breach and which can be exploited to gain access to an organization's confidential data in order to cause damage.

These threats broadly fall into four categories: adversarial threats (organized groups like hackers, espionage from nation-states and third parties), system failure and the resulting data loss, natural disasters such as floods and earthquakes, and, last but not least, human error in the form of accidents or individuals falling prey to phishing scams and social engineering tactics.

The common threat vectors in organizations include unauthorized access by malicious actors, misuse of access by authorized users themselves (such as insider threats), data leaks resulting from improper security configurations and backup processes, as well as downtime due to service disruption, which can open an organization to bigger threats such as DDoS attacks.

Benefits of Cyber Security Risk Management

Having robust cyber security risk management in place allows an organization to proactively stay on top of cyber security threats and avoid the negative consequences of cyber attacks. By following a concrete set of procedures and policies, an organization can be ready to mitigate the effects of a cyber attack and keep critical business operations running normally. Cybersecurity risk management plans usually involve procedures and policies covering phishing detection, cloud security, protection of websites against DDoS attacks, ransomware and other attacks, as well as monitoring for data leakage and exposed credentials. 
How to Develop a Cyber Security Risk Management Process

Developing a cyber security risk management process requires a strategic approach that prioritizes the most critical threats and lays out the steps to take in order to attend to them. The process should include steps for identifying, assessing and mitigating risks, as well as plans for monitoring them on a regular basis. There are four common stages involved in developing a cybersecurity risk management process:
- Identifying risk – where you examine your organization's IT environment to identify risks that are current or which could affect operations in the future
- Assessing risk – where the identified risks are analyzed to see how much of an impact they will have on your organization
- Controlling risk – where you document the specific plans, procedures and tools to effectively mitigate the risks
- Reviewing – which lays out the steps for evaluating how effective your controls are at mitigating the risks, and how to go about adjusting them on a regular basis

What are the Best Practices in Cyber Security Risk Management?

The exact processes and procedures that an organization prescribes in its cyber security risk management strategy can vary, but they generally cover some essential best practices.

Know Your Organization's IT Environment

Understanding all the components of your IT environment is crucial in order to protect the most sensitive data. Audit your IT environment to document all data, digital assets, devices (including BYOD), endpoints, networks and workflows. Don't forget to account for third-party components and services. Go further and prioritize these assets so you can divert more resources to the most valuable and business-critical ones.

Strictly Monitor Your Environment

Your IT environment is never static. It is important to continuously monitor it as new elements are introduced into the environment or old and obsolete elements leave it. 
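The identifying and assessing stages described above are often operationalised as a simple risk register, scoring each risk by likelihood and impact and ranking by the product. The sketch below uses an invented 1-5 scoring scale and hypothetical risks purely for illustration; real frameworks define their own scales and categories:

```python
# Hypothetical risk register: each entry scored 1-5 for likelihood and impact.
risks = [
    {"name": "Phishing of finance staff", "likelihood": 4, "impact": 4},
    {"name": "Unpatched web server",      "likelihood": 3, "impact": 5},
    {"name": "Datacentre flood",          "likelihood": 1, "impact": 5},
]

def prioritise(risks):
    """Rank risks by score = likelihood x impact, highest first, so that
    controls and budget go to the most critical threats."""
    for r in risks:
        r["score"] = r["likelihood"] * r["impact"]
    return sorted(risks, key=lambda r: r["score"], reverse=True)

for r in prioritise(risks):
    print(f'{r["score"]:>2}  {r["name"]}')
```

The controlling and reviewing stages then attach a mitigation plan to each high-scoring entry and re-score the register on a regular cadence as the environment changes.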
You will especially want to stay on top of regulatory developments, new vendor onboarding and what technologies your internal teams are using.

Identify The Potential Risks

Understand the likelihood of existing threats exploiting a vulnerability and assess the potential consequences. Threats can affect your routine operations through unauthorized access to confidential data and can take the form of hostile attacks automated by bots, human errors, system failures or even accidents. An effective web application firewall can also go a long way towards protecting web applications from threats such as credential stuffing and zero-day vulnerabilities.

Build a Cyber Security Risk Management Plan

Once you identify your risks, develop a plan for responding and escalating. This should include actions for employees and stakeholders to follow, which should be integrated into their training, and it should be updated as and when the need arises.

Prepare and Train Your Employees

An organization's employees can be the weakest link in cyber security if they are not trained properly to implement best practices. So it is crucial to document your strategies, plans and procedures so that all stakeholders can be made aware and trained to manage cyber risks.

Enable the Basic Security Features

Last but not least, there are some basic cyber security services that ought to be enabled, such as an intelligent web application firewall (WAF), automated patching of software and multi-factor authentication (MFA). CDNetworks offers Application Shield, a cloud-based solution that integrates WAF with DDoS protection and Content Delivery Network (CDN) acceleration. This solution has the ability to constantly learn and protect web applications from new malicious actors and attack vectors, so as to keep your business safe and operational.
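Of the basic controls just mentioned, multi-factor authentication is the easiest to illustrate in code. Time-based one-time passwords (TOTP, standardised in RFC 6238) derive a short code from a shared secret and the current time, so a stolen password alone is not enough to log in. The stdlib-only sketch below is a simplified illustration, not any vendor's implementation; it is checked against a published RFC 6238 test vector:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, for_time=None, digits=6, step=30):
    """RFC 6238 time-based one-time password from a base32 shared secret,
    using the default HMAC-SHA1 variant and dynamic truncation."""
    key = base64.b32decode(secret_b32)
    counter = int((time.time() if for_time is None else for_time) // step)
    msg = struct.pack(">Q", counter)               # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                     # dynamic truncation offset
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: secret "12345678901234567890" (base32 below),
# time 59 seconds -> 8-digit SHA-1 code 94287082.
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", for_time=59, digits=8))
```

The server and the user's authenticator app both compute this code independently; because the counter advances every 30 seconds, an intercepted code expires almost immediately.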
1- Describes some text analysis techniques used for information retrieval
2- Describes how to build a recommender system
3- Describes the principles of K-means clustering, Decision Trees, K-NN, Naive Bayes and Neural Networks
4- Introduces WEKA software for classification and recommender systems
5- Introduces the StanfordNLP framework for text analysis

What am I going to get from this course?

Learn how to:
- Use regular expressions to find and replace patterns in text.
- Use the StanfordNLP framework for part-of-speech tagging and named entity recognition.
- Use the GloVe tool to estimate semantic similarity between words.
- Use the TF-IDF weighting scheme in a search application.
- Detect problems where clustering can be used.
- Define supervised learning problems.
- Use WEKA software to solve automatic classification problems.
- Use WEKA software to generate rules for a recommender system.

Prerequisites and Target Audience

What will students need to know or do before starting this course?
- A Bachelor's Degree in computer science, economics, or a related specialty is recommended.

Who should take this course? Who should not?

This course is adapted for:
- A technical person willing to know how to use Artificial Intelligence techniques to solve his/her problems.
- A management person willing to understand the practical applications of Artificial Intelligence in his/her business.
- A technical or non-technical person wishing to start using existing frameworks for text analysis, clustering, automatic classification and recommender systems.

This course is not adapted for:
- A technical person wishing to understand the algorithmic details so that he/she can implement Artificial Intelligence algorithms.
- A technical person wishing to modify/improve existing Artificial Intelligence algorithms.
Around the world, the number and size of datacenters are both growing at a fast pace, and the devices housed in them are consuming more and more power to deliver ever-increasing performance. As a consequence, datacenters are using more and more of the planet's energy to do our processing and storage as well as to keep us entertained and connected.

When we were talking to Liqid, which makes composable infrastructure fabrics, during the SC21 supercomputing conference about its latest hardware and software, Sumit Puri, the company's co-founder and chief executive officer, threw some stats at us. They are relevant as we consider Treehouse, an energy-aware datacenter application scheduler being proposed by researchers at Microsoft Research, MIT, Columbia University, the University of Michigan, and the University of Washington, who have just published a paper on the idea.

The numbers are striking. There were around 500,000 datacenters around the world in 2012, according to Puri, who is on a mission to make datacenters more efficient by driving up the utilization of CPUs, GPUs, and other ASICs, and by 2020 that number had grown to over 8 million. Depending on who you ask, these datacenters now consume anywhere from 2 percent to 5 percent of global energy, and there are projections that somewhere between 20 percent and 25 percent of the world's energy will be consumed by datacenters by 2025. (Such stark projections have been made in the past and have not materialized, by the way, because of the relentless innovation in making datacenters and the systems inside of them more energy efficient.)

So just how much energy consumption by datacenters are we talking about? It's a lot. 
While the hyperscalers and cloud builders have built ever-more-efficient infrastructure, according to data compiled by Statista, in 2015 these two groups together consumed 93.1 terawatt-hours of juice, compared to 97.6 terawatt-hours for traditional, enterprise datacenters as a group. Power usage by traditional datacenters had fallen by two-thirds by 2021, to a mere 32.6 terawatt-hours, but power usage by the hyperscalers and cloud builders climbed by 70 percent over that same time to reach 158.2 terawatt-hours. With Moore's Law running out of juice and devices getting hotter as they drive higher performance, datacenter power consumption can only keep rising.

And that is the impetus behind the Treehouse project, which bills itself as a means to make datacenter software "carbon-aware," in keeping with the common language used among those who (rightly) espouse energy efficiency. We don't like that term because in many cases the energy supplied to datacenters does not come from burning coal or natural gas, but quite purposefully comes from wind, solar, and hydroelectric generation; and even though no one talks about it, if you plug into the power grid in the United States, Europe, or Japan, you are getting juice that comes from nuclear reactors, too. Yes, we know that energy forms often get converted to carbon equivalents to make comparisons. Nonetheless, no one would argue that energy efficiency is not a big deal for datacenters, and we think "energy-aware" is a better term for what the Treehouse project is proposing.

According to the researchers who are starting the Treehouse project, datacenter application demands for compute, storage, and networking are growing so fast that they are going to outpace expected energy efficiency improvements in the coming decades. 
We have already done server consolidation using virtualization and containerization, as well as new ways to distribute power to gear in the datacenter and innovative cooling techniques, to keep datacenter energy usage from exploding more than it has. Now, argue the Treehouse researchers, the time has come to dig into the software and make it aware of the energy it consumes before it eats the watts. Here are the gaps they see: In essence, the Treehouse effort seeks to treat energy like a first-class datacenter resource, alongside compute, storage, and networking. The way they see it, a number of things have to be developed and agreed upon by the IT community for energy-aware datacenter application scheduling – meaning not only finding the right resources for running a particular piece of software, but also being cognizant of its energy consumption while it is running – to become a reality. The first thing is to have a universal mechanism for tracking energy usage by software on all hardware. The second is to have a common interface for expressing the service level agreements (SLAs) for application software so SLAs and energy efficiency can be interplayed. Sometimes, of necessity, a job needs to be run relatively inefficiently because, more than anything else, it needs to run now. Every datacenter operator understands this, and every end user demands this. The researchers also say that we need to develop a common, fine-grained, fungible unit of execution called a μfunction to enable more efficient and portable usage of hardware. Measuring the energy consumption of applications is tricky, say the researchers, in that there are direct energy costs for running code in user space on a processor with memory, but also associated energy costs from using the operating system and middleware, storage device access, and network transport of data. 
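The interplay between SLAs and energy that the researchers describe can be sketched as a toy placement policy: meet the latency SLA first, then minimize estimated energy. The node list, latency figures, and per-request energy costs below are invented for illustration and are not part of the Treehouse design.

```python
# Toy sketch of energy-aware placement: pick the node that meets an
# application's latency SLA at the lowest estimated energy cost.

def place(app_sla_ms, nodes):
    """nodes: list of (name, expected_latency_ms, est_joules_per_req)."""
    feasible = [n for n in nodes if n[1] <= app_sla_ms]
    if not feasible:
        # The SLA wins over efficiency: fall back to the fastest node.
        return min(nodes, key=lambda n: n[1])[0]
    # Among SLA-compliant nodes, minimize estimated energy per request.
    return min(feasible, key=lambda n: n[2])[0]

nodes = [
    ("gpu-node", 2.0, 90.0),   # fast but power-hungry
    ("cpu-node", 8.0, 25.0),   # slower, cheaper energy-wise
    ("arm-node", 15.0, 10.0),  # most efficient, slowest
]

print(place(10.0, nodes))  # a 10 ms SLA can be met at lower energy: cpu-node
print(place(3.0, nodes))   # only gpu-node satisfies a 3 ms SLA
```

A real scheduler would fold in queueing, contention, and the energy provenance estimates discussed next, but the shape of the decision is the same.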
They call this combined power consumption profile the energy provenance of an application, and it looks like it will require some AI assistance to get it right. “Since it is difficult to directly measure the lifecycle energy provenance of individual applications through hardware mechanisms alone, we believe it will be necessary to construct a supervised machine learning model to estimate the energy provenance of the application, given its resource usage,” the Treehouse researchers write. “The input (or features) of the model will be metrics that are easily measured in software, including the network bandwidth (for switches and network interface cards), bytes of storage and storage bandwidth (for memory and persistent storage) and accelerator cycles, as well as the type and topology of hardware the application runs on.” Integrating SLAs into the datacenter-wide application scheduling stack is intuitively obvious, but the μfunctions may not be so obvious. As we all know, there is a lot of stranded resource in the datacenter due to static provisioning and overprovisioning, and the best way to improve energy efficiency is to fully utilize the resources that are burning juice. (You never turn a server off, but you always find it something to do, as James Hamilton of Amazon Web Services has said many, many times. Once you spend money on it, you have to work a server to its technical and economic death.) Applications are currently too coarse-grained to allow for efficient packaging and deployment on hardware, so Treehouse is trying to intersect with evolving containerized microservices and serverless application development and push this finer-grained approach and use it to bin pack smaller chunks of software onto shared hardware resources. The plan is to use an RPC-based API for μfunctions and weave that together with the SLA and energy budget to determine when these μfunctions run and where they run. 
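The supervised model the researchers describe could, in the simplest case, be an ordinary least-squares fit from software-visible resource metrics to measured energy. The feature set and training data below are synthetic stand-ins, assuming a linear cost per unit of each resource.

```python
import numpy as np

# Sketch: estimate per-request energy provenance from resource metrics.
rng = np.random.default_rng(0)
n = 200
# features: [cpu_cycles (1e9), net_bytes (MB), storage_bytes (MB)]
X = rng.uniform(0, 10, size=(n, 3))
true_w = np.array([4.0, 0.5, 0.2])           # hidden per-unit energy costs
y = X @ true_w + rng.normal(0, 0.1, size=n)  # "measured" wall-plug energy

# Ordinary least squares recovers the per-resource energy coefficients,
# roughly [4.0, 0.5, 0.2] joules per unit here.
w, *_ = np.linalg.lstsq(X, y, rcond=None)

# Estimated energy provenance of a new request:
est = np.array([2.0, 3.0, 1.0]) @ w
print(np.round(w, 2), round(float(est), 2))
```

The paper also lists hardware type and topology as model features; a production model would likely be nonlinear, but a linear fit shows the idea of attributing joules to software-level metrics.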
These μfunctions will, as the term suggests, operate on the scale of microseconds. “In order to exploit fine-grained variations in resource usage and concurrency, we plan to support microsecond-scale invocations of µfunctions, an improvement of several orders of magnitude over existing serverless systems.” To do this, the researchers say, will require some hacking. First, it takes too long to cold-start serverless functions today. Even after optimizations, Firecracker, the lightweight virtualization layer for serverless applications, takes at least 125 milliseconds to fire up. The fact that RPC protocols are built on top of the HTTP Web protocol doesn’t help, either. It’s just too slow. There also needs to be some way to abstract μfunctions sufficiently that they can run across different processing engines and use different memory and storage as they become available, and this will require agreement on an intermediate representation for both compute and memory. Something represented by this: Isn’t that pretty? Charts are easy to make. New datacenter-spanning application and system architectures, not so much. And, in the long run, the Treehouse researchers say they will need a new energy-aware and energy-optimized operating system and runtime environment for datacenter applications. Timothy Roscoe, professor in the Systems Group of the Computer Science Department at ETH Zurich, argued brilliantly last summer at the OSDI ’21 conference that it was time to start doing research and development in operating systems again, and this might be a good place to start. Treehouse will, in short, be a massive project.
Although there’s “no clear evidence that use of contact tracing apps will help us contain the spread of COVID-19,” according to digital rights NGO Access Now, governments around the globe have nonetheless decided that’s what they are going to try. So-called track and trace apps are in various stages of deployment in different countries – Privacy International has done an impressive job keeping on top of the dozens of different developments – but the biggest debate that has emerged is centralized versus decentralized tracking. There are several ways a smartphone can track someone’s location, but for the purposes of fighting the COVID-19 pandemic, Bluetooth technology has come out on top. Yet despite this consensus, there are several different ways of tracking and storing the data needed to assess the spread of the virus. Essentially, centralized tracking would see all the data uploaded to a centralized database and notifications to users managed from there, whereas with decentralized tracking, the data remains on the users’ own device. MEP Birgit Sippel, who firmly supports a decentralized approach, said, however, that “currently, there is a serious lack of clarity on how these apps would function and what their added value would be.” “Any potential app would have to be voluntary and have data protection, data minimization and privacy built-in by design. Non-users must not face any disadvantages, such as being denied access to shops or transport systems, as a result of their free choice. Apps would have to be used for the sole purpose of contact tracing, with no access for commercial players or law enforcement authorities to the data,” she said. Most European countries, including front runner Germany, have opted for the more privacy-protecting decentralized approach of doing “proximity matching” directly on people’s own phones. Controversially, however, the UK Government has begun testing a centralized app. 
“They did this without explaining how they will minimize privacy risks,” said the Open Rights Group (ORG), which has made a legal request for a Data Protection Impact Assessment. As well as the data protection concerns raised by the centralized database, this approach also risks being incompatible with other apps developed around the continent. “A fragmented and uncoordinated approach to contact tracing apps risks hampering the effectiveness of measures aimed at combating the COVID-19 crisis, whilst also causing adverse effects to the single market and to fundamental rights and freedoms,” warns the European Commission’s “common EU toolbox” – something it felt was necessary to publish as member states started to take wildly differing approaches. EPP MEP Axel Voss went even further, calling for “a single European app, which is in line with our data protection standards.” “We must minimize the risk of fragmentation, as these apps will be used across the EU as soon as the European borders are open again,” he said. Indeed, the Commission toolbox insists that apps should be interoperable. Downloading the app should also be voluntary, according to the toolbox. However, the World Health Organization and Oxford University researchers both say that in order to be effective, any app would have to be used by at least 60% of the population. In other words, says Access Now, “these apps cannot be helpful unless people trust that they are not a vector for harmful surveillance and therefore feel free to put them to use.” It’s worth noting that “voluntary” doesn’t just mean “not legally required”. For example, if employers start insisting that a person’s job depends on them installing the app, it can hardly be considered truly voluntary. Whether or not people will feel comfortable downloading an app will strongly depend on how secure they feel it is, not just in terms of where their data is stored and who can legally access it, but also how long it is stored for. 
The risks do not just occur at the time the data is collected, but continue as long as it is stored and continues to be a potential target for malicious actors. Again, the Commission toolbox wants specific technical requirements for encryption, communications security, user authentication, and so on to be overseen by ENISA, Europe’s cybersecurity agency. Louis-James Davis, CEO of cybersecurity company VST Enterprises, said that hacking should be a major concern whether data is centralized or decentralized. “The issues surrounding a hack and the manipulation of data from a rogue state is a major security issue and concern, given all of what we know about the interference with election results. Such manipulation could further damage the country with the perceived threat of a second or third wave of the virus forcing the country into tough lockdown measures again and creating economic hardship,” he said. Meanwhile, with decentralized apps, “each phone or smart device handset are themselves subject to a wider brute force attack using Bluetooth hacks and malware,” he added. But even the most privacy-protecting, secure app will not prevent the spread of COVID-19 if people don’t use them diligently – and many will have legitimate reasons for not wanting to leave an app switched on on their phone all the time. Some people do not have smartphones at all. Many will stop using an app if they perceive it is preventing them from accessing other services such as public transport. And all that is assuming they will even work as designed. There are inherent differences between what Bluetooth proximity tracking detects and the actual likelihood of catching the virus. Bluetooth can detect signals through walls, floors and glass, cases where COVID-19 transmission is impossible. It is therefore likely that apps will record a number of false positives. 
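A hedged sketch of why Bluetooth proximity is so error-prone: apps typically infer distance from received signal strength (RSSI) using a log-distance path-loss model, and the same reading maps to very different distances depending on walls and bodies. The calibration constants below are illustrative, not taken from any particular app.

```python
# Toy RSSI-to-distance estimate using the log-distance path-loss model.
TX_POWER = -59      # RSSI measured at 1 m, in dBm (a typical calibration)
PATH_LOSS_N = 2.0   # free-space exponent; walls and bodies change it a lot

def estimated_distance(rssi_dbm, n=PATH_LOSS_N):
    """Estimate distance in meters from a received signal strength."""
    return 10 ** ((TX_POWER - rssi_dbm) / (10 * n))

# The same -75 dBm reading implies quite different distances depending
# on the environment -- one source of false positives and negatives:
print(round(estimated_distance(-75, n=2.0), 1))  # roughly 6 m in free space
print(round(estimated_distance(-75, n=3.0), 1))  # roughly 3 m with obstructions
```

A glass wall barely attenuates the signal, so two people separated by it can register as "close contacts" even though transmission is impossible.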
If the data is stored on an individual’s own device that may allow them to make a more informed assessment of the likelihood of infection in a way that a notification from a centralized database may not. The speed at which people receive notifications is also important. If someone is not notified that they may have come in contact with COVID-19 for several days, they may well feel that the damage has already been done. Using the technology at our disposal to help prevent the spread of COVID-19 is a no-brainer. But overreliance on fallible apps, that in themselves pose risks, is not the solution either. The virus is spread by humans and it is the actions of human beings, not technology, that will determine how, when and if, it is overcome.
Women and the tech industry have historically had a difficult relationship. Recent years have seen significant progress, yet still today there is a gap to close. Today in the USA and EU there is still a notable 16% wage gap between men’s and women’s salaries in the technical sphere. This means that for every dollar earned by a man, a woman earns just 84 cents. We tend to forget that it’s thanks to women that we’ve achieved some huge technical milestones. We were able to land on the moon (thanks, software engineer Margaret Hamilton) and discovered radioactivity (shoutout to Marie Curie). Nowadays we’re also able to instruct a machine to perform tasks for us, aka programming (thanks, Ada Lovelace). In celebration of all the great women working in technology, we’d like to highlight some of the top communities supporting women in tech. 6 organizations and initiatives aimed at empowering women in tech The National Center for Women & Information Technology (NCWIT) is the farthest-reaching network of leaders and educators focused on advancing innovation and computer science. The community aims to support education programs at every level. Participation in the network is mainly for organizations that want to adopt a more gender-equal approach in their activities. Individuals can join the network as volunteers. Girls Who Code The community aims to narrow the gap for women working in tech. They act via school and college programs and home learning, with everything offered for free to any girl or woman willing to learn. As an individual, you could first leverage the material they share on their website. Second, you could be tutored at one of their clubs spread across the US and UK. Another option is to join a 2-week summer course (virtual, at the time of writing). Learn more about Girls Who Code. 
League of Women Coders League of Women Coders (LWC) is a US-based (NYC and DC) collective, organizing recurring meetings where women can gather together to hack on projects, give casual 5-minute lightning talks, ask all of their technical questions, and toss around ideas together. It’s a great group for improving coding skills while enjoying the company of people who are on the same page. Girls Develop It Girls Develop It (GDI) is a nonprofit organization that creates welcoming, supportive opportunities for women and non-binary adults to learn software development skills. It’s not always easy to start from scratch on a brand-new topic, and computer science and software development are not the easiest fields to break into, but a community like this one eases the onboarding process a lot. These programs are not free, but they provide a full-time tutor and a complete learning path to master the coding skills you need to enter the job market. Women Who Code While talking about women in tech, we couldn’t pass over Women Who Code (WWCode). This is a network of IT professionals and developers supporting all women in tech and development via scholarships, education at large, and a job board of hiring companies. Their main goal is to redefine the industry so that women are equally represented as leaders, executives, founders, VCs, board members, and software engineers. Women In Tech WIT is surely among the biggest nonprofit organizations working to bridge the gap in technology roles. According to Eurostat, women represent only 17% of the people working in STEM roles (Science, Technology, Engineering, Mathematics) across Europe. WIT works actively to invert the trend. To achieve this goal, they put effort into four main areas: education, business, social inclusion, and advocacy. They promote activities for each of them. You can start joining their activities for free with a mentoring program. 
Other options are to visit their job board to find a new job, or to place a job offer if you are an employer. Promoting more Women in Tech We have high hopes that we’ll see more women thriving in the tech industry. The number of education and support organizations promoting women in tech is growing too. To close the gap, all of us must be committed to educating about gender equality, which will help continue the progress. We also need to be committed to advancing the careers of women in technology. To continue the progress, we need to see more women in tech, because it will inspire future generations of women. In other words, we need to see more women employed in the industry. We also need to see them leading online coding courses and holding top-level executive positions at tech companies.
The fourth industrial revolution is well underway — and it’s thanks to a confluence of technologies we wrote off as science fiction just a few short years ago. Previous industrial revolutions focused on the introduction of a handful of new tools and processes, but Industry 4.0 requires an unprecedented number of system integrations within organizations and smart investments between partners in the name of improved collaboration and data-sharing. Let’s take a look at some of these technologies now and what kind of implications they have for the future of manufacturing. What is industry 4.0? To understand Industry 4.0 and where it’s taking us, it’s useful to first get a sense of where we’ve been. Here’s a crash course in the first three industrial revolutions:
- Industry 1.0: During the late 1700s and into the 1800s, the manufacturing world focused its attention on optimizing labor by introducing more efficient tools and processes, including steam engines and even animal-assisted mechanical equipment.
- Industry 2.0: In the early 20th century, the use of steel and electricity took off in factories. The advent of electrified equipment gave rise to assembly lines, which provided a huge boost to product throughput and overall productivity.
- Industry 3.0: It wasn’t until the third industrial revolution of the late 1950s that manufacturers began to pair their electrically powered manufacturing infrastructure with computer control and digital systems. Industry 3.0 planted the seeds for the truly automated future we now anticipate as we embark on the fourth industrial revolution.
Each of these distinct manufacturing eras solved the problems of the day using tools that were available at the time. 
One thing they had in common, however, was the quest for greater interconnection between manufacturing systems, improved information accessibility and transparency, faster and more efficient production models, and the introduction of equipment types requiring less and less human intervention. Industry 4.0 is the logical culmination of each of these prior manufacturing eras, and yet, something altogether more impressive. Its potential is so vast and world-changing that its many separate innovations deserve to be explored separately, in greater detail. The industrial internet of things (IIoT) The IIoT is one of the most promising and talked-about aspects of Industry 4.0. In fact, it’s essentially the backbone of the entire endeavor. Broadly, it describes a way to connect digital and physical systems to produce an intelligent, transparent and efficient type of infrastructure that’s far more than the sum of its parts. You’ve probably heard the term “smart factory.” If you understand the concept, you know it’s the future of manufacturing. However, we won’t get there without significant investments in digital-physical systems and IIoT devices. IIoT tools are what allow factories to monitor their own environmental conditions—humidity, temperature, lighting control, etc.—and make changes based on occupancy or operational needs, either automatically or with remote human interaction. That’s hardly the end of it, though. Factories that invest in IIoT technologies also enjoy the benefit of greater insight into their operational data. Connected material handling equipment can act as a governor for product throughput when it detects a slowdown or a bottleneck—on a conveyor belt moving products, for example. IIoT devices also make maintenance efforts smarter and more proactive. Factory equipment is increasingly able to gauge its own performance and signal when maintenance is required, usually long before a total equipment failure that could leave your operation crippled. 
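The self-monitoring maintenance pattern described above can be sketched in a few lines: a machine streams readings and flags itself for service when a rolling average drifts past a threshold. The vibration threshold and readings here are invented for illustration.

```python
from collections import deque

class VibrationMonitor:
    """Toy IIoT sensor agent: request maintenance when the rolling
    average vibration exceeds a configured threshold."""
    def __init__(self, threshold_mm_s=7.0, window=5):
        self.threshold = threshold_mm_s
        self.readings = deque(maxlen=window)  # rolling window of samples

    def ingest(self, reading):
        self.readings.append(reading)
        avg = sum(self.readings) / len(self.readings)
        return avg > self.threshold  # True => signal for maintenance

monitor = VibrationMonitor()
stream = [4.1, 4.3, 4.2, 6.9, 8.5, 9.1, 9.4]  # vibration rising over time
alerts = [monitor.ingest(r) for r in stream]
print(alerts)  # the alert trips only once the trend is sustained
```

The rolling average is what makes the signal proactive rather than noisy: a single spike does not trip the alert, but a sustained drift does, well before a total failure.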
Industry 4.0 is much bigger than these few examples. In fact, it stretches into each of the following technology areas, too, and provides a sort of central nervous system for other investments to work together. The Internet of Things is what gathers meaningful data, but the cloud is what grants that data mobility. First and foremost, cloud platforms facilitate data sharing, including across multiple production facilities, within departments under a single roof and between business partners. Some of the terms you’ll hear used in conjunction with cloud computing include edge and fog computing. These are similar to one another, and they piggyback on the interconnected infrastructure made possible by the IIoT. Fog computing simply refers to a network architecture that is decentralized across many nodes, including IIoT devices. As the name suggests, edge computing moves the intelligence-gathering and analytical apparatus toward the edge of your operations and closer to the source of the data. Big data and analytics Stated simply, big data is the process of analyzing information gathered from a variety of sources. Industrial control systems and connected machinery are two potential sources. Others include customer relationship management software, enterprise planning platforms, and even data gleaned from web traffic, search engine results, social media, customer service interactions and more. The ultimate goal of big data and analytics is to move toward making more decisions in or near real-time. Not surprisingly, 70 percent of the most successful distribution companies have some kind of analytical capabilities baked right into their enterprise planning systems. AI and autonomous robots As we’ve seen already, most of these technologies are nested within one another like Russian matryoshka dolls. IIoT devices feed real-time information into the cloud. The cloud distributes this data to analytical platforms and wherever else it’s needed. 
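The edge-computing idea above can be made concrete with a small sketch: rather than shipping every raw sensor sample to the cloud, an edge node reduces a batch to a compact summary and uplinks only that. The payload shape is a made-up example.

```python
def edge_summarize(samples):
    """Reduce a batch of raw readings to one compact summary record."""
    return {
        "count": len(samples),
        "min": min(samples),
        "max": max(samples),
        "mean": round(sum(samples) / len(samples), 2),
    }

raw = [21.0, 21.2, 20.9, 35.7, 21.1]  # temperature samples, one spike
summary = edge_summarize(raw)
print(summary)  # one record uplinked instead of five raw samples
```

The max field still exposes the anomalous spike to cloud-side analytics, so moving computation to the edge saves bandwidth without hiding the events that matter.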
Big data provides the means for multiple facilities, partners and even industries to engage in closer collaboration and information-sharing. Artificial intelligence is helping smart manufacturers and factories bring together each of the aforementioned technologies and move us closer to a world without the burden of human error and unnecessary labor. As we speak, artificial intelligence is becoming indispensable for predicting customer behaviors, anticipating machine failures, automating inventory processes and raw material reorders, and much more. The future holds even more potential. Generative design is emerging as a way to create ever-more-efficient product designs within certain fixed parameters. It works like this: A human engineer uses generative design software to specify qualities like material usage, desired tolerances of the final design and even cost requirements. Then, the AI within the program generates one or more physical designs that meet the desired criteria. As AI comes of age before our eyes, we’re witnessing a proliferation of autonomous technologies, including robotics. Collaborative robots, also known as cobots, are an appealing bridge technology toward fully robotic factories. Cobots work alongside human workers and ease the burden of physically demanding processes. In assembly plants, collaborative robots can lift and hold heavy objects, such as engine parts or automobile panels, while human workers perform work that requires finesse and dexterity, such as welding them in place. In other factory environments, we can expect cobots to perform a larger share of inspection duties and other tasks that require considerable attention to detail and where errors can be costly. AR and simulations Augmented reality is a technology with vast potential in a variety of manufacturing-related fields. 
In fact, AR devices like Microsoft’s HoloLens and Google Glass are being targeted squarely at industrial movers and shakers who desire greater employee productivity, safety and accuracy. In a manufacturing plant, AR headsets and goggles allow technicians and engineers to overlay schematics and assembly instructions directly atop the real world. Consider an automobile at a certain stage of assembly. Assemblers equipped with AR can see detailed, “exploded” views of the car within their field of vision. Repair technicians can place manuals and detailed checklists in their peripheral vision to ensure they don’t miss any steps as they complete their work. Augmented reality allows for incredibly detailed simulations that mirror the real world without the same risk of injury or equipment failure. Machine operators can verify calibration settings without risk of damaging the machine, and also cut down on startup time thanks to a reduction in real-world trial-and-error. Additive manufacturing, including 3D printing, has a long way to go before it’s scalable in the way previous-generation manufacturing techniques, like injection molding, are. Nevertheless, it’s quickly expanding from niche applications into what may be, in the near future, business as usual for manufacturers across the world. One of the most appealing ways to put additive manufacturing to work involves rapid prototyping. Consider the benefits of using generative design and 3D printing to quickly produce and test products in the real world before ramping up production. Afterward, many 3D printer materials—known as filaments—used in prototyping can be recycled and used again. Resins, nylons, polystyrene and plastics—such as ABS and PLA plastic—are some of the most common materials used in 3D printing today. However, industrial-scale printers can just as easily churn out replacement parts made from aluminum, steel and other metals, and with increasingly tight tolerances and elaborate designs. 
Even wood, stone and bamboo may be used in conjunction with eco-plastics as printer filaments for environmentally friendly products with undeniable tactile appeal. Additive manufacturing is an ally to small-batch manufacturers first and foremost, but we’re inching closer to truly scalable printer models capable of combining the deposition of multiple materials into a single product. Very few complex products in the world today are fashioned from a single element, which means producing items like this at scale requires multiple machines. Having one device capable of depositing several materials to form a complex product is expected to be another game-changer in this already dynamic industry. Horizontal and vertical systems integration Ultimately, each of these areas of technological development serves the same goal: cross-company cohesion between departments and employee functions, and across multiple companies and supply-chain partners. Reaching this level of horizontal and vertical systems integration requires robust cyber-physical infrastructure. A true smart factory, for example, would likely need each of the following types of system integrations:
- Incoming freight items, such as unfinished goods, are loaded from trucks onto automated rollers and pass under RFID scanners. These scanners automatically verify the count and send that information to an intelligent facility system. The system diverts the freight to wherever it’s needed — whether to temporary storage or directly to the assembly floor.
- Multiple businesses working in unison within a shared supply chain can engage in systems integration to automate parts reordering and to sync their pickup and delivery schedules.
- Multiple industrial systems come together in an emerging trend known as “digital twins.” Manufacturing companies live and die according to the leanness with which they conduct their operations, and that means having no more inventory on hand at a time than is necessary. 
Enterprise-planning software draws conclusions about future inventory levels based on past and current partner and customer data. Automated manufacturing systems call up digital schematics — digital twins — and send them to factory equipment. The computerized assembly and handling equipment then works until demand is met. Vertical systems integration within factories, and horizontal systems integration across value-chain partners, are the inevitable future of manufacturing. This network of interconnected technology systems means cleaner, leaner and more efficient production. It also delivers significant savings in the form of lower error rates, as well as fewer transmission mistakes between departments and partners. Achieving this future involves compatibility between industrial control systems and digital management platforms. This, in turn, requires collaboration between partners or the use of APIs. However, these are easy obstacles to overcome once the benefits are made plain to each party. Cybersecurity in Industry 4.0 Of course, all this connectivity raises significant concerns about cybersecurity and will, in turn, yield new technologies designed to keep intellectual property, client information and operational data safe from prying eyes. Several key points need to be addressed to keep advanced manufacturing infrastructure safe and secure. Performing identity verification and enforcing access control to factory networks and cyber-physical systems is a must. Reliable and encrypted communication between partners is essential for protecting data in transit. The data centers that store and analyze industrial information need to have robust digital and physical protection against would-be hackers, both on the premises and operating remotely. Lest we forget, some of these technologies are still new enough that their security cannot be taken for granted. 
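The automated-reordering logic described above often boils down to a classic reorder-point calculation over demand history. A minimal sketch, with invented demand numbers and lead times:

```python
def reorder_point(daily_demand_history, lead_time_days, safety_stock):
    """Stock level at which an automated parts reorder should fire:
    expected demand over the supplier lead time, plus a safety buffer."""
    avg_daily = sum(daily_demand_history) / len(daily_demand_history)
    return avg_daily * lead_time_days + safety_stock

history = [40, 55, 48, 52, 45]  # units consumed per day, last 5 days
rp = reorder_point(history, lead_time_days=3, safety_stock=30)
print(rp)  # reorder automatically when on-hand stock falls to this level
```

Real enterprise-planning systems layer demand forecasting and partner data on top of this, but the lean-inventory goal is the same: hold just enough stock to cover the lead time plus uncertainty.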
Even industrial-scale heating and cooling equipment, if it makes use of the IIoT for remote operation and diagnostics, is a potential point of failure — and a way into your critical systems for cybercriminals who know how to exploit them. Building smart factories and engaging in greater collaboration all the way up and down the supply chain means vetting equipment, as well as hardware and software providers, very carefully. Major tech companies like Microsoft, Amazon, IBM, Cisco, GE and Oracle are staking their futures on providing connectivity solutions for the industrial internet of things and tomorrow’s industries. However, it’s ultimately up to facility managers, chief technology officers and CEOs to understand the underlying technologies well enough to make reasoned decisions when it comes to choosing their partners wisely. Industry 4.0 will change the world It’s right to think of these technologies as the beginning of another industrial revolution. Together, they represent a top-to-bottom reimagining of the entire manufacturing industry. Don’t write them off as futuristic, though — they’re already here and providing competitive advantages for companies that recognize their potential.
Cisco Meraki uses IPSec for Site-to-Site and Client VPN. IPSec is a framework for securing the IP layer. In this suite, modes and protocols are combined to tailor the security methods to the intended use. Cisco Meraki VPNs use the following mode and protocol for Site-to-Site VPN communication:

Mode: Tunnel

In tunnel mode, the entire IP header and payload is encapsulated. This means that a new packet header is added and the packet itself can be encrypted, as opposed to just the packet’s data. This allows traffic to be passed in its entirety through a secure channel between two endpoints.

Protocol: Encapsulated Security Payload (ESP)

ESP is the wire-level protocol designed to secure communication by encrypting the encapsulated data, and it can also provide authentication. Using ESP in tunnel mode allows for encryption of the full packet. To an entity viewing this traffic externally, the only clear-text data in the packets is the new IP header and the ESP header.

IPSec can also be used in transport mode and with the Authentication Header (AH) protocol. Each mode can be used with either protocol, but the combination above is used because it best suits a secured VPN connection.

Each side of an IPSec communication needs to share secret values to secure traffic. These keys are used to match encryption and hashing methods. There are two distinct methods that Cisco Meraki devices use to establish these keys. The foremost method that Cisco Meraki devices use to establish shared secrets is through the Cisco Meraki cloud infrastructure. All Meraki devices have a secured tunnel back to the Cisco Meraki cloud. This allows Cisco Meraki devices to establish all information needed to create an IPSec tunnel through this mutually trusted source. A method known as UDP hole punching is then used to create these VPN tunnels. For more information on how the Meraki MX uses UDP hole punching, please refer to our documentation on Automatic NAT Traversal.
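The tunnel-mode layout described above can be sketched in a few lines of Python. This is purely illustrative — a toy XOR stands in for the negotiated cipher, and the headers are simplified — and is not how Meraki devices implement ESP:

```python
import struct

def esp_tunnel_encapsulate(original_packet: bytes, spi: int, seq: int,
                           new_ip_header: bytes, key: bytes) -> bytes:
    """Illustrate ESP tunnel-mode layout: the entire original packet
    (header + payload) is encrypted; only the new IP header and the
    ESP header (SPI + sequence number) remain in clear text."""
    esp_header = struct.pack("!II", spi, seq)  # 4-byte SPI, 4-byte sequence number
    # Toy XOR "cipher" stands in for the negotiated encryption (e.g. AES).
    encrypted = bytes(b ^ key[i % len(key)] for i, b in enumerate(original_packet))
    return new_ip_header + esp_header + encrypted

# A fake inner packet: 20-byte IP header followed by a payload.
inner = b"\x45\x00" + b"\x00" * 18 + b"secret payload"
outer_hdr = b"\x45\x00" + b"\x00" * 18  # new (outer) IP header
pkt = esp_tunnel_encapsulate(inner, spi=0x1000, seq=1,
                             new_ip_header=outer_hdr, key=b"k3y")

# Only the outer IP header and the ESP header are readable on the wire.
assert pkt[:20] == outer_hdr
assert struct.unpack("!II", pkt[20:28]) == (0x1000, 1)
assert b"secret payload" not in pkt  # the inner packet is not clear text
```

The point of the sketch is the layering: everything after the ESP header, including the original IP header, is opaque to an outside observer.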
Cisco Meraki devices also utilize the well-known IKE method for negotiating the essential information for IPSec connections. This method is used for client VPN and non-Meraki site-to-site VPNs. Internet Key Exchange (IKE) is the protocol Cisco Meraki uses to establish IPSec connections for non-Meraki site-to-site and client VPNs. When a VPN endpoint sees traffic that should traverse the VPN, the IKE process is started. IKE is broken down into two phases:

Phase 1

The purpose of this phase is to create a secure channel using a Diffie-Hellman key exchange. This secure channel is then used for further IKE transmissions. Phase 1 is based on the ISAKMP framework. In the above figure, we can see the Cisco Meraki Event Log entries that will typically accompany the IKE process. Please note that in a successful exchange, the logs should display “ISAKMP-SA established” along with some information specific to that association. Phase 1 has two possible modes: main mode and aggressive mode. Main mode consists of three exchanges to process and validate the Diffie-Hellman exchange, while aggressive mode does so within a single exchange. Issues with this phase are usually related to public IP addressing, pre-shared keys, or encryption/hash configuration.

Phase 2

Using the channel created in phase 1, this phase establishes IPSec security associations and negotiates the information needed for the IPSec tunnel. This phase can be seen in the above figure as “IPsec-SA established.” Note that two phase 2 events are shown; this is because a separate SA is used for each subnet configured to traverse the VPN. This phase has only one mode on the Cisco Meraki platform, called quick mode. Issues with this phase are typically seen when subnets are not matched on each side of the tunnel or permitted encryption/hash settings are mismatched.

For information on troubleshooting Cisco Meraki VPN, please refer to the following articles: For more information on configuring Site-to-Site VPN tunnels:
Internet Protocol (IP) addresses are assigned to hosts so they can communicate with devices on different networks. IP addresses assigned to clients on Cisco Meraki networks are viewable on the Network-wide > Monitor > Clients page for MR Access Points, MX Security Appliances and MS Switches, or from the command prompt using ipconfig on Windows devices.

Calculating IP addresses

IPv4 addresses consist of a network portion and a host portion. The address is split based on the network's subnet address. This concept is explained using the following example IP address configuration:

IP Address: 192.168.0.1
Subnet Mask: 255.255.255.0
Default Gateway: 192.168.0.254

In this example, the host client that is configured with these settings can be reached at address 192.168.0.1. When the client does not know how to reach another client (i.e. the other client is not in the same subnet), the client will send traffic to the default gateway, which is 192.168.0.254. A subnet mask tells the client which subnet it is in and is also used by the network administrator to configure networks with a specific number of addresses. In this case, the subnet mask of 255.255.255.0 (/24 in CIDR notation) tells us that the network portion of the address is the first 3 octets (192.168.0). Each host on this network will always have a prefix of 192.168.0. The only portion that will change is the last octet, which will be 1 - 254, with .255 being the broadcast address for all clients in the network. As the subnet mask changes, there will either be more hosts and fewer networks, or fewer hosts and more networks. For example, the private address range of 10.0.0.0/8 utilizes a smaller subnet mask (255.0.0.0). This allows the network administrator to have up to 16,777,214 hosts in one network. The previous example utilized the 255.255.255.0 subnet mask, which only allowed the administrator to have 254 clients in the subnet. Consult the Knowledge Base to learn more about subnetting.
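The arithmetic above can be verified with Python's standard ipaddress module, using the same example configuration:

```python
import ipaddress

# The example configuration from the text: 192.168.0.1 with mask 255.255.255.0.
iface = ipaddress.ip_interface("192.168.0.1/255.255.255.0")
net = iface.network

assert str(net) == "192.168.0.0/24"           # /24 in CIDR notation
assert str(net.broadcast_address) == "192.168.0.255"
assert net.num_addresses - 2 == 254           # usable hosts (minus network + broadcast)

# A shorter prefix such as 10.0.0.0/8 trades networks for hosts:
big = ipaddress.ip_network("10.0.0.0/8")
assert big.num_addresses - 2 == 16_777_214
```

Subtracting two accounts for the network address and the broadcast address, which cannot be assigned to hosts.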
IP Address Assignment

IP addresses can be statically or dynamically assigned. Dynamic addressing utilizes DHCP (Dynamic Host Configuration Protocol). This protocol allows clients to turn on their device and obtain an IP address on the network automatically, and is ideal for networks with large numbers of end-user PCs and mobile devices. MX Security Appliances, Z1 Teleworker Gateways, and MR Access Points can be configured to provide DHCP services to network clients. Static IP addressing is used when a device needs the same address each time it connects to the network. This is specifically useful when file servers or network devices need the same IP address for management and access purposes. Consult the product manuals to learn how to statically assign IP addresses to MR Access Points, MX Security Appliances, and MS Switches. Meanwhile, IPv6, whose address space is administered by IANA (the Internet Assigned Numbers Authority), is slowly being implemented on network devices. There are several differences between IPv4 and IPv6. IPv4 addresses are 32 bits and expressed in dotted decimal notation (for example, 192.168.100.1), whereas IPv6 addresses consist of 128 bits expressed in hexadecimal notation (for example, 2001:0db8:85a3:0042:1000:8a2e:0370:7334). Additionally, clients can have multiple IPv6 addresses (a link-local address for the local network, and a routable address). IPv6 also has IPSec built in natively, which allows clients to establish secure connections between each other on both the WAN and the LAN.
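The size difference between the two address families is easy to confirm with Python's standard ipaddress module, using the example addresses from the text:

```python
import ipaddress

v4 = ipaddress.ip_address("192.168.100.1")
v6 = ipaddress.ip_address("2001:0db8:85a3:0042:1000:8a2e:0370:7334")

assert v4.version == 4 and v4.max_prefixlen == 32    # 32-bit address
assert v6.version == 6 and v6.max_prefixlen == 128   # 128-bit address

# ipaddress prints the canonical IPv6 form, with leading zeros dropped:
assert str(v6) == "2001:db8:85a3:42:1000:8a2e:370:7334"
```

Note how the canonical form compresses each hexadecimal group, one of several shorthand rules defined for IPv6 notation.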
Electronic data interchange (EDI) describes the technology, and the accord between business partners, by which companies jointly agree to save costs on human labor and reduce errors by submitting and processing standard business documents in an automated, electronic exchange.

What do I need to know about EDI?

EDI comprises a set of standardized document formats; business partners who adopt these standards create structured data that computers can process without human intervention.

What is EDI used for?

EDI is most often used for routine documents such as purchase orders and invoices, but most routinely exchanged business documents are fair game for EDI transmission, so long as both parties agree to use EDI for that document or transaction type.

How does EDI work?

Business partners, also called "trading partners" in EDI, jointly agree to one of several EDI standards. The standard describes the way that various types of data must be formatted so that each partner's software can process the other's data without ambiguity. Documents are submitted electronically and then processed by an EDI translator, typically either a custom-built or hosted software service, which feeds the information to downstream software applications such as accounting systems or enterprise resource planning (ERP) systems.
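As a concrete illustration of what "structured data without ambiguity" means, the widely used ANSI X12 standard delimits segments and elements with reserved characters (commonly "~" and "*"). The sketch below uses a made-up purchase-order fragment to show how a translator can split a document into machine-readable fields; real translators also handle interchange envelopes, acknowledgments and validation:

```python
def parse_x12(document: str, segment_sep: str = "~", element_sep: str = "*"):
    """Split a raw X12 interchange into segments and elements.
    The first element of each segment is its identifier (e.g. BEG, PO1)."""
    segments = [s.strip() for s in document.split(segment_sep) if s.strip()]
    return [seg.split(element_sep) for seg in segments]

# A fragment of a hypothetical 850 purchase order (values are invented):
raw = "BEG*00*SA*PO123**20220930~PO1*1*10*EA*9.95**VP*WIDGET-7~"
parsed = parse_x12(raw)

assert parsed[0][0] == "BEG"   # transaction beginning segment
assert parsed[1][0] == "PO1"   # line-item segment
assert parsed[1][2] == "10"    # quantity ordered
```

Because both trading partners agree in advance on segment identifiers and element positions, downstream software can pick out "quantity ordered" or "vendor part number" mechanically, with no human interpretation.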
The Space Race is back. SpaceX founder Elon Musk announced on Feb. 27, 2017, an ambitious plan to launch a mission that would carry two paying tourists on a trip around the moon in 2018. A similar NASA mission, using the long-awaited Space Launch System and the Orion spacecraft, is in the planning stages for 2019, according to a February 15 announcement. The lunar mission would use the new SpaceX Falcon Heavy launch vehicle. This rocket can lift more than twice the weight of the Space Shuttle launch system into low earth orbit. The Falcon Heavy’s lift capacity is 54 metric tons, which is enough to put a fully-loaded Boeing 737 with passengers, luggage and fuel into orbit. By contrast, the Space Launch System being developed by NASA will be able to lift 70 metric tons into low earth orbit. Later versions of the SLS will be able to lift 130 metric tons into orbit. By comparison, the Saturn V moon rocket that sent Apollo 11 to its lunar landing was capable of putting 108 metric tons into orbit—about the weight of a fully-loaded Boeing 757. According to a statement by Musk, the company plans to use its Dragon 2 spacecraft for the lunar mission. This is a version of the spacecraft that’s already in use delivering cargo to the International Space Station. The Dragon version 2, developed as part of NASA’s Commercial Crew Program, is designed to carry crew to the ISS under contract with the space agency. The Crew Dragon, as it’s also called, will make its first trip to the ISS later in 2017. The first trip will be unmanned, but subsequent missions will carry replacement crews. According to a statement from SpaceX, the lunar mission will launch once crew deliveries to the space station are underway. In his announcement, Musk said that the lunar mission will launch from the historic Launch Pad 39A—site of the Apollo 11 launch in 1969. SpaceX has leased this launch pad from NASA for a series of missions, including a launch last week of an ISS resupply mission.
The Falcon Heavy launch vehicle is closely related to the existing Falcon 9 rocket. It’s essentially three Falcon 9 cores strapped together, with a total of 27 rocket engines producing over five million pounds of thrust. On a typical launch, all three rocket cores provide maximum power at lift-off. The center core then throttles down while the other two provide most of the lift, and once those burn out the center core takes over to carry the spacecraft almost into orbit. The final push comes from a second stage with a single engine that’s basically the same as the second stage of the Falcon 9.
If you have used a computer, you have likely heard of ransomware. Whether you know it or not, there is a good chance you may have even encountered an attempted attack at one point. Citing data provided by Emsisoft, the New York Times notes that in 2019, 205,280 organizations submitted files that had been hacked in a ransomware attack. This represents a 41 percent increase from the year prior. These attacks have not discriminated against their victims either. They have frozen the data of U.S. government operations, cargo facilities, and hospitals, just to name a few. These troubling statistics further reinforce the fact that ransomware is not going away; in fact, it may be getting worse. Why is this? The sophistication of the modern-day cybercriminal has hit new heights. Ransomware now comes in many damaging variants, and we outline a few of the more common ones below.

What is ransomware?

Before we dive into some common strains of ransomware, it is best to understand what it is more broadly. Ransomware is a type of malware that encrypts a victim's files. The attacker then demands a ransom to restore access to the data; payment is required to obtain a "decryption key". Financial demands can range from a few hundred to hundreds of thousands of dollars. Larger organizations are often slapped with the largest demands, given their financial capability and the mass amounts of data they often store.

3 common ransomware variants to be aware of

Cerber

This type of ransomware is often one of the most difficult to notice. It provides virtually no indication of infection while encrypting your files. Once the process is complete behind the scenes, the results can be devastating. Cerber can infiltrate your device in several ways, including:
- Downloaded in free programs online
- Distributed via links or attachments in emails
- Installed by websites using software vulnerabilities.
This method is often "silent", with very little warning of infection beforehand.

What happens next? Once Cerber ransomware has infected your computer system, it immediately gets to work. It infects files used for the regular day-to-day operation of your system and renders them unusable. Free ransomware removal programs will be ineffective against Cerber, leaving victims with a choice between paying the ransom to retrieve files or enlisting the help of an IT expert.

CryLocker

This type of ransomware seems to be standard fare in terms of data encryption, at least at first. However, CryLocker enjoys adding "insult to injury" as it locks up critical files. It will typically use websites such as imgur.com to host personal information about victims, found within their files. If that wasn't bad enough, CryLocker uses the Google Maps API to pinpoint a victim's location using nearby wireless SSIDs. This is used as a scare tactic to convince a victim to pay the ransom. Like many other forms of ransomware, CryLocker ransoms are usually paid in Bitcoin.

Jigsaw

Originally known as "BitcoinBlackmailer", Jigsaw ransomware is known for encrypting files and gradually deleting them until a ransom is paid. Originally introduced in April of 2016, Jigsaw remains a threat today. How does it work? It was designed to spread through malicious email attachments; more reason to double-check incoming attachments and only open those from senders you trust. Upon encrypting all user files and master boot records, a pop-up appears on the victim's screen, featuring an image of Billy the Puppet, a character used in the Saw franchise of films. Accompanied by Billy's image is a demand for a ransom payment. Immediately after the demand is made, the victim's clock begins to tick. If the ransom is not paid in one hour, one file is deleted. For each hour thereafter, files are deleted in greater amounts. This pattern continues until the computer is fully wiped at the 72-hour mark.
Victims are urged to avoid attempting to terminate the process or reboot their systems, as this will result in 1,000 files being deleted at once. If losing files in this manner wasn't bad enough, more recent versions of Jigsaw ransomware threaten to dox a victim by posting personal information found within the files online.

The bottom line: be wary of ransomware

Whether at work or at home, ransomware is one of the most significant threats facing the safety of our data in a connected world. As this post has shown, ransomware can take many different routes of action, although blanket encryption of critical files is standard procedure. Although ransomware can sometimes unknowingly infect a system, users should always maintain hyper-vigilance by avoiding downloads of free programs online and of attachments from senders who are not trusted contacts.
Introducing Trusted LAN

A trusted LAN is a network segment that's fully under Firewalla's protection, on which you have full visibility and control of everything flowing through that network.
- You know what devices are on that LAN.
- You have full control of which devices can get on the LAN.
- Besides your devices, no one else can talk to devices on that LAN.
- You also have full control of what's coming in and going out of your network.
- Your network, your rules.
Your home network is a trusted LAN. What if you are traveling with the family? Or working remotely with a team?

Where do you need a Trusted LAN?

You need a trusted LAN when you are:
- At home or work
- Using public wifi ... like Starbucks
- Traveling and using a hotel or Airbnb Wi-Fi
- At a co-working space, where your network is mixed with other companies

Why do you need a Trusted LAN everywhere?
- Public & Shared Networks: Public Wi-Fi is often impossible to resist. Here, your network traffic is often mixed with others, and there is no way to guarantee this network is secure. The same applies when you are on business travel or using an Airbnb: you are on a private but shared network that is out of your control.
- Your devices need to securely talk to each other, just like you are at home. We turn on file sharing and remote access at home or work and forget to shut these down when we go to a public network. This leaves those files open for anyone who wants to take a look.

Even when you are traveling outside of your home network, here are some things you can do:
- Security protection, adblocker, and other features can be set up once and used no matter what network you are on.
- When traveling with family, kids' devices can be blocked from sites and apps you don't want them visiting.
- Block the wifi host's ability to see where you are going on the internet by securing your DNS over HTTPS (DoH).
- Share a single trusted connection across all the devices you trust.
- Connect to assets at another location via VPN (e.g. home, an office), such as a music server or sensitive business data.
- Create a private network within a co-working office to secure intellectual property amongst employees at the same site, or connect securely to employees at locations around the world. If you have a dedicated office, the Firewalla Gold is also a good choice.

How to create a Trusted LAN?
- Run a Firewalla in router mode. This will allow the Ingress Firewall to block all traffic from the foreign network.
- A Firewalla Purple is simpler since it is smaller and can be carried around if you are traveling. It allows you to create a trusted LAN anywhere that has a Wi-Fi or ethernet connection. And if there is no Wi-Fi or ethernet, use your phone's personal hotspot to share a connection.
- A Firewalla Gold will also work if you have a remote site or a shared workspace.
- Create a network segment using Wi-Fi or another ethernet port.
- Create rules on the segment to further protect your devices.
- If you want to connect back to home or work, use the VPN Client to create a tunnel to the home network (example: site-to-site VPN).
- Use the policy-based routing feature to selectively route traffic to a local ISP, a third-party VPN, or another Firewalla box.

How is the "Trusted LAN" different from just using the Firewalla VPN Server when outside of the home network?

The Firewalla VPN server is a way to send all traffic home; it does not have the concept of a LAN. This works nicely if you have one or two devices that do not interact with each other. If you want devices to talk to each other, such as for file sharing, you will need the trusted LAN.
What is an SPF record and how do I use one?

SPF stands for Sender Policy Framework. An SPF record enables a receiving mail server to look up and determine whether the sending server is authorized to send emails from the domain in which the email originated. This helps prevent email spoofing, where the sender forges the "FROM" address of the email. The record is added to the DNS zone of the originating domain, so only the authorized domain administrator can add and edit the record, which is what makes the mechanism trustworthy.

The format of the record is as follows:

v=spf1 a mx ptr ip4:192.0.2.10 ~all

This example (using a documentation IP address) shows that any host in the domain's A or MX records can send email, as can the address 192.0.2.10 and any host whose PTR record resolves within the domain. Microsoft have a great wizard to set these records up:
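Before reaching for a wizard, it helps to see that the record itself is just a version tag followed by mechanisms. A minimal Python sketch, splitting the example record above into its parts (this is tokenizing only, not a full SPF evaluation as a mail server would perform):

```python
def parse_spf(record: str):
    """Break an SPF TXT record into its mechanisms.
    Raises ValueError if the record does not start with the v=spf1 tag."""
    parts = record.split()
    if not parts or parts[0] != "v=spf1":
        raise ValueError("not an SPF record")
    return parts[1:]

mechanisms = parse_spf("v=spf1 a mx ptr ip4:192.0.2.10 ~all")

assert "a" in mechanisms and "mx" in mechanisms
assert any(m.startswith("ip4:") for m in mechanisms)
assert mechanisms[-1] == "~all"   # softfail for all senders not matched above
```

The trailing "~all" qualifier is what tells receivers how to treat mail from any server not covered by the earlier mechanisms.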
FCC Adopts New Rules on RF Safety

The FCC recently voted to change its rules on how to best protect the general public and trained workers from the health risks associated with radiofrequency (RF) emissions. The FCC decided not to change its long-standing RF exposure limits, finding that there was an insufficient scientific record to change the current levels of protection. However, the FCC adopted new rules for RF exposure in three areas: (1) broad exemptions from RF evaluation for certain types of radio transmissions; (2) calculations that should be used if a general exemption does not apply; and (3) mitigation procedures that should be used to ensure that the general public and RF-trained workers are not exposed to RF levels in excess of the FCC’s exposure limits. The FCC is also inviting comment on:

I. New RF Safety Rules

A. Exemptions from RF Evaluation

To simplify its current exemptions from RF evaluations, the FCC has adopted new exemptions that will be applied to all types of transmitting devices. The new exemptions are broadly grouped into three classes:
- 1. Extremely low-power devices that transmit at no more than 1 milliwatt will be exempt.
- 2. Low-power devices with antennas that will be used between 0.5 and 40 cm of the body will be evaluated under formulas that take into account frequency, power and separation distance between the antenna and the body.
- 3. All other transmitters will be reviewed against a table of Maximum Permissible Exposures (MPEs) for any frequency between 300 kHz and 100 GHz.
The FCC will issue further technical guidance on how to evaluate non-exempt devices.

B. Mitigation Measures to Ensure Compliance

The basic concept of mitigation is to ensure that individuals are kept out of areas with RF levels that exceed the FCC’s exposure limits unless they are specially trained in RF safety.
The FCC has adopted more precise language on “transient exposure” so that an untrained person may transit through an area of higher RF radiation if the exposure does not exceed the general population limit when exposure levels are averaged over a time interval of 30 minutes. The area must have access controls and appropriate signage, and the individual must be escorted by RF-trained personnel to ensure compliance. Employees and contractors (such as utility workers, roofers, HVAC technicians, etc.) who must perform any task or stop in an area that exceeds the general population limits may not be considered “transient” but must be trained in RF safety. The FCC will require licensees to use warning signs when a location will have RF levels above the limit for the general public. The FCC will also allow use of indicators (chains, railings, paint and diagrams) if other positive access controls (e.g., a locked door to the rooftop) are in place to restrict access to persons who are RF trained. The FCC adopted four categories for RF safety actions: - Category 1: Locations where RF energy is not in excess of the general population limit. Signage is not required but if used must show a green “INFORMATION” heading. - Category 2: Locations where the continuous exposure limit for the general population is exceeded but not for RF-trained personnel. These locations must have positive access controls and signs with the word “NOTICE” in blue color. - Category 3: Locations where the exposure limit for occupational (RF-trained) personnel would be exceeded by no more than a factor of ten. The FCC will require signs with the word “CAUTION” in yellow color, and controls or indicators (chains, railings, contrasting paint, diagrams, etc.) in addition to positive access control. - Category 4: Locations where the exposure limit for RF-trained personnel would be exceeded by more than a factor of ten or where there is a possibility for serious contact injury. 
“WARNING” signs in orange color are required, and “DANGER” signs in red color are required where immediate and serious injury will occur on contact, in addition to positive access control.

Signs must be readily viewable at a minimum distance of five feet from the boundary at which the limits are exceeded. Signs must include a point of contact, the RF energy advisory symbol, an explanation of the RF source, and instructions on how to comply with the exposure limits. Persons who will be working in controlled (high-RF) areas must be “fully aware” of the potential for exposure and must be able to control their exposure. Workers must receive training on how to control their exposure and must be allowed to reduce or avoid exposure by wearing personal protective equipment or reducing the time spent in high-RF areas. Posting information on signage at an access door is not sufficient to constitute “training.” If a new licensee at a transmit site causes the site to go out of compliance, the new entrant is responsible for evaluating the site to ensure compliance, but all licensees share responsibility for any modification or remediation necessary to bring the site into compliance. To avoid public confusion, the FCC will preempt any state or local requirements regarding RF signage or barriers that go beyond what is required by the FCC.

C. Transition Periods

The FCC will allow two years from the effective date of the new rules for licensees to determine if new evaluations are required, to perform them if necessary, and to comply with the new mitigation measures. Nevertheless, the FCC encourages licensees to utilize the new signage and other guidance as soon as possible.

II. Notice of Proposed Rulemaking

The FCC also opened a new rulemaking proceeding to consider whether it should expand its rules to new technologies; for example, (1) whether to expand the range of frequencies to which the RF exposure limits will apply (currently limited to 100 kHz to 100 GHz); (2) the conditions and methods by which exposure limits are averaged, in both time and area, during evaluations; and (3) how to address issues related to wireless power transfer (WPT) devices that emit RF energy for the transfer of power, such as to charge or power devices within a room, or higher-power devices that could charge batteries in a vehicle. If your company is interested in learning more about the new RF rules or how you can ensure compliance, please contact Jeff Sheldon.
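For teams building an internal compliance checklist, the four-category signage scheme described above reduces to a small lookup table. The sketch below paraphrases the categories from this summary and is not official FCC guidance:

```python
# Sign heading and color for each RF mitigation category, as summarized above.
RF_SIGNAGE = {
    1: ("INFORMATION", "green"),   # below the general population limit
    2: ("NOTICE", "blue"),         # above general limit, below occupational limit
    3: ("CAUTION", "yellow"),      # occupational limit exceeded by up to 10x
    4: ("WARNING", "orange"),      # exceeded by more than 10x
    # Category 4 sites with contact-injury risk additionally need red "DANGER" signs.
}

def required_sign(category: int) -> str:
    """Return the sign heading and color for a given category."""
    word, color = RF_SIGNAGE[category]
    return f"{word} ({color})"

assert required_sign(1) == "INFORMATION (green)"
assert required_sign(2) == "NOTICE (blue)"
```

Categories 2 through 4 also carry access-control obligations (locks, chains, railings, escorts) that a real checklist would track alongside the signage.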
People build their own servers at home for a number of reasons. Some use them as centralized media hubs, others for file sharing. This article explains how to create your own server and the advantages that come with it. Before you go about finding parts for your server, you will need to decide on its purpose. This will help you to determine how much power you are going to need to run it, what operating system you will need to use and how much RAM you require. Many people build their own servers using old computer boxes. It is best to run your server from a separate computer to the one you use for everything else. If you do not have a spare computer you will need to assemble one using a basic computer box, motherboard and chip, hard drive and as much RAM as possible. You will also need to be able to connect it to a reliable broadband connection. A keyboard, mouse and monitor are necessary during the set-up process. There are a number of different operating systems you can choose from, depending on your preference and familiarity. Windows Home Server is a possible choice, but you do have to pay for it. If you are looking for a free operating system, you can use Ubuntu Linux, which can be used in conjunction with Samba for file sharing. Alternatively, if you are familiar with UNIX you may want to consider using FreeBSD. In order to set up your own server at home you will first need to assemble your computer box and install your operating system. Once your operating system has loaded it is essential that you strip it of any unnecessary programmes that will simply be taking up valuable space. On Linux these may be programme packages like OpenOffice, GIMP and Thunderbird. You will also want to access your computer’s settings and disable its screen saver. After you have removed any unwanted programme packages from your computer, you will need to run updates on the programmes that you still want to use.
This will ensure you are using the upgraded versions of all programmes, which will help your system to run more efficiently. Once complete, you will need to set up your file sharing programme. The way you do this will depend on the programme you have chosen to use. A popular choice is Samba, as it allows users to file share amongst all workstations regardless of their operating systems. Once you have installed your file sharing programme you will need to add FTP capability to your server and set up shell access. You will need to choose an FTP app to download and install. Servers with FTP are particularly useful as they enable you to save and retrieve files from your computer, no matter where you are in the world. After it has installed you will need to set up an administrator account for yourself, ensuring that you grant rights to all necessary drives. Finally you will need to connect to your server using PuTTY, or another similar programme, and type in the IP address of your new dedicated server. You can then unplug your keyboard, monitor and mouse, leaving your server connected to the internet for remote use. Many people choose to run their own servers as it gives them more control over their data. If you use an online storage company to host your files, you are essentially giving a third party control over your data. Owning your own server also means that your files are all in one place, rather than spread over a number of servers. If you set up sufficient security you can protect your files and prevent anyone else from accessing your server. If however you want to share your server with family and friends, you can grant them access by giving them a dedicated URL, username and password. Another benefit of building your own server is that you can use it for whatever you like. Common uses include email servers, proxies, web hosting and gaming, however the possibilities are endless! Owning a server can also reduce costs as you will not need to pay a hosting fee.
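If you go the Samba route mentioned above, a share is defined in the Samba configuration file (typically /etc/samba/smb.conf). A minimal sketch, in which the share name, path and user are placeholders to adapt to your own setup:

```
; Minimal smb.conf sketch for a home file-sharing server.
; "shared", /srv/samba/shared and "admin" are placeholder examples.
[global]
   workgroup = WORKGROUP
   server string = Home Server
   security = user

[shared]
   path = /srv/samba/shared
   browseable = yes
   read only = no
   valid users = admin
```

After editing the file, restart the Samba service so the new share takes effect; the share will then appear to Windows, macOS and Linux clients alike.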
Author Bio: Tom Reynolds works as a server backup specialist at Cheeky Munkey, an IT support firm based in London. He advises clients and individuals on building or renting their own servers.
Federal agencies tasked with gathering and storing climate-related data are turning to cloud computing and artificial intelligence to store, analyze and preserve vast amounts of information about the potential effects of global warming, The Wall Street Journal reported Sunday.

NASA and the National Oceanic and Atmospheric Administration (NOAA) are working with technology companies Google, Amazon Web Services and Microsoft to move their climate databases into the cloud to accommodate the rapid growth of information used in climate studies. Robert Lee Hotz wrote in WSJ's The Future of Everything series that the total volume of the U.S. environmental data archives is expected to grow from about 83 petabytes to more than 650 petabytes over the next decade. The growth will be driven by projects aimed at gathering data about natural phenomena on Earth and in its atmosphere, including NASA's NISAR radar imaging satellite and the space agency's Surface Water and Ocean Topography mission.

"This is a new era for Earth observation missions, and the huge amount of data they will generate requires a new era of data handling," said Kevin Murphy, NASA's chief science data officer. Nancy Ritchie, archive branch chief at the National Centers for Environmental Information in Asheville, North Carolina, said NOAA expects to move all of its Earth science archives into the cloud by 2027.

In addition to the cloud, scientists are looking at the potential application of artificial intelligence in attribution science. "There is a lot of data sitting around and AI, I argue, is the cheapest way to unlock the insights from the data," said Claire Monteleoni, a computer scientist at the University of Colorado specializing in climate data systems.
A new study details the extent and seriousness of potentially destructive spyware on the Internet, finding that it is still prevalent but has declined significantly. University of Washington computer scientists sampled more than 20 million Internet sites looking for programs that can covertly enter computers. While most spyware is merely a nuisance, generating pop-ups or loading unwanted programs, it can also perform malicious tasks such as gathering personal data or using your modem to dial costly toll numbers.

The study examined popular categories of Web sites, including games, news and celebrity sites. Among the findings:
- More than 5 percent of executable files contain piggybacked spyware;
- One in 62 Internet domains performs "drive-by download attacks" to force spyware on users who simply visit the site;
- Game and celebrity Web sites appeared to pose the greatest risk for piggybacked spyware, while sites that offer pirated software topped the list for drive-by attacks.
Artificial Intelligence is an astonishing technology if you use it correctly. How fascinating it would be to build a machine that behaves like a human being to a great extent. Mastering machine learning tools will let you play with data, train your models, discover new methods, and create your own algorithms.

Artificial Intelligence comes with an extensive collection of AI tools, platforms, and software, and the technology is evolving continuously. Out of the pile of AI learning tools available, you need to choose some to gain expertise in. This article lists the top 10 AI tools that are widely used by experts.

1. Scikit-learn
Scikit-learn was initially developed by David Cournapeau in 2007 as a Google Summer of Code project. Later Matthieu Brucher joined the project and began to use it as a part of his thesis work. In 2010 INRIA got involved, and the first public release (v0.1 beta) was published in late January 2010. Scikit-learn is an open-source AI package that provides a unified platform for regression, classification, and clustering; dimensionality reduction and pre-processing are also done with it. It is built on top of three main libraries: NumPy, SciPy, and Matplotlib. This tool helps in training and testing your models as well.

2. KNIME
KNIME is an open-source, GUI-based AI tool. The first version of KNIME was introduced in 2006; several pharmaceutical companies started using it, and many life-science software vendors began integrating their tools into KNIME. It doesn't require prior coding knowledge: you can perform operations using the facilities provided by KNIME. KNIME is usually used for data-related operations, including data mining and manipulation. It processes data by creating and executing workflows: its repositories consist of many nodes, which are dragged into the KNIME portal, connected into a workflow, and executed.
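The node-and-edge workflow idea that KNIME is built around can be illustrated with a toy sketch in plain Python. This is an illustration of the concept only, not KNIME's actual API: each node names a function and the nodes it consumes, and the workflow is evaluated in dependency order.

```python
# Toy workflow engine: nodes are (function, input-node-names) pairs,
# evaluated lazily in dependency order with results cached.

def run_workflow(nodes, target):
    """nodes maps name -> (function, [input node names]); returns target's value."""
    cache = {}

    def evaluate(name):
        if name not in cache:
            fn, inputs = nodes[name]
            cache[name] = fn(*(evaluate(i) for i in inputs))
        return cache[name]

    return evaluate(target)

# A three-node pipeline: read raw rows, filter them, then aggregate.
workflow = {
    "read":  (lambda: [3.0, 1.0, 4.0, 1.0, 5.0], []),
    "clean": (lambda rows: [r for r in rows if r > 1.0], ["read"]),
    "mean":  (lambda rows: sum(rows) / len(rows), ["clean"]),
}

print(run_workflow(workflow, "mean"))  # prints 4.0
```

TensorFlow's data flow graphs, described next, generalize this same idea: operations are nodes, and edges carry the data between them.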
3. TensorFlow
TensorFlow is an open-source software library for machine learning developed by the Google Brain Team for various sorts of perceptual and language understanding tasks, and for conducting sophisticated research on machine learning and deep neural networks. It is Google Brain's second-generation machine learning system and can run on multiple CPUs and GPUs. TensorFlow is deployed in various Google products, including speech recognition, Gmail, Google Photos, and even Search. TensorFlow performs numerical computations using data flow graphs, which express mathematical computations as a directed graph of nodes and edges. Nodes implement mathematical operations and can also represent endpoints to feed in data, push out results, or read/write persistent variables. Edges describe the input/output relationships between nodes and carry dynamically-sized multi-dimensional data arrays, or tensors.

4. Waikato Environment for Knowledge Analysis (Weka)
Weka was initially developed at the University of Waikato, New Zealand. It is free software licensed under the GNU General Public License, and is companion software to the book "Data Mining: Practical Machine Learning Tools and Techniques". Weka provides tools for data pre-processing, implementations of several machine learning algorithms, and visualization tools, so that you can develop machine learning techniques and apply them to real-world data mining problems.

5. PyTorch
PyTorch is an open-source machine learning library based on the Torch library, used for applications like computer vision and natural language processing, and primarily developed by Facebook's AI Research lab (FAIR). It is a deep learning framework that makes very strong use of the GPU, which makes it fast and flexible. It is used in important aspects of ML such as tensor calculations and building deep neural networks. PyTorch is completely Python-based and is a great substitute for NumPy.
It is still a young player in the industry and has a great future ahead of it.

6. RapidMiner
RapidMiner is a data science software platform developed by the RapidMiner company that provides an integrated environment for data preparation, machine learning, text mining, deep learning, and predictive analytics. It is used for business and commercial applications as well as for research, education, training, rapid prototyping, and application development, and it supports all steps of the machine learning process, including data preparation, result visualization, model validation, and optimization. It has an amazing interface that is quite helpful for non-programmers, and the tool runs across operating systems. It is usually used in corporates and industries for quick testing of data and models: you put in your data, then simply drag and drop items in the interface to test your model. That is the reason many non-programmers use it.

7. Google Cloud AutoML
In April 2008, Google announced App Engine, a platform for developing and hosting web applications in Google-managed data centers; it was the company's first cloud computing service. The service became available to the public in November 2011. Since the announcement of App Engine, Google has added multiple cloud services to the platform. Google Cloud Platform provides infrastructure as a service, platform as a service, and serverless computing environments. The basic idea of Cloud AutoML is to make AI accessible to everyone, including businesses: it provides pre-trained models for creating various services, covering everything from speech to text recognition. Google Cloud AutoML is now starting to become popular among companies. It is very difficult to spread AI into every field, because not every sector has people skilled in AI/ML.
So Google created the Cloud AutoML platform, which provides pre-trained models. This is a great step, because it helps users from all backgrounds create and test models on their own data.

8. Azure Machine Learning Studio
Azure was announced in October 2008 under the codename "Project Red Dog" and released on February 1, 2010, as "Windows Azure", before being renamed "Microsoft Azure" on March 25, 2014. Azure Machine Learning Studio provides a drag-and-drop option, a very easy way to form connections between datasets and modules. Like Google, Azure aims to provide AI facilities to everyone, and it works on both CPU and GPU. This machine learning tool is not as popular as Google's offerings, but it is still useful.

9. Accord.NET
Accord.NET is a computational framework for ML that consists mainly of audio and image packages. These packages help in training models and creating applications such as computer vision and audition. Since it is .NET-based, the core of the library is the C# language. It helps with classification, regression, and more, and its special libraries for audio and images are very helpful for testing and manipulating audio files.

10. Google Colab
Colab, or a Colab notebook, is an environment provided by Google based on Jupyter Notebook. It is one of the most efficient platforms for ML on the market; the only caveat is that everything in Colab is cloud-based. You can work with tools like TensorFlow, PyTorch, and Keras on Colab, and it can improve your Python skills. You can also use the free GPU Colab provides for extra processing, with Google Drive as the storage method.

These were some of the most popular and widely used machine learning tools, and they show how advanced machine learning has become. The tools are built on and run with different programming languages: some run on Python, some on C++, and some on Java.
Collecting data and performing analysis doesn't mean much if you can't find a way to effectively convey its meaning to an audience. Oftentimes, audience members aren't well-positioned to understand analysis or to think critically about its implications. To engage with an audience, you need to embrace storytelling. Let's take a look at what that means when talking about storytelling with data.

How to Build a Story Arc
One of the simplest ways to approach the problem is to treat your story as a three-act play. That means your story will have:
- An introduction
- A middle
- A conclusion
Each section of the story needs to be delineated so the audience understands the structure and the promise of a story that comes with it.

What Goes into an Introduction
In most cases, data is hidden before being subjected to analysis. That means you have to set the scene, giving the audience a sense of why the data is hidden and where it came from. You don't necessarily want to jump right to conclusions about the data or even to basic assumptions. Instead, the data should be depicted as something of a mysterious character being introduced.

If the storytelling medium is entirely visual, then you need to find a way to present the data. The Minard Map is a classic example of how to do this. It uses data to tell the story of the slow destruction of Napoleon's army during the invasion of Russia. Minard employs a handful of vital statistics to explain what's going to happen as the story unfolds. These include:
- The sizes of the competing armies
- The geographic proximity of the two forces
- Air temperature
The audience can familiarize themselves with the data quickly and easily understand what this story is going to entail just by reading the vital statistics. In this particular case, the story is going to be about man versus the elements.

Unfolding the Middle of the Story
Following the presentation, the middle of the story should guide the audience toward the conclusion.
In the case of the Minard Map, the middle of the story is about a slowly shrinking French army and a slowly growing Russian army that tracks the French. Military engagements occur, and the weather starts to turn. Geographic elements are worked into the graph, too, as the armies cross rivers and march into towns.

Providing the Conclusion
A well-executed data visualization should let the audience get to the conclusion without much prodding. The Minard Map makes its point without beating the audience over the head. By the third act, it's clear that the conditions have turned and the Russians are now close to matching the French in manpower. As the two armies reach Moscow, it's clear that what started as a triumphant march has ended as an immense loss.

In its best form, data storytelling shouldn't feel like a sea of numbers at all. People have seen numerous charts and graphs in their lifetimes, even over the regular course of a single day of business, and that means good-enough visualizations that are focused on presenting numbers tend to become white noise. Good data storytellers make history. Florence Nightingale's analysis of casualties during the Crimean War permanently changed the way all forms of medical treatment are provided. Her work is still required reading at many nursing and medical schools more than 150 years later. That's the goal: to engage the audience so thoroughly that the story and the data long outlast your initial presentation.

Accomplishing that goal requires planning. You can't just fire up your best data visualization software, import some info from Excel and let the bars and bubbles fly. That's easy to do because many software packages can deliver solid-looking results in a matter of minutes. Top-quality data storytelling occurs when the audience is given just enough information to set and understand the scene. Someone scanning the visualizations will then follow the information as it unfolds over time.
As the audience approaches the conclusion, they should be left with a strong impression regarding what the data says and what they should learn from it.
The introduction of analog cellular systems back in 1980 signaled a revolution in mobile telephony by enabling the wireless exchange of voice and data signals. Communication networks have evolved rapidly ever since, based on much more sophisticated and smarter technologies that have completely changed the way we live and work. Every decade has been characterized by a significant advancement in mobile technologies, associated with a new generation of wireless and mobile communications.

In this context, we are currently experiencing the benefits of the 4th Generation (4G) of mobile communications, characterized by high-speed communications, international roaming capabilities, and the integration of voice, video and data communications. 4G technologies underpin many of the state-of-the-art networked applications that we all use every day, such as the mainstream social networking applications and the rich set of mobile apps that include multimedia features and functionalities.

Despite the capacity and speed of 4G technology, there are still cases where the quality of service is far from perfect and reliable. For instance, people still experience poor service when trying to use their smartphones during events in densely populated venues such as football stadiums and concert halls. At the same time, a number of emerging bandwidth-hungry, low-latency applications will soon be driving 4G technologies to their limits. As a prominent example, self-driving cars are expected to access and process large volumes of data per second in order to be able to "see the road", understand the driving context and drive safely. Indeed, autonomous vehicles are expected to take decisions nearly ten times per second, based on the collection of information from hundreds of sensors and other vehicles.
For these reasons, researchers, telecommunication equipment vendors and service providers are already working intensively on the fifth generation (5G) of mobile communications, which is destined to overcome the limitations of existing technologies for many state-of-the-art application scenarios, but most importantly to meet the quality-of-service needs of future applications such as self-driving cars.

5G will deliver increased bit rates, along with large data volumes per unit area, based on much higher spectral efficiency than 4G. This exceptional spectral efficiency will be empowered by 5G's novel cognitive radio technology, which will facilitate different radio technologies sharing the same spectrum. At the same time, 5G is designed to enable more devices to connect concurrently and instantaneously without any essential loss in the end-user's experience. Furthermore, it will boost the power efficiency of mobile communications as a means of lowering battery consumption, based on up to a 90% reduction in network energy usage. 5G also targets worldwide coverage, i.e. high-quality connectivity regardless of the end user's geographic region; with 5G, the entire world is expected to be a Wi-Fi zone. Moreover, 5G is expected to lower the costs of infrastructure development while increasing the reliability of networked applications.

5G evangelists quantify the advantages listed above in order to illustrate 5G's potential. In terms of speed and bandwidth, 5G is expected to deliver speeds from 1 to 10 Gbps. As far as latency is concerned, an end-to-end round-trip time of less than one (1) millisecond is foreseen. These figures justify 5G's ability to support a large number of real-time and data-intensive applications of the fourth industrial revolution. Beyond specific applications, the Internet-of-Things (IoT) revolution is considered one of the main drivers behind 5G.
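To put the speed figures quoted above in perspective, a quick back-of-the-envelope calculation shows how long a 1 GB transfer takes at each rate. The 4G baseline of 50 Mbps used here is an assumed typical real-world figure, not a number from this article.

```python
# Time to transfer a 1 GB file at the bit rates quoted for 4G and 5G.
FILE_BITS = 1 * 8 * 10**9  # 1 gigabyte expressed in bits

def transfer_seconds(bits: float, rate_bps: float) -> float:
    """Ideal transfer time, ignoring protocol overhead and latency."""
    return bits / rate_bps

rates = [
    ("4G, assumed typical (~50 Mbps)", 50e6),
    ("5G low end (1 Gbps)", 1e9),
    ("5G high end (10 Gbps)", 10e9),
]
for label, rate in rates:
    print(f"{label}: {transfer_seconds(FILE_BITS, rate):.1f} s")
# 160.0 s, 8.0 s and 0.8 s respectively
```

The same file that takes well over two minutes on a typical 4G link moves in under a second at 5G's upper bound, which is why data-hungry applications like autonomous vehicles are cited as 5G drivers.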
In recent blogs, we have extensively discussed the rise and growth of the IoT paradigm, along with relevant applications in areas such as healthcare, trade, industry and smart cities, and we have illustrated its momentum based on the exponential increase in the number of internet-connected devices. In a world of billions of connected devices, there is a need for new connectivity paradigms that ensure quality of service in areas saturated with sensors and IoT devices, where billions of data streams will be produced and processed every few seconds. 5G provides this novel connectivity paradigm.

Many of 5G's features are designed to support the connectivity and efficient operation of IoT devices and applications. As a prominent example, its emphasis on power-efficient communications and longer battery life addresses one of the proclaimed needs of IoT and mobile computing users today. Likewise, its exceptionally low latency is a key enabler for the vast number of IoT applications that rely on (near) real-time analytics, such as healthcare analytics at the point of care, analytics for predictive maintenance and quality management in factories, real-time detection of water leaks in smart cities, and more. Note also that 5G is seen as a global framework that can unify diverse IoT technologies and protocols (e.g., different low-power access protocols in smart homes or smart buildings). This unification will be particularly important in contexts where interoperability across different IoT connectivity technologies is required; smart cities and public infrastructure projects are probably the most prominent examples.

One may argue that 5G is just about higher bandwidth and lower latency than 4G. While this is true, it is also an oversimplification of 5G technologies.
Like the previous generations of communications, 5G is not a mere capacity improvement; it will also include disruptive features that could change the communications landscape, much as 3G did with data and voice integration. In the case of 5G, disruption will come from the delivery of exceptional user experience, in particular in cases where existing technologies fail. Densely populated urban areas and events are only one example. 5G will also facilitate seamless roaming and uninterrupted delivery of multimedia applications for users moving at very high speeds (e.g., on high-speed trains or in fast cars), as well as delivery of applications at very high altitudes.

5G is still under development and not yet commercially available. Most of the large telcos worldwide are planning or already executing large-scale pilots in order to validate the technology and gain experience for further development and deployment. Several thorny issues are still to be addressed, including delivering the promised spectral efficiency and implementing built-in security and data protection functionalities. It is estimated that the first enterprise-scale services will become commercially available after 2020. Nevertheless, many vendors and solution integrators are already designing and developing products that will take advantage of 5G's characteristics and capabilities. In a sense, 5G is already having a global impact and is all set to paint the future of ICT infrastructures and applications.
Insecure deserialization is a well-known yet not commonly occurring vulnerability in which an attacker inserts malicious objects into a web application. This allows them to inflict denial-of-service (DoS) attacks, remote code execution, SQL injection, path traversal, and authentication bypasses. Deserialization attacks are a major threat and can seriously harm businesses and customers. Potential vulnerabilities have been identified in the most popular programming languages, including Java, Python, .NET, PHP, Node.js, and Ruby.

What is Insecure Deserialization?
Insecure deserialization has been ranked #8 on the OWASP Top Ten list of the most critical security risks to web applications since 2019, alongside risks such as injection vulnerabilities. Addressing it is recognized as one of the first steps that software development organizations need to take to ensure more secure coding.

The Basics About Serialization and Deserialization
Serialization and deserialization are common practices, used regularly in web applications; many programming languages even have native tools for serialization. To create the best strategies for protecting your applications from insecure deserialization, it's important to first understand these two concepts.

Serialization means converting an object into a format suitable for saving it to a file or database, or for sending it via streams or networks. This basic function regularly needs to be performed for storing and transferring data. Programming languages serialize objects differently, using either binary or string formats; the data is shaped in a certain way, preprocessed into a byte stream. Common serialization formats include XML and JSON.

Deserialization is just the opposite of serialization: it converts the serialized data from files, streams, or networks back into objects.
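The round trip can be shown concretely with Python's json module, one of the string formats mentioned above. The user record here is an illustrative placeholder:

```python
import json

# Serialization: convert an in-memory object to a string format (JSON here),
# ready to be written to a file or sent over a network.
user = {"id": 42, "name": "alice", "roles": ["reader"]}
payload = json.dumps(user)

# Deserialization: reconstruct an equivalent object from the serialized data.
restored = json.loads(payload)
assert restored == user  # same state as before serialization
```

With a data-only format like JSON, deserialization can only ever produce primitive values and containers; the danger discussed next arises with formats that can reconstruct arbitrary language objects.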
Deserialization essentially reconstructs the object in the very same state it was in before being serialized. This conversion is a normal process when done securely. What must be avoided is insecure deserialization, in which malicious input comes from an untrusted user. This often happens when an attacker exploits the customizable deserialization processes many programming languages offer, controlling them with untrusted input. Unfortunately, the languages presume the data is safe and treat all serialized data structures as validated, thus allowing the inclusion of malicious objects.

Examples of Insecure Deserialization Attacks
Insecure deserialization attacks are often seen as difficult to execute and thus deemed uncommon, affecting as few as 1% of applications. Yet, given the many attacks an application can be subjected to, this type of attack shouldn't be underestimated. The most typical example of an insecure deserialization vulnerability is when an attacker loads untrusted code into a serialized object and then forwards it to the web application. If there are no checks, the application will deserialize the malicious input, allowing it to access even more of the application's parts. That makes possible additional attacks that may eventually cause serious privacy vulnerabilities for the application's user base. Insecure deserialization is thus sometimes referred to as an "object injection" vulnerability.

The OWASP Insecure Deserialization Cheat Sheet contains some common attack examples:
- A set of Spring Boot microservices is called in a React application. To make their code immutable, the programmers serialized user state and passed it back and forth with each request. An attacker abuses the "R00" Java object signature and performs remote code execution on the application server using the Java Serial Killer tool.
- PHP object serialization is used by a PHP forum to save a "super" cookie loaded with data, containing the user's ID, role, password hash, and other state.
An attacker modifies the serialized object in this cookie to give themselves admin privileges and tamper with the data.

How to Prevent Insecure Deserialization
To protect your web application from insecure deserialization, it's crucial never to pass a serialized object that can be manipulated with untrusted user input to the deserialize function. If you do, an untrusted user can manipulate the object and send it directly to, for example, PHP's deserialize function. A serialized PHP object is just a string, such as O:8:"stdClass":1:{s:4:"role";s:5:"admin";}, which makes it trivial for an attacker to edit.

The OWASP notes that the best way to prevent insecure deserialization attacks is never to accept serialized objects from untrusted users. Alternatively, you can use serialization tools that allow only primitive data types. In case you have to accept serialized objects, here are some tips for stopping insecure deserialization:
- Introduce digital signatures and other integrity checks to stop malicious object creation or other data tampering
- Run deserialization code in low-privilege environments
- Keep a log of deserialization exceptions and failures
- Enforce strict constraints on the deserialization process before object creation
- Limit and monitor all incoming and outgoing network activity from deserialization containers and servers
- Keep tabs on deserialization activity to detect constant deserialization by a user
- Use language-agnostic deserialization formats like JSON, XML, and YAML

How to Test for Insecure Deserialization
You can check your application for insecure deserialization manually, both at the source-code level and in its running state. You can also use a static code analysis tool; however, that requires access to your code and only sees a "theoretical", non-executed view of your application.
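The first mitigation listed above, digital signatures and integrity checks, can be sketched with the Python standard library alone. The secret key and cookie layout here are illustrative assumptions, not a prescribed scheme: the idea is simply that serialized data is signed server-side, so a client who tampers with it cannot produce a valid signature.

```python
import hashlib
import hmac
import json

SECRET = b"server-side-secret"  # placeholder key; kept off the client in practice

def sign(obj) -> str:
    """Serialize obj and prepend an HMAC tag over the serialized payload."""
    payload = json.dumps(obj, sort_keys=True)
    tag = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return f"{tag}.{payload}"

def verify_and_load(token: str):
    """Recompute the tag and deserialize only if it matches."""
    tag, payload = token.split(".", 1)
    expected = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("serialized data was tampered with")
    return json.loads(payload)

cookie = sign({"user_id": 7, "role": "user"})
assert verify_and_load(cookie) == {"user_id": 7, "role": "user"}

# An attacker who edits the role without knowing the key is rejected:
forged = cookie.replace('"role": "user"', '"role": "admin"')
try:
    verify_and_load(forged)
except ValueError:
    print("forgery detected")
```

Note that this only guarantees integrity; it does not make a dangerous deserializer safe, which is why OWASP's primary advice remains to avoid deserializing untrusted input at all.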
To test your running application, the best approach is to use a dynamic application security testing tool, which can automatically scan your web application or API. To check whether your web application is vulnerable to insecure deserialization, you can run a free scan through our Vulnerability Testing Software.

The content of this article is Creative Commons Attribution-ShareAlike v4.0.

When was the Insecure Deserialization vulnerability discovered?
The insecure deserialization vulnerability was first reported on January 12th, 2019, by Tencent Security Xuanwu Lab researchers. This vulnerability allows attackers to execute arbitrary code on the target server when they can inject malicious data into the JSON string being passed to the deserialize method.

Which versions of PHP could be affected?
This vulnerability affects all versions of PHP 5.X and above. It has been patched for version 7.2.3 and later.
<urn:uuid:b1784ea5-483d-41ce-a298-074355e66e14>
CC-MAIN-2022-40
https://crashtest-security.com/insecure-deserialization/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030336674.94/warc/CC-MAIN-20221001132802-20221001162802-00729.warc.gz
en
0.876645
1,613
3.5625
4
Hello, my name is Piotr Lewulis and I am Assistant Professor at the Faculty of Law and Administration at the University of Warsaw, and in today's short presentation I will talk briefly about how the theory of digital evidence collides with actual judicial practice in Poland. Based on the generally accepted definition of digital evidence and on existing forensic standards, all materials of digital origin should be gathered, examined and assessed in accordance with the basic forensic guidelines designed to maintain the credibility of evidence. In Polish criminal proceedings, legally speaking, digital evidence belongs to the category of physical evidence. This is quite typical in itself, and there are no formal restrictions and no specific legal guidelines on its use. It is generally accepted, and expected, that any evidence is assessed in accordance with the indications of the underlying scientific principles; in the case of digital evidence, that would be digital forensics. However, following strict standards of digital forensics may be difficult, costly and time consuming, and, given the prevalence of digital content in contemporary criminal proceedings, it may be hypothesised that such materials are often not preserved and presented in a forensically sound manner. To test this hypothesis, I conducted an empirical study involving an analysis of concluded criminal case files. The analysis included cases of technically sophisticated, so-called cyber crimes, as well as ordinary crimes such as simple fraud, copyright infringement and hate speech. All of these cases were legally concluded between 2016 and 2018, and they were analysed in the first half of 2019. In the analysed cases, various types of digital content were present.
Popularly used digital materials included open-source information, for example from social media and other open websites, emails, and various types of information received from the internal data-processing systems of various business entities. In over 68% of the analysed cases, evidence of broadly understood digital origin was present. However, only in a minority of those cases was the evidence collected as data in a forensically and technically sound manner. In most cases, materials of digital origin were presented as printouts only, and they were not analysed technically, or forensically, at all. Criminal courts in Poland very rarely seek the help of forensic IT experts, and often rely on simple printouts to assess digital evidence. This often goes unchallenged, as the parties to the proceedings very rarely raise doubts as to the credibility or authenticity of digital evidence. Even when such evidence was questioned, in none of the analysed cases did the court deem it inadmissible or not credible. This does not mean that digital evidence in Poland is generally false or unreliable. However, it seems that the principles of digital forensics do not play an important role in current Polish judicial practice. Thank you very much for listening.
<urn:uuid:f76131a3-885c-4f4d-b63a-50590a0e8bdd>
CC-MAIN-2022-40
https://www.forensicfocus.com/webinars/theory-and-practice-of-the-use-of-digital-evidence-in-polish-criminal-court-proceedings/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337480.10/warc/CC-MAIN-20221004054641-20221004084641-00729.warc.gz
en
0.969829
580
2.546875
3
GPS spoofing is a danger that has been lurking for several years, but because we take the technology for granted, it is easy to overlook. Learn more about GPS spoofing.

For several years now, different ways of spoofing have been developed in various fields. One of them is GPS spoofing, which has existed since the popularization of GPS and has generated a large number of attacks on these devices. The targets of these attacks have evolved over time, from identity theft to cyberterrorism. It is therefore worth gathering together what is known about GPS spoofing and the precautions against it. Before looking at how spoofing is performed, we must first know how GPS works.

What is GPS and how does it work?

The Global Positioning System, or GPS, is a satellite navigation technology commonly used by terrestrial travelers to get properly from point A to point B using digital guidance. Like many of the technologies we know today, GPS was primarily intended as a military resource; in the 1980s it was decided to commercialize the system for civilian use. GPS works anywhere in the world, regardless of weather conditions or time of day, and many devices now carry the technology, be it cars, smartphones, etc.

These devices work, as mentioned before, through satellite signals. GPS satellites circle the planet twice a day in very precise orbits, transmitting signal data to the earth. GPS receivers take this data and use triangulation to calculate the exact location of the person or device. Basically, the GPS receiver compares the time a signal was transmitted by a satellite with the time it was received. The time difference tells the GPS receiver how far away the satellite is. With distance measurements from several satellites, the receiver can then compute the user's position and display it on the electronic map we see.

Now, how does GPS spoofing happen?
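The time-difference calculation described above can be sketched numerically. This is a 2D toy illustration, not receiver code: the transmitter positions and timings are invented, and real receivers additionally solve for their own clock error and work in three dimensions.

```python
# Toy 2D illustration of GPS positioning: a range from signal travel time,
# then trilateration from three transmitters at known positions.
C = 299_792_458.0  # speed of light, m/s

def pseudorange(t_transmit: float, t_receive: float) -> float:
    """Distance implied by the signal's travel time (ignores clock error)."""
    return C * (t_receive - t_transmit)

def trilaterate(p1, p2, p3, d1, d2, d3):
    """Solve the two linear equations obtained by subtracting the circle
    equations (x - xi)^2 + (y - yi)^2 = di^2 pairwise."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a2, b2 = 2 * (x3 - x1), 2 * (y3 - y1)
    c2 = d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a1 * b2 - a2 * b1
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

# A signal sent at t = 0 and received 67.3 ms later travelled about
# 20,176 km, roughly the altitude of a GPS satellite.
print(round(pseudorange(0.0, 0.0673) / 1000), "km")

# Receiver at (3, 4) with transmitters at three known points: the three
# measured distances pin the position down exactly.
pos = trilaterate((0, 0), (10, 0), (0, 10), 5.0, 65**0.5, 45**0.5)
print(pos)
```

A spoofer exploits exactly this arithmetic: by transmitting signals with attacker-chosen timing, it feeds the receiver consistent but false pseudoranges, and the same trilateration dutifully returns a false position.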
While GPS technology has brought many benefits to people over the years, it has also proven to be vulnerable, and dangerously so. GPS spoofing occurs when someone uses a transmitter to send a fake GPS signal to a receiving antenna in order to override a legitimate GPS satellite signal. Most navigation systems are designed to lock onto the strongest GPS signal, so the fake signal overrides the weaker, but legitimate, satellite signal.

There is a similar attack, GPS signal jamming, but the two should not be confused: GPS jamming occurs when an attacker blocks GPS signals altogether. The sale or use of GPS jamming tools that can block communications is prohibited within the United States. While GPS jamming may appear to be the greater threat, GPS spoofing is the more insidious risk for businesses. GPS spoofing allows hackers to interfere with navigation systems without operators noticing: fake GPS feeds cause drivers, ship captains and other operators to go off course without any awareness of it. The companies most exposed to GPS spoofing are transportation companies, cab services, construction companies, shipping companies, etc.

Is there more than one type of GPS spoofing? Yes, and they are as follows:
- GPS signal spoofers: In this class, a GPS signal generator connected to an RF front-end is used to mimic authentic GPS signals. The signals generated by this class of spoofer are not synchronized with the true GPS signals, so they appear as noise to an operational receiver in tracking mode (even if the power of the spoofer is higher than that of the authentic signals). Nevertheless, this type of spoofer can effectively deceive commercial GPS receivers, especially if the spoofing signal strength exceeds that of the authentic signals.
- Receiver-based spoofers: This type of spoofer is difficult to discriminate from authentic signals and is much more sophisticated than the previous class. The biggest challenge in realizing it is delivering the signals to the victim's receiver with the correct delay and signal strength. The spoofing power should be slightly higher than that of the authentic signal in order to successfully fool the target receiver, but not much higher than ordinary GPS signal power.
- Sophisticated receiver-based spoofers: This class is the most complicated and effective of the spoofing classes. It assumes the attacker knows the cm-level position of the center of the target receiver's antenna, so that the code and carrier of the spoofing signal can be fully synchronized with those of the authentic signals at the receiver. This type of spoofer can use multiple transmit antennas to defeat direction-of-arrival anti-spoofing techniques.

How can we prevent these attacks?

There is certainly no way to completely protect GPS signals, because they are transmitted by satellites in orbit; securing the signals themselves is beyond today's technology. However, there are techniques that can mitigate these attacks. One that is already available, though deployed only for fairly large GPS installations, is the GPS firewall. This device is placed between the GPS receiver and its external antenna and constantly compares the GPS signal against a set of rules to eliminate false signals, so that only the real one reaches the receiver. As for mobile GPS, smartphone chip manufacturers may at some point build a sort of GPS firewall directly into the devices' navigation receivers, but that is likely still a couple of years away. Unfortunately, awareness of the subject still has to grow before there is real demand for it.
<urn:uuid:8b980be6-304d-4ade-a4b1-2402b3cf862a>
CC-MAIN-2022-40
https://demyo.com/gps-spoofing-its-vulnerabilities-how-to-prevent-it/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337889.44/warc/CC-MAIN-20221006222634-20221007012634-00729.warc.gz
en
0.954549
1,150
3.46875
3
Security Awareness Training

Most likely you were taught the dangers of traffic early in life: things like not running onto the street and looking both ways before you cross it. It may not have made sense to you then; after all, as children many of us felt that rules spoiled our fun. We weren't really worried about safety. Later in life many of us learned to drive a car. This meant more rules and procedures to follow: signal lights, seat belts, stopping at a red light, not driving too fast, and so on. We know that if we follow these rules, traffic will be safer for us and everyone else. Despite being constantly reminded of these rules and knowing the dangers of driving, accidents can still happen. We forget ourselves only for a moment, or someone else does and we get caught up in it. Every day we head out knowing the risk and doing everything we can to avoid it.

Cyber security awareness is much like driving. Cyberspace can be dangerous and full of traps, mostly due to hackers and spies who want to get their hands on data or information in the hope of exchanging it for cash: if not by selling it to others, then by holding it to ransom and getting you or your workplace to pay to have it back. This helps them fund all sorts of other nefarious affairs. To navigate the Internet more safely we need to learn the rules and procedures that keep us safe. The more we know about the risks, the better we get at avoiding them. This is why your workplace wants you to learn about cyber security: not to create a new IT expert, but to help you navigate the Internet safely and protect valuable or sensitive data from getting into the wrong hands.
- Security awareness training teaches the rules and risks of the Internet - When we are aware of the risks and best practices, we are less likely to get into trouble - Security awareness training makes us safer online, and helps us protect sensitive data and systems from cyber attacks
<urn:uuid:7d4aa108-7f3f-4e50-9f09-8824f7eb0ffb>
CC-MAIN-2022-40
https://awarego.com/videos/security-awareness-training/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334514.38/warc/CC-MAIN-20220925035541-20220925065541-00129.warc.gz
en
0.969705
425
2.890625
3
The National Institute of Standards and Technology is updating the federal Secure Hash Standard by proposing two new algorithms for the approved list that could be easier to use on 64-bit platforms. The Secure Hash Algorithms are cryptographic tools that create a unique message digest, or fingerprint, that can be used to verify that the contents of a digital document have not been altered. The approved algorithms are contained in Federal Information Processing Standard 180. NIST has released a draft of FIPS 180-4 with the additional algorithms for comment. If approved, the new standard would replace the current FIPS 180-3, which was approved in October 2008.

Running a hash algorithm against a digital message creates a string of bits of a specific length that is unique to that message. “The hash algorithms specified in this standard are called secure because, for a given algorithm, it is computationally infeasible 1) to find a message that corresponds to a given message digest, or 2) to find two different messages that produce the same message digest,” the document states. “Any change to a message will, with a very high probability, result in a different message digest,” which would result in a verification failure.

The current standard specifies five secure hash algorithms: SHA-1, plus SHA-224, SHA-256, SHA-384 and SHA-512, the latter four collectively known as SHA-2. Each algorithm produces a message digest of a specific length: SHA-1 produces a digest of 160 bits, SHA-224 one of 224 bits, and so on. The updated standard would add SHA-512/224 and SHA-512/256, which are based on the SHA-512 algorithm but produce a truncated output of 224 or 256 bits, respectively.
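The fixed digest lengths and the fingerprint property quoted above are easy to see with Python's hashlib module (a generic illustration, not NIST's reference code):

```python
import hashlib

msg = b"The quick brown fox jumps over the lazy dog"

# Digest lengths match the algorithm names in FIPS 180:
# SHA-1 -> 160 bits, SHA-224 -> 224, SHA-256 -> 256, SHA-512 -> 512.
assert hashlib.sha1(msg).digest_size * 8 == 160
assert hashlib.sha224(msg).digest_size * 8 == 224
assert hashlib.sha256(msg).digest_size * 8 == 256
assert hashlib.sha512(msg).digest_size * 8 == 512

# Any change to the message yields, with overwhelming probability,
# a completely different digest.
original = hashlib.sha256(msg).hexdigest()
tampered = hashlib.sha256(b"The quick brown fox jumps over the lazy cog").hexdigest()
assert original != tampered

# Verifying a document is just "recompute the digest and compare".
def verify(document: bytes, expected_hex: str) -> bool:
    return hashlib.sha256(document).hexdigest() == expected_hex

assert verify(msg, original)
assert not verify(msg + b"!", original)
```

Note that recent Python builds can also expose the truncated variants discussed in the article (for example via `hashlib.new("sha512_256")`) when the underlying OpenSSL supports them.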
They are being added because they might be a more efficient alternative to using SHA-224 or SHA-256 on platforms that are optimized for 64-bit operations, NIST officials said in introducing the proposed standard. Other algorithms based on SHA-512 could be specified in the future as the need arises. The new standard also removes a requirement that a pre-processing step called padding be done before hash computation begins. Padding is the addition of bits to a message so that its length is a multiple of 512 or 1,024 bits, depending on the algorithm being used. In FIPS 180-3, padding was inserted before hash computation. Under FIPS 180-4, padding can be inserted before hash computation begins or at any other time during the hash computation before processing the message blocks containing the padding. NIST’s website offers examples of the implementation of the Secure Hash Algorithms. NIST is in the process of a multiyear competition to select the next secure hash algorithm — SHA-3 — which is expected to be chosen in 2012. SHA-3 will augment and eventually replace the algorithms now specified in FIPS 180-3 or 180-4. The competition for SHA-3 was opened in 2007 after weaknesses were discovered in the existing algorithms. Despite the weaknesses, the algorithms have not yet been cracked. Last month, the field of possible new algorithms was whittled down to the final five from an initial field of 51. Comments on the draft standard should be sent by May 12 to firstname.lastname@example.org with the phrase “Comments on Draft FIPS 180-4” in the subject line.
<urn:uuid:8fc2e936-da59-48e9-aeb2-33c948e37948>
CC-MAIN-2022-40
https://gcn.com/cybersecurity/2011/02/2-secure-hash-algorithms-set-for-64-bit-platforms/282884/?oref=gcn-next-story
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335304.71/warc/CC-MAIN-20220929034214-20220929064214-00129.warc.gz
en
0.899886
729
2.5625
3
Digital forensics is a forensic science focusing on data found online and on digital devices. In today's digital world, digital forensics is a critical part of effective online investigations, allowing investigators to collect data, gather evidence, recover deleted files, and access content on the dark web. When applying digital or cyber forensics to an investigation, efficacy relies on a balance between best-practice techniques and technology. One of the main issues investigators encounter when attempting to gather evidence from the internet is the sheer volume of information available. The surface web on its own includes massive amounts of data from websites, forums, and social media. Even more content and data lie in the deep and dark web, often used by threat actors to conduct illegal activity. This makes dark web intelligence both difficult to collect and absolutely invaluable for investigations. Investigators require access to the information contained on the dark web in order to follow the trail of criminals operating through it and to gain a greater understanding of their movements and motivations. This makes dark web monitoring a crucial aspect of collecting evidence in digital investigations that require advanced cyber security intelligence. Digital forensic tools in the form of internet monitoring software have the capacity to collect data from a wide range of sources and provide dark web monitoring services. These tools allow investigators to monitor the dark web, identify cyber threats coming from all layers of the internet, and generate user alerts when significant information is identified. Digital forensic tools can also collate this information into reports which may be used for actionable insights or as evidence in trials.
These tools are not just effective for collecting and analyzing data found on the dark web, but can access any information located on sources which are publicly available, also known as Open Source Intelligence or OSINT. In addition to gathering information, web application monitoring tools can even be used to preempt crimes. Advanced monitoring tools, or OSINT tools, have the capability of analyzing content from OSINT websites, such as various social media platforms. These tools use social analytics features to gain social media insights, draw conclusions, and provide information which can be used proactively to prevent crimes such as cyber threats, fraud, and even terror attacks before they occur. The most advanced of these monitoring tools operate on highly sophisticated algorithms in order to identify information on message boards and social media platforms of various languages. Additionally, they are able to present data in readable forms such as graphs or generate alerts when relevant information crops up to enable faster response times in time-sensitive investigations and cases that rely on cyber intelligence. Having such crucial information at their fingertips allows law enforcement to take proactive steps, not only in crime investigation, but in crime prevention as well.
<urn:uuid:dc1cc3da-3303-426f-bb28-6c46f1898898>
CC-MAIN-2022-40
https://cobwebs.com/effective-investigations/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337398.52/warc/CC-MAIN-20221003035124-20221003065124-00129.warc.gz
en
0.922189
537
3.328125
3
What is Cybersecurity?

UK-based internet security company IT Governance tells us that “Cyber security is the application of technologies, processes, and controls to protect systems, networks, programs, devices and data from cyber attacks.” A comprehensive cybersecurity plan aims to reduce the risk of cyber attacks and protect against the unauthorized exploitation of operating systems, sensitive data, networks, and technologies. Every business that relies on internet services must build a security culture to improve business and consumer trust.

Has COVID-19 Affected Cybersecurity?

Cybercrime, which includes everything from theft or embezzlement to data hacking and destruction, is up 600% due to the COVID-19 pandemic. Nearly every industry has had to embrace new solutions, forcing companies and government agencies to adapt quickly and take cybersecurity strategy very seriously.

Do Small Business Owners Need to Worry About Cybersecurity?

The answer is a resounding, unequivocal YES! Everyone who logs on to a computer needs to worry about cybersecurity. Many small business owners believe they are too small for hackers to bother with and don't have enough data to warrant a breach. This is not true. Small businesses are not just one target among many; they are a favorite target.

Small Business Cybersecurity Statistics That Should Scare You

A 2021 report from fundera by nerdwallet revealed that:
- 43% of cyber attacks target small businesses.
- 60% of small businesses that fall victim to a cyber attack go out of business within six months.
- Cybercrime costs small and medium businesses more than $2.2 million a year.
- There was a 424% increase in new small business cyber breaches last year.
- 3 out of 4 small businesses say they don't have the personnel to address IT security.
- 54% of small businesses think they're too small for a cyber attack.
- Small businesses spend an average of $955,429 on restoring regular business in the wake of successful attacks.
- 50% of small and mid-sized businesses reported suffering at least one cyber attack last year.
- 91% of small businesses don't have cyber liability insurance.

Why are Small Businesses a Favorite Target for Hackers?

There are several reasons small businesses become preferred targets for cybercriminals. First, there is the head-in-the-sand thinking we've talked about: small business owners don't believe they can be targets of data breaches, so they don't spend the time or money necessary to defend themselves properly. Don't fool yourself. Even your tiny bits of data (credit card numbers, medical records, bank account information, etc.) are easy to unload on the dark web with the right browser. And the attackers are not just hacking you; they've targeted hundreds, maybe thousands of others, so you do the math.

Don't Have the Time, the Money, the People

Then there is the problem that many people involved in small businesses don't have the resources to implement adequate security systems. And with fewer resources dedicated to the advanced security necessary to protect your small business, some security measures are bound to get overlooked. According to fundera by nerdwallet, 47% of small businesses say they have no understanding of how to protect themselves against cyberattacks, and 3 out of 4 small businesses say they don't have the personnel to address IT security.

Start Small to Go Big

Today's companies are digitally connected to facilitate transactions, manage supply chains and share information. Since larger companies are presumably, though not necessarily, tougher to hack into, cybercriminals target smaller partners as a way into the systems of large companies.

What's at risk? A cyberattack can drastically impact your business.
As cited above, 60% of small businesses that fall victim to an attack shut down within six months of the breach. While that may be the most devastating result of an attack, there are other consequences your business could experience, like losing your customers' confidence and besmirching your brand's reputation. You could be giving away:
- Client lists
- Customer credit card information
- Your company's banking details
- Your pricing structure
- Product designs
- Expansion plans
- Manufacturing processes

And it's not just you that they can affect

These attacks don't just put your company at risk of a data breach. Hackers may use their access to your network as a stepping stone into the networks of other companies whose supply chains you're part of, all very difficult to recover from.

And then there are the bucks

In terms of dollars, Sophos, a British-based security hardware and software company, estimates the average cost of recovering from a ransomware attack was $1.85 million in 2021.

What Type of Cyber Attacks Go After Small Business? 4 Common Security Threats

1. Spyware
Spyware is software that infects your PC or mobile device. It gathers information about you, monitors your browsing and Internet usage habits, and collects other data. It's a total invasion of privacy, and that's some sneaky software. Running quietly in the background, spyware can steal your internet history, contacts, passwords, and even credit card information. In the case of smartphones and tablets, it can steal information such as:
- Incoming/outgoing SMS messages
- Incoming/outgoing call logs
- Contact lists, emails
- Browser history

Spyware is clever, making its way onto your computer without your knowledge or permission, attaching itself to your operating system, and maintaining a presence on your PC. The problem is it can be challenging to detect. If your device is infected, you might see a significant reduction in processor or network connection speeds, increased data usage, and low battery life.
Spyware can infect your system in the same way that other malware does. Here are a few ways spyware can make its way onto your device:
- Security vulnerabilities, like clicking on an unfamiliar link or attachment in an email.
- Installing “useful tools” from third parties. Spyware authors often present their spyware programs as tools worth downloading, like an Internet accelerator, download manager, or hard disk drive cleaner. But be careful: installing this malicious software can result in spyware infection.
- Software bundles. Free software can conceal a malicious add-on, extension, or plug-in.

2. Ransomware
Ransomware is another form of malware. There are several things it can do once it has taken over your computer. Most commonly, your files become encrypted, making them inaccessible to you and your team.

An excellent argument for backing up files

The files can't be decrypted without a key that is known only to the hacker. Users are given instructions for how to pay a fee (usually in the form of cryptocurrency) to get the decryption key. A less common type of ransomware, called leakware or doxware, is when an attacker threatens to publicize sensitive data on the victim's hard drive unless a ransom is paid.

3. Distributed Denial of Service (DDoS) Attacks
DDoS attacks are significant threats to business operations. These attacks target websites and online services, overwhelming them with more traffic than they can handle. This makes the website or service inoperable, resulting in a denial of service to legitimate users or customers.

Downtime is no good for dollars

The targeted websites or services are bombarded with incoming messages, connection requests, or fake packets. Attacks are launched from any number of corrupted computers, from a few to thousands. Since the attack comes from so many different IP addresses simultaneously, a DDoS attack is much more difficult for the victim to locate and defend against.
This is part of what makes DDoS attacks such a problematic cybersecurity threat.

DDoS or something more malicious inside?

Disturbingly, a DDoS attack might be just the tip of the iceberg. These attacks can be used to hide other malicious activity: with your IT security team distracted by the DDoS attack, who knows what else the bad guys have going on in the background?

4. Phishing
Phishing is an online scam where criminals send an email that appears to be from a legitimate company and ask users to provide sensitive information. This type of scam can grant the attacker access to all sorts of valuable data, so you must know how to spot such an attack. Here are a few things you should be on the lookout for:
- The displayed name in the email – a name displayed in the “from” box does not guarantee that the email came from that sender.
- Suspicious links – if the web address you see when you hover over the link doesn't seem to match the sender, don't click. Be wary if an email directs you to a website asking for a login.
- Spelling or grammar mistakes – if it doesn't look or sound right, it's probably not legit.
- Odd salutations – if the contact usually addresses you by name, but the email uses something generic like “Valued Customer” or “Important Client,” this is a red flag.
- Request for sensitive information – if you're asked for confidential information you aren't comfortable sharing, pick up the phone and call a verified number to confirm the request.
- Implied urgency – this is a scare tactic designed to catch you off guard and make you reply when you usually wouldn't. If someone threatens to stop a service without an immediate reply, stop and think about it (and contact your tech nerd).
- Broken images or formatting – if the images or layout of an email seem a bit off, it could be a sign of an attempt to fool you.
- Suspicious domains – malicious emails routinely use domains close to the legitimate one, but not quite right.
For instance, someone may use Capital0ne.com instead of capitalone.com to lure you into providing your credentials.
- Non-standard attachments – if the attached file type is not one you recognize (like .doc for a Word file, .xls for an Excel file, or .pdf for a PDF file), be suspicious (and don't open it).

6 Ways to Protect Your Data and Connected Devices from Cyber Attack
- Make security part of your company culture. Be vocal about the importance of cyber security across the entire organization. Let all departments know that cyber security is a high priority.
- Train your people. This is part of creating a culture of security. Employees can leave your business vulnerable to an attack. Research shows that 43 percent of data loss stems from internal employees who maliciously or carelessly give cybercriminals access to your networks.
- Deploy antivirus software, and keep it updated. You need antivirus software to protect your devices from viruses, spyware, ransomware, and phishing scams. Make sure it is updated regularly to strengthen it or add patches.
- Utilize VPNs and firewalls. These lines of defense can't prevent all types of attacks, but they are highly effective when implemented properly.
- Back up, back up, back up. It's always a good idea to have multiple backups of your business's data. That way, if you're ever the victim of cyber threats, a ransomware attack, a natural disaster, or some other event that restricts your ability to access your data, you'll have a backup plan to protect your small business.
- Limit employee access. It makes sense to segment and limit employee access to only the systems and data they need. If you maintain tight controls over user access, you'll limit the damage that any single user (or compromised account) can do to your network security.

Time to face facts: big business, small business, mom-and-pop shop. If you're in business, you need to make cyber security a part of it.
Chances are high, nearly certain, that you'll be attacked. But with the right security policies and technology, cyberattacks can be caught before they get anywhere near encrypting files and extorting small business leaders. And since basic security practices and risk assessment are likely not your specialty, there is always a Nerd ready to help you with cybersecurity for small businesses. Nerds On Site has a global team of experts with solutions that can protect your business from cyber threats. From first steps to advanced protection, we have you covered. Call us today for more information.
<urn:uuid:2fe87d31-68bb-4da7-9bdb-c23a8945f607>
CC-MAIN-2022-40
https://www.nerdsonsite.com/blog/cybersecurity-for-small-businesses/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334515.14/warc/CC-MAIN-20220925070216-20220925100216-00329.warc.gz
en
0.922796
2,700
2.59375
3
Two-factor authentication is a way to secure your online accounts from unauthorized access. It is also known as multifactor authentication, two-step verification, MFA and 2FA. When signing in with 2FA on, after entering your login credentials you need to verify with a second method that it’s really you signing in. This usually happens through a one-time passcode sent in a text message, a biometric scan (fingerprint, face recognition), a hardware key or an authentication app. The goal of two-factor authentication is to make sure only you have access to a user account or certain information in case your password gets stolen or otherwise compromised.
Why protect your online accounts?
Many user accounts include sensitive personal information such as full name, home address, phone number, credit card details, etc. Online criminals go to great lengths to get this information. They can use it to make money. And guess who pays? Not the criminals. Criminals can use stolen personal details for identity theft. For example, they can use stolen credit card details to buy goods or to take loans in the victim’s name. They can also extort the victim – or just use their Netflix account. Many criminals simply sell stolen information for others to use. There are multiple ways they can profit from stolen information. None of them are pleasant for the victim. This article tells more about why criminals want your personal details.
How two-factor authentication helps you protect your accounts
Often the only thing that protects your online accounts and the information within is a username and password combination. Criminals have their ways of stealing login credentials. You can protect yourself from viruses and malware that try to steal your info from your devices. But there are other threats. Every year hackers steal billions of user credentials from online services in data breaches. Unfortunately, there’s nothing you can do to prevent that.
What you can do is make sure that hackers can’t log in to your accounts with just your password. That’s where two-factor authentication jumps in. Even with your login details, criminals would still need to get through a second barrier that is often accessible only to you. Even if your password gets stolen, your information is still safe. Hackers won’t be able to steal your credit card information or other sensitive stuff. At least not easily. Making things harder for them can make them look for an easier target.
How to enable 2FA in popular online services
As said, criminals can use stolen information in different ways. Any online account that includes your personal details should be secured with 2FA when possible. Here are instructions on how to enable 2FA in popular online services.
How to enable 2FA on Facebook
Go to: Settings & Privacy -> Settings -> Security and Login -> Two-factor authentication -> Use two-factor authentication -> Edit
How to enable 2FA on Google
Go to: myaccount.google.com (or access your Google account through your Android device) -> Security -> 2-step verification -> Get started. When switched on, your Google account, Gmail and any other Google service are protected.
How to enable 2FA on Instagram
Go to: Settings -> Security -> Two-Factor Authentication -> Get started
How to enable 2FA on Twitter
Go to: Settings and Privacy -> Account -> Security -> Two-factor authentication
How to enable 2FA on Amazon
Go to: Your account -> Login & security -> Two-Step Verification (2SV) Settings -> Get Started
For any other service not listed above, you’ll most likely find how to enable 2FA under security or account settings. Not all services have the option to use 2FA, however.
Enabling 2FA on F-Secure services
For detailed instructions on enabling 2FA on your My F-Secure account, follow this link >>
For detailed instructions on enabling 2FA on Protection Service for Business, follow this link >>
What is the best two-factor authentication method?
First of all, any two-step verification method is better than none. Using SMS for two-step verification, however, can be risky. Hackers can redirect your phone number so that your one-time codes fall directly into their hands. They can also intercept your SMS messages. Therefore, it is safer to use an authenticator app. You can use, for example, Google Authenticator, Microsoft Authenticator or Authy.
For maximum security, combine two-factor authentication with strong and unique passwords
Keep in mind that 2-step verification isn’t an impassable barrier. Regardless of having 2-step verification or not, you should change a compromised password immediately. And this applies to all the accounts you are using that same password on. To maximize your protection, use strong and unique passwords on all your accounts. Top that with 2-step verification, and you have good protection against account takeover and online identity theft. Using unique and strong passwords, however, is difficult without a password manager. F-Secure ID PROTECTION lets you store all your passwords securely, create new ones, auto-fill them when needed and access them from all your devices. You can also set it to monitor your data online, so when a service you use gets breached, you will be alerted. This way you can change your password before hackers have time to use it.
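For the curious, the one-time codes produced by the authenticator apps mentioned above are not magic: they implement the TOTP algorithm (RFC 6238), an HMAC over the current 30-second time window computed from a shared secret. A minimal sketch:

```python
import hmac
import hashlib
import struct
import time

def totp(secret: bytes, for_time=None, digits: int = 6, step: int = 30) -> str:
    """Compute a TOTP code per RFC 6238 (HMAC-SHA1 over a time counter)."""
    counter = (int(time.time()) if for_time is None else for_time) // step
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation from RFC 4226
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: at Unix time 59, the 8-digit code is 94287082.
print(totp(b"12345678901234567890", for_time=59, digits=8))  # → 94287082
```

Because the app and the server share only the secret and a clock, the code is verifiable without any network round trip, which is why a stolen password alone no longer suffices.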
<urn:uuid:3cc9442b-cd10-422f-bb09-a2395ebb9b21>
CC-MAIN-2022-40
https://blog.f-secure.com/two-factor-authentication-what-it-is-and-how-to-enable-it/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334912.28/warc/CC-MAIN-20220926144455-20220926174455-00329.warc.gz
en
0.899452
1,102
3.4375
3
Data enrichment is an essential part of making data “analytics-ready,” and the practice has evolved greatly over the past decade. Data enrichment provides additional information and context within the dataset to allow analysts and data scientists to deliver more meaningful insights to the business for fast, highly confident actions derived from the data. There are multiple ways to perform data enrichment. All involve adding new fields and data to the dataset, but the approaches differ based on the original form of the data and the source of the additional data. Let’s explore data enrichment, how to do it, and the tools that make it much faster and easier. Data enrichment refers to the process of appending or otherwise enhancing collected data with relevant context obtained from additional sources. This enrichment can be performed by adding new calculated fields, integrating disparate data from other internal systems, or appending third-party data from external sources. Enriched data is a valuable asset for any organization because it becomes more useful and insightful. A simple form of data enrichment is to add new fields that are derived from the existing data. Another common form of data enrichment is in customer or marketing analytics, where additional information about customers and marketing actions is added to see what was successful and what was not. This technique can also be used in other analytics, such as operational analytics. Data cleansing is the process of detecting and/or removing corrupt or inaccurate records from a set of data. With data cleansing, you can identify data that is incomplete, incorrect, inaccurate, or irrelevant, and apply functions and/or algorithms to address these issues with the data. Data cleansing is related to data enrichment in that both processes improve the data. However, with data cleansing, you are fixing the data, while with data enrichment, you are enhancing it.
The most common use case example for data enrichment is adding demographic data that comes from other internal systems or external (3rd party) sources to customer data. Specific examples include:
There are multiple ways data can be enriched, including appending data, segmentation, deriving attributes, imputation, entity extraction, and categorization. Let’s explore how Datameer can help you with each. By appending data, you bring multiple data sources together to create a more holistic data set than any one data source. This helps you generate more accurate analytics or explore more variables to use as features to improve machine learning models. Appended data can be both internal and external data. Datameer provides two features that make it easy for anyone (programmer or non-programmer) to append data:
File Upload to Enrich Data in Datameer
No-code Data Blending in Datameer
Data segmentation allows you to divide or organize a dataset according to specific field values in the data. Segmentation is very commonly done using demographic, geographic, technology, or behavior values. This is often used in marketing use cases for targeting. The first step in segmentation is typically to append data, which, as explained above, is easy to perform in Datameer. The next step is to organize the data to your needs, which is also very easy to do in Datameer with different no-code operations:
No Code Data Aggregation in Datameer
Derived attributes are fields added to a dataset that are calculated from other fields. The most commonly thought-of derived attribute is Age – calculated by subtracting the birthdate from the current date. Other derived attributes include date/time conversions (hour, day, month, quarter), time periods, time between, counts, and classifications (time bands, age bands, etc.). Aggregation operations, which we showed above, are easy to do in Datameer and are essential to deriving count attributes.
For other types of derived attributes, Datameer offers:
No-code, Wizard-driven Pivot in Datameer
Some consider data imputation part of data cleansing, as it is the process of replacing values for missing or inconsistent data within fields. A prime example is estimating the value of a missing field based on other values. Data cleansing is often the realm of data engineers, who may know a lot about the data but may not know the context in which the analytics are used. Therefore, data imputation may be better suited for transformations performed by data analysts, who better know what the analytics are targeting, and hence is better classified as data enrichment. The easy-to-use Datameer formula builder can help calculate values to fill in for missing or inconsistent values. In addition, Datameer also offers a no-code Replace operation that can be used to easily replace missing or inconsistent values in a dataset. When one is using more complex unstructured or semi-structured data, multiple data values may be encoded within one field. To make the data useful, the values need to be extracted from one field, then exploded out into one or more new columns in the data. For data extraction, Datameer offers two easy-to-use functions:
There are five critical best practices for enriching your data that Datameer helps you follow for highly effective data enrichment:
Data enrichment is an often overlooked yet highly critical part of producing analytics-ready datasets. This is often because when designers decide what data to capture in applications, they are not privy to downstream analytics data requirements. In addition, analytics data needs will always change over time. Therefore, it is critical to have a highly evolved, easy-to-use data transformation tool that allows any team member to transform and enrich data to their specific needs.
This allows the analytics teams to be more responsive to the business, produce highly accurate analytics, and drive greater adoption of analytics. Datameer’s SaaS data transformation platform focuses on the T – transformation – in your modern ELT data stack. Datameer is the industry’s first collaborative, multi-persona data transformation platform that is Snowflake-native. The multi-persona SQL code and no-code UI supports your hybrid team of programmers and non-programmers on a single platform to collaboratively transform and model data. Catalog-like data documentation and knowledge sharing facilitate trust in the data and crowd-sourced data governance. Native integration into Snowflake keeps data secure and lowers costs by leveraging Snowflake’s scalable compute and storage. Are you interested in learning more about Datameer and how it can deliver agility and collaboration for the “T” in your modern ELT data stack without requiring you to add additional resources? Please visit our website to schedule a personalized preview with our team or sign up for a free trial.
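To make the enrichment operations described above concrete, here is a plain-Python sketch (not Datameer's tooling; the records and demographic lookup are invented) of three of them: appending demographic data, deriving an age attribute, and imputing a missing value with the mean:

```python
from datetime import date

# Toy customer records; None marks a missing value to impute.
customers = [
    {"id": 1, "birthdate": date(1990, 5, 1), "monthly_spend": 120.0},
    {"id": 2, "birthdate": date(1985, 11, 23), "monthly_spend": None},
    {"id": 3, "birthdate": date(2000, 2, 14), "monthly_spend": 80.0},
]

# 1. Append third-party data: a (made-up) demographic lookup keyed by id.
demographics = {1: "urban", 2: "suburban", 3: "urban"}
for row in customers:
    row["segment"] = demographics.get(row["id"], "unknown")

# 2. Derive an attribute: age from birthdate.
today = date(2024, 1, 1)  # fixed date so the result is reproducible
for row in customers:
    bd = row["birthdate"]
    row["age"] = today.year - bd.year - ((today.month, today.day) < (bd.month, bd.day))

# 3. Impute: replace a missing spend with the mean of the observed values.
observed = [r["monthly_spend"] for r in customers if r["monthly_spend"] is not None]
mean_spend = sum(observed) / len(observed)
for row in customers:
    if row["monthly_spend"] is None:
        row["monthly_spend"] = mean_spend
```

In practice these steps run inside a transformation tool at scale, but the logic per record is exactly this simple.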
<urn:uuid:e8024356-2899-429b-b09c-2afc66fcaaa2>
CC-MAIN-2022-40
https://www.datameer.com/what-is-data-enrichment/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334912.28/warc/CC-MAIN-20220926144455-20220926174455-00329.warc.gz
en
0.913751
1,397
3.078125
3
Domain Hijacking / Spoofing
Domain Hijacking or Domain Spoofing is an attack where an organization’s web address is stolen by another party. The other party changes the registration of another’s domain name without the consent of its legitimate owner, denying the true owner administrative access. Scammers then use the legitimate web address for any purpose they choose. Domain loss to another person can occur under mundane circumstances, such as when a domain name expires and another person quickly registers it. A true hijack of a domain happens when a domain’s legitimate owner unwittingly loses it. This occurs when they volunteer their Domain Name System (DNS) credentials as a result of a phishing or other social engineering scam. Other DNS hijackings occur when a partnership between people who share access to the DNS registration dissolves, and one party hurries to reset access credentials, locking out the other party. Domain spoofing is a related but separate action. Here, the illegitimate party mimics the website at the true domain, doing whatever they like: destroying the reputation of the true business, collecting credentials and payment card data, sending spam, and basically abusing all domain-related privileges (including email control).
"I responded to an urgent message about the expiration of our domain, but it wound up being a domain hijacking. Our website now shows really embarrassing content and I'm hearing of emails pretending to be me...saying inappropriate things."
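One cheap early-warning signal for a hijacked registration is watching whether your domain still resolves to the addresses you expect. The sketch below is a hypothetical monitor built on the standard library only; real monitoring would also watch WHOIS records, name servers, and TLS certificates:

```python
import socket

def ips_unexpected(resolved: set[str], expected: set[str]) -> bool:
    """True if any resolved address falls outside the expected set."""
    return not resolved <= expected

def domain_looks_hijacked(domain: str, expected_ips: set[str]) -> bool:
    """Resolve the domain and flag unexpected addresses or resolution failure."""
    try:
        _, _, ips = socket.gethostbyname_ex(domain)
    except socket.gaierror:
        return True  # a sudden failure to resolve is itself worth an alert
    return ips_unexpected(set(ips), expected_ips)
```

Run from a scheduled job, a check like this catches a redirected domain hours or days before customers start reporting a defaced site.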
<urn:uuid:fae005e5-2768-49ce-81a6-755e6ccf6784>
CC-MAIN-2022-40
https://www.hypr.com/security-encyclopedia/domain-hijacking-spoofing
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334912.28/warc/CC-MAIN-20220926144455-20220926174455-00329.warc.gz
en
0.906499
313
2.734375
3
Today, organizations rely heavily on data, with a big portion of that data made up of sensitive information. As organizations become the custodians of more and more sensitive information, the frequency of data breaches increases accordingly. In some cases, the origin of a data breach is outside of the organization, with hackers succeeding in evading organizations’ defenses by compromising accounts; while in other cases, data breaches involve internal actors – malicious employees or contractors who intend to harm the organization and who may have legitimate credentials. According to the 2018 Verizon Data Breach Investigations Report (DBIR), the top asset involved in data breaches is the database server (see figure 1). This is not surprising since large volumes of sensitive data are stored in databases, which is why database security is such a crucial factor. The thing is, rather than trying to prevent a data breach, a more realistic goal would be to detect a breach at an early stage. The reason why early breach detection is more practical than breach prevention is that oftentimes, malicious users are already inside the organization, abusing their legitimate permissions to access sensitive data. To quickly detect a potential data breach, we need to be able to identify “early signs” of a breach. The alternative is months or even years of trying to detect a data breach.
The data breach kill chain
Back in 2011, Lockheed Martin defined The Cyber Kill Chain as a model for the identification and prevention of cyber intrusion attacks. It defines the steps taken by an attacker during a typical cyber-attack (Figure 2). The seven steps of the cyber kill chain are as follows:
- Reconnaissance – Get as much information as possible about the target; choose your target.
- Weaponization – Choose the best tool for the task.
- Delivery – Launch an attack using the weaponized bundle on the target.
- Exploitation – Exploit the vulnerability to execute code on the victim’s system.
- Installation – Install the weapon as an access point (e.g., a backdoor).
- Command and Control (C&C) – Enable sending remote commands for persistent access to a target network.
- Actions on Objective – Take action. For example: data destruction, exfiltration or encryption.
We’ve adjusted the cyber kill chain to suit data breaches specifically, presenting an instance where the attacker is already inside the organization or already has access (figure 3). Having been recruited, or accidentally infected and compromised by an external hacker, the attacker continues to the reconnaissance stage; this stage is similar to the reconnaissance stage in the original cyber kill chain, and the goal is to gain as much information as possible about the data or the assets that hold the data (think permissions, structure, etc.). After choosing the target and getting enough information, the attacker will try to obtain access to the data in the exploitation stage. This could be achieved by escalating their privileges. Once access is granted, an attacker will try to acquire sensitive data. This stage could be a large file download or specific access to sensitive data. The last step is exfiltrating the data from the organization.
Protect data in your databases: Defense-in-depth
The principle of defense-in-depth is that layered security improves your overall security posture. If an attack causes one security mechanism to fail, other mechanisms may still provide the necessary security to protect the system. To better protect data in your databases, you need to identify suspicious activity at every step of the potential attack chain. That way, if we fail to detect the attack at a specific step, we’ll still have the chance to catch it in the following steps. It’s always best to uncover an attack as early as possible, preferably before malicious or compromised individuals obtain access to the data.
Imperva CounterBreach looks for suspicious database activities in each step of the kill chain to detect a data breach and uncover risky security practices that are likely to be exploited by attackers. For example, we look at specific events happening at different stages of the attack chain, such as failed logins, broad access to databases, service account abuse, and accessing business-critical data. In the latest version of Imperva CounterBreach – V2.3 – a new algorithm that detects reconnaissance attacks was added. The algorithm uncovers suspicious system table access before any data gets exfiltrated. Reconnaissance Stage: Suspicious system table scan System tables store the metadata of the database. Those tables contain information about all the objects, data types, constraints, configuration options, available resources, information about valid users and their permissions and more. We expect attackers to access system tables as part of the reconnaissance stage of the attack in order to explore the databases in the organization, or even change database permissions. The challenge in detecting reconnaissance attacks using system tables access, with a minimum amount of false positive alerts, rests in the fact that legitimate users access system tables on a regular basis to perform their ad hoc tasks. To perform this task successfully, Imperva CounterBreach combines machine learning logic with domain knowledge and correlates a host of sources and hints to form a holistic picture. Imperva CounterBreach distinguishes between access to sensitive system tables and access to non-sensitive system tables. This classification is taken from our database activity monitoring solution and is based on research conducted in the Imperva Defence Center. Imperva CounterBreach utilizes a host of profiling cycles to study everyday access patterns to the different classes of system tables (as shown in figure 4). 
These cycles include:
- User normal activity – Learn how users typically behave and access system tables
- Database normal activity – Learn how users inside the organization access system tables in a certain database
- Organization normal activity – Learn how sensitive system tables are accessed across the entire domain
- Community normal activity – Use data on system-table access patterns from different customers to build knowledge of globally normal system table activity
We also make use of other hints to identify legitimate access to sensitive tables, separating a legitimate user, like a DBA, from an attacker. Imperva CounterBreach identifies a malicious user by analyzing the number of databases they try to access system tables from, the number of failed logins to databases in the organization in the same time frame, and more. Figure 5 shows a real example of 11 different interactive users from one of our customers. These 11 users comprise 10 DBAs and an attacker (‘E’ in the figure) who attacked the organization on the 16th and 17th. Different colors represent different users and the x-axes represent the different days. The top graph shows, for each day, the number of databases in which each user accessed sensitive system tables that are not normally used by their profiling circles. The bottom graph shows the number of databases that the user failed to log into each day. We can see that the attacker accessed new system tables in several databases on the days of the attack. Combining that information with the fact that they also failed to log into some databases on the days of the attack indicates suspicious activity. Attackers will continue to attempt to steal sensitive data from organizations; this is a given. Our goal is to detect potential breaches as early as possible, before any actual harm is done or any sensitive data is exposed.
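The correlation visible in Figure 5 can be reduced to a toy rule. The sketch below is purely illustrative and is not Imperva's actual algorithm (which profiles behavior with machine learning); it just shows the shape of the signal: flag a user who, on the same day, touches sensitive system tables across unusually many databases and also racks up failed logins:

```python
def is_suspicious(new_systable_dbs: int, failed_login_dbs: int,
                  db_threshold: int = 3, login_threshold: int = 2) -> bool:
    """Toy rule: unfamiliar sensitive-system-table access across many
    databases, combined with same-day failed logins, is suspicious."""
    return new_systable_dbs >= db_threshold and failed_login_dbs >= login_threshold

print(is_suspicious(1, 0))  # a DBA's routine day → False
print(is_suspicious(6, 4))  # the attacker "E" on the days of the attack → True
```

Requiring both signals together is what keeps false positives down: DBAs routinely trip one of them, but rarely both at once.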
Uncovering suspicious system table access is a critical step to catching attackers in the reconnaissance stage where they still try to find their way in. If an attack was not detected in the reconnaissance stage, Imperva CounterBreach can still identify anomalies along the data breach attack chain, using the defense-in-depth approach. With domain knowledge expertise and machine learning algorithms, this approach allows you to detect threats at all stages of a data breach. Try Imperva for Free Protect your business for 30 days on Imperva.
<urn:uuid:a7c72054-e2e2-4392-aa2b-3e27c7959cad>
CC-MAIN-2022-40
https://www.imperva.com/blog/the-data-breach-kill-chain-early-detection-is-key/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337404.30/warc/CC-MAIN-20221003070342-20221003100342-00329.warc.gz
en
0.900801
1,572
2.78125
3
Connectivity the Key to Specialized Optimizing Machines (SpectrumIEEE)
Research into developing analog optimizers—machines that manipulate physical components to determine the optimized solution—is providing insight into what is required to make them competitive with traditional computers. The researchers found that the higher the connectivity an analog optimizer has between its components, the better its performance. To that end, the researchers compared their machine, a coherent Ising machine, to a quantum annealer built by quantum computing company D-Wave. The main difference between the two is their connectivity. D-Wave’s quantum annealer uses qubits to model the problem being optimized, which, as a general rule, makes it possible to directly connect only neighboring qubits and limits the overall connectivity. The coherent Ising machine, however, uses pulses of light that can be shuttled around, making it possible for any two pulses of interest to directly interact. By testing the two systems on two types of optimization problems (the Sherrington-Kirkpatrick model and MAX-CUT), the researchers showed that for scenarios with high connectivity, the coherent Ising machine outperformed the quantum annealer. The takeaway isn’t that Ising machines are better than quantum annealers, however. It’s less about choosing between a coherent Ising machine or a quantum annealer, and more about examining how many connections are within those machines.
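For context, MAX-CUT — one of the two benchmarks mentioned — asks for a split of a graph's vertices into two groups that cuts as many edges as possible. A brute-force toy instance makes the problem concrete (real benchmark instances are far larger, which is exactly where hardware connectivity starts to matter):

```python
from itertools import product

# A 4-vertex graph: a square (0-1-2-3-0) plus one diagonal (0-2).
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]

def cut_size(assignment):
    """Number of edges whose endpoints land in different groups."""
    return sum(1 for u, v in edges if assignment[u] != assignment[v])

# Exhaustively try every 2-coloring of the four vertices.
best = max(product([0, 1], repeat=4), key=cut_size)
print(best, cut_size(best))  # the alternating split cuts all four square edges
```

Brute force scales as 2^n, which is why specialized hardware like Ising machines and annealers, which encode the same objective in physical couplings, is attractive for large instances.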
<urn:uuid:057479ca-62e2-409f-bdef-9bb01be55e57>
CC-MAIN-2022-40
https://www.insidequantumtechnology.com/news-archive/connectivity-key-specialized-optimizing-machines/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337404.30/warc/CC-MAIN-20221003070342-20221003100342-00329.warc.gz
en
0.92762
294
2.84375
3
As an industry that prides itself on its advancements in construction, building design, and architecture, it may be surprising that the world of technological design isn’t exactly open to innovation. In fact, the construction, design, and architecture industries have often resisted modernization and new approaches in favor of old, faithful methods. But here’s the thing about innovation and advancement: no one can stop it from evolving age-old processes and completely transforming the way things are done. So, despite the detractors, modern technological advances have brought on a variety of new design tools that have made great additions to the visionary’s toolbox, changing the way structures and ideas are conceptualized, formulated, and constructed.
New Construction Technology Achievements That Are Heralding Growth And Opportunities
These innovations are enhancing the experience, augmenting profit margins, and above all, making construction sites safer than ever before. Perhaps this is why 60% of construction companies have R&D departments to explore these new technologies. Here are some of the most important technologies to hit the building design and construction industries:
- Exoskeletons
Now the laborers in the construction industry can suit up to enhance their safety and efficiency. Exoskeletons are metal suits with motors that provide muscle power. The wearer can lift heavy objects with ease, reducing the chances of on-site injuries and boosting productivity. The market for this construction technology is predicted to reach $1.8 billion by 2025, as per a study published by ABI Research.
- Robotics
2020 will see an increased dependency on robotics in the construction, architecture, and design industry. Now, robots perform many duties such as laying roads and bricks, demolition activities, cleaning jobs, etc. The bottom line is that this technology is putting fewer humans at risk.
They are a lot more expendable and cost less than manual labor, and above all, robots leave no room for human error. According to a report published by the World Economic Forum, this year the use of robots in the construction industry will be at an all-time high.
- Building Information Modeling
This innovation is helping the everyday tech architect create 3D models, instead of the conventional 2D models they have depended on for decades. BIM allows people to develop efficient plans, bring them to life, and manage the infrastructure during the construction phase more efficiently. The resulting models are incredibly detailed and have become standard in the industry.
- AR And VR Architecture Software
Virtual Reality and Augmented Reality have been helping professionals in a variety of industries such as health care, automobile, etc. And now, the construction industry is also harnessing the powers of AR and VR architecture software for efficient results. From pinpointing errors to providing a virtual walk-through of a site and improving BIM visualizations, VR, AR, MR, and other wearable technologies have a lot to offer.
- 3D Printing
You will be amazed to know that now you can print a livable house in less than 24 hours. Up until a few years ago, building a basic cabin would be several months’ hard work, but now all you need is a 3D printer and a day to get the job done right. And 2020 will see more 3D printing in action, to get more done in less time – while also saving labor costs, material costs, and a lot more.
There have been many revolutionary changes in the way the architecture and construction industries operate. As more and more advancements in technology offer better, smarter, and faster ways to get things done, it’s time for professionals to open up to what they can achieve. All in all, we have a lot to look forward to in 2020 and beyond.
<urn:uuid:dc67a1b3-beec-4b04-bd65-542c3d947d29>
CC-MAIN-2022-40
https://www.consultcra.com/emerging-technologies-construction/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337516.13/warc/CC-MAIN-20221004152839-20221004182839-00329.warc.gz
en
0.931353
794
2.734375
3