What is the Cyber Kill Chain?

The cyber kill chain is an adaptation of the military's kill chain, a step-by-step approach to identifying and stopping enemy activity. Originally developed by Lockheed Martin in 2011, the cyber kill chain outlines the stages of several common cyberattacks and, by extension, the points at which the information security team can prevent, detect or intercept attackers.

The cyber kill chain is intended to defend against sophisticated cyberattacks, also known as advanced persistent threats (APTs), in which adversaries spend significant time surveilling and planning an attack. Most commonly these attacks combine malware, ransomware, Trojans, spoofing and social engineering techniques to carry out their plan.

8 Phases of the Cyber Kill Chain Process

Lockheed Martin's original cyber kill chain model contained seven sequential steps:

Phase 1: Reconnaissance
During the Reconnaissance phase, a malicious actor identifies a target and explores vulnerabilities and weaknesses that can be exploited within the network. As part of this process, the attacker may harvest login credentials or gather other information, such as email addresses, user IDs, physical locations, software applications and operating system details, all of which may be useful in phishing or spoofing attacks. Generally speaking, the more information the attacker gathers during Reconnaissance, the more sophisticated and convincing the attack will be and, hence, the higher its likelihood of success.

Phase 2: Weaponization
During the Weaponization phase, the attacker creates an attack vector, such as remote access malware, ransomware, a virus or a worm, that can exploit a known vulnerability.
During this phase, the attacker may also set up back doors so that they can continue to access the system if their original point of entry is identified and closed by network administrators.

Phase 3: Delivery
In the Delivery step, the intruder launches the attack. The specific steps taken depend on the type of attack they intend to carry out. For example, the attacker may send email attachments or a malicious link to spur user activity that advances the plan. This activity may be combined with social engineering techniques to increase the effectiveness of the campaign.

Phase 4: Exploitation
In the Exploitation phase, the malicious code is executed within the victim's system.

Phase 5: Installation
Immediately following the Exploitation phase, the malware or other attack vector is installed on the victim's system. This is a turning point in the attack lifecycle, as the threat actor has entered the system and can now assume control.

Phase 6: Command and Control
In Command and Control, the attacker uses the malware to assume remote control of a device or identity within the target network. In this stage, the attacker may also move laterally throughout the network, expanding their access and establishing more points of entry for the future.

Phase 7: Actions on Objective
In this stage, the attacker takes steps to carry out their intended goals, which may include data theft, destruction, encryption or exfiltration.

Over time, many information security experts have expanded the kill chain to include an eighth step: Monetization. In this phase, the cybercriminal focuses on deriving income from the attack, whether through a ransom paid by the victim or by selling sensitive information, such as personal data or trade secrets, on the dark web.

Generally speaking, the earlier the organization can stop the threat within the cyberattack lifecycle, the less risk the organization will assume.
Attacks that reach the Command and Control phase typically require far more advanced remediation efforts, including in-depth sweeps of the network and endpoints to determine the scale and depth of the attack. As such, organizations should identify and neutralize threats as early in the lifecycle as possible in order to minimize both the risk of an attack and the cost of resolving an event.

Evolution of the Cyber Kill Chain

As noted above, the cyber kill chain continues to evolve as attackers change their techniques. Since the release of the model in 2011, cybercriminals have become far more sophisticated in their techniques and more brazen in their activity. While still a helpful tool, the cyberattack lifecycle is far less predictable and clear-cut today than it was a decade ago. For example, it is not uncommon for attackers to skip or combine steps, particularly in the first half of the lifecycle, giving organizations less time and opportunity to discover and neutralize threats early. In addition, the prevalence of the kill chain model may give attackers some indication of how organizations structure their defenses, which could inadvertently help them avoid detection at key points within the attack lifecycle.

Critiques and Concerns Related to the Cyber Kill Chain

While the cyber kill chain is a popular framework from which organizations can begin to develop a cybersecurity strategy, it contains several important and potentially devastating flaws. One of the most common critiques is that the model focuses on perimeter security and malware prevention. This is an especially pressing concern as organizations shift away from traditional on-prem networks in favor of the cloud.
Likewise, the acceleration of the remote work trend and the proliferation of personal devices, IoT technology and even advanced applications like robotic process automation (RPA) have exponentially increased the attack surface for many enterprise organizations. This means that cybercriminals have far more points of access to exploit, and companies have a more difficult time securing each and every endpoint.

Another potential shortcoming of the kill chain is that it is limited in the types of attacks it can detect. For example, the original framework cannot detect insider threats, which are among the most serious risks to an organization and among the attack types with the highest rates of success. Attacks that leverage credentials compromised by unauthorized parties also cannot be detected within the original framework.

Web-based attacks may also go undetected by the cyber kill chain framework. Examples include cross-site scripting (XSS), SQL injection, DoS/DDoS and some zero-day exploits. The massive 2017 Equifax breach, which occurred in part because a known software vulnerability went unpatched, is a high-profile example of a web attack that succeeded due to insufficient security.

Finally, while the framework is intended to detect sophisticated, highly researched attacks, the cyber kill chain often misses attackers who do not conduct significant reconnaissance. For example, those who use a "spray and pray" technique often avoid carefully laid detection snares by pure happenstance.

Role of the Cyber Kill Chain in Cybersecurity

Despite some shortcomings, the cyber kill chain plays an important role in helping organizations define their cybersecurity strategy.
As part of this model, organizations must adopt services and solutions that allow them to:

- Detect attackers within each stage of the threat lifecycle with threat intelligence techniques
- Prevent access by unauthorized users
- Stop sensitive data from being shared, saved, altered, exfiltrated or encrypted by unauthorized users
- Respond to attacks in real time
- Stop lateral movement of an attacker within the network
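The phase-by-phase defensive posture described above can be sketched as a simple lookup table. The control examples here are illustrative assumptions, not CrowdStrike's recommendations:

```python
# Hypothetical mapping of kill-chain phases to example defensive controls.
KILL_CHAIN = {
    1: ("Reconnaissance", "limit public footprint; watch for credential harvesting"),
    2: ("Weaponization", "threat intelligence on known exploit kits"),
    3: ("Delivery", "email filtering; user awareness training"),
    4: ("Exploitation", "patching; endpoint protection"),
    5: ("Installation", "application allow-listing; EDR"),
    6: ("Command and Control", "egress filtering; network traffic analysis"),
    7: ("Actions on Objective", "data loss prevention; encryption of data at rest"),
    8: ("Monetization", "dark-web monitoring; incident response"),
}

def phases_before(phase):
    """Earlier interception means less risk: list the phases up to a given point,
    i.e. the opportunities a defender has had to break the chain."""
    return [name for p, (name, _) in sorted(KILL_CHAIN.items()) if p <= phase]

print(phases_before(3))  # ['Reconnaissance', 'Weaponization', 'Delivery']
```

The point of the lookup is the ordering: stopping an attack at Delivery means the defender only had to succeed at one of three opportunities, while an attack that reaches Command and Control has already survived six.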
*This blog has been updated as of February 21, 2021, with relevant content.

A watering hole attack is a method in which the attacker seeks to compromise a specific group of end users, either by creating new sites that would attract them or by infecting existing websites that members of that group are known to visit. These attacks have been adopted by criminals, APT groups and nation-states alike, and their numbers are rising. The goal is either to swipe username and password combinations in the hope that the victim reuses them, or to infect a victim's computer and gain access to the network of the victim's employer. Many consider these attacks an alternative to spear phishing, but they are quite different: watering hole attacks are still targeted, yet they cast a wider net and trap more victims than the attacker's original objective.

What is a "Watering Hole" Attack?

Phishing is like giving random people poisoned candy and hoping they eat it; a watering hole attack is like poisoning the village water supply and waiting for them to drink from it. To a lion, the watering hole is more than just a place to get hydrated – it's the perfect place to ambush unsuspecting prey. For the energy-conserving predator, lying in wait for victims to gather is much easier than the usual tracking-and-attacking method. To a hacker, the game plan is largely the same: infect a website typically frequented by members of a specific group (be it a large enterprise, religious group or organization) and wait. When the "prey" logs on, the implanted malware can compromise the end user's computer and gain access to their network. Unlike the antelope, though, a cyberattack victim may not realize they've been taken down until much, much later.
Because attackers create new sites or compromise legitimate websites and applications that aren't blacklisted – often using zero-day and obfuscated exploits with no antivirus signatures – the attack success rate remains high. While not the average modus operandi of a hacker, the watering hole attack is particularly nefarious because it is difficult to detect and relies on social engineering, taking advantage of human error.

Who Has Been Affected by Watering Hole Attacks?

The victim set is diverse: watering hole attacks have been used by everyone from the Chinese government against political dissidents, to foreign APTs against US nuclear scientists, to industrial espionage against US/UK defense contractors, to attempts to steal COVID-19 research by targeting COVID-19 researchers. One of the more sophisticated watering hole attacks of recent years was uncovered by Google's Project Zero security team: it attracted users of a particular group to websites and an Android application, and employed four zero-days.1 Another, tracked by Kaspersky Lab, was a much less sophisticated but still successful watering hole that combined a website, malicious Java and a phony Adobe Flash update pop-up to trick a particular group of people.2

How Does a Watering Hole Attack Work?

- First, the attackers profile their targets by industry, job title, etc. This helps them determine the types of websites and applications often visited and used by the employees or members of the targeted entity.
- The attacker then creates a new website, or looks for vulnerabilities in existing websites and applications, to inject malicious code that redirects the targets to a separate site where the malware is hosted.
- The exploit drops the malware onto the target's system.
- The attacker now uses the dropped malware to initiate its malicious activities.
- Also, knowing that most people still sadly reuse passwords, the attacker often collects usernames and passwords to attempt credential-stuffing attacks against targeted applications, enterprises and sites.
- Once the victims' machines or the targeted applications, enterprises and sites are compromised, the attackers perform lateral movements within the victim's network and ultimately exfiltrate data.

What Can I Do To Prevent These Attacks?

- Continuously test your current security solutions and controls to verify that they provide adequate defense against application- and browser-based attacks. Ensure your security controls prevent criminal redirection, malware and rootkits from being successfully deployed. Ensure that browser controls and endpoint software are adequately tuned and that web content and security proxy gateways are well configured. Organizations should also seek additional layers of advanced threat protection, such as behavioral analysis, which have a far greater likelihood of detecting zero-day threats.
- Update systems with the latest software and OS patches offered by vendors.
- Treat all third-party traffic as untrusted until otherwise verified. It should not matter whether content comes from a partner site or a popular Internet property such as a Google domain.
- Educate your end users on what watering hole attacks are by creating and distributing easy-to-understand corporate materials.

This attack is sure to continue as attackers leverage legitimate resources as a catalyst for attacks, including influencing search engine results, posting on popular social networks, and hosting malware on trusted file-sharing sites. Download the Cymulate free trial to see your organization's outbound exposure to malicious or compromised websites.
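The "treat all third-party traffic as untrusted" advice above can be illustrated with a toy integrity check: pin a known-good hash for each third-party resource and refuse anything that doesn't match, even from a normally trusted partner domain. The resource name and content here are invented for illustration; this is a sketch of the idea, not a Cymulate feature:

```python
import hashlib

# Hypothetical allow-list: resource name -> expected SHA-256 digest,
# recorded when the resource was last reviewed and trusted.
KNOWN_GOOD = {
    "vendor-widget.js": hashlib.sha256(b"console.log('ok');").hexdigest(),
}

def verify_third_party(name: str, content: bytes) -> bool:
    """Accept third-party content only if its digest matches the pinned value.
    A watering-hole-compromised copy of the same file will fail this check."""
    expected = KNOWN_GOOD.get(name)
    if expected is None:
        return False  # unknown resources are untrusted by default
    return hashlib.sha256(content).hexdigest() == expected

assert verify_third_party("vendor-widget.js", b"console.log('ok');")
assert not verify_third_party("vendor-widget.js", b"evil();")  # tampered copy rejected
```

This is the same principle behind browser subresource integrity: the trust decision is anchored to the content itself rather than to the serving domain.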
‘Streams in the Beginning, Graphs in the End’ is a three-part series by Dataconomy contributor and Senior Director of Product Management at Cray, Inc., Venkat Krishnamurthy, focusing on how big changes are afoot in data management, driven by a very different set of use cases around sensor data processing.

In part I, we talked about a sensor-led Big Data revolution, variously referred to as the Internet of Things (and even the Internet of Everything). Here, we'll examine why this revolution places a new set of demands on systems infrastructure for analytics and data management.

The consequences of Moore's law mean that we now have a way to create data streams out of anything – plants, animals, natural phenomena big and small (including tornadoes, of course), and machines themselves, in a satisfyingly recursive turn of events. We've entered the age of the really small machine that will be a Really Big Deal – the sensor.

For a start, what are we defining as a 'sensor'? One definition is that it is a device that turns a physical phenomenon into a discrete sequence of data events. The sensor itself may be physical – for example, a heart rate sensor in a smartwatch, a pitot tube on the wings of an airplane, or an ionizing microscope that turns tissue samples into spectra.

Fig 2 – Sensors are everywhere. 2(a) The Boeing 787 has several sensor systems generating 0.5TB of data per flight. 2(b) A smartwatch is equipped with multiple biophysical sensors. 2(c) The compact muon solenoid in the Large Hadron Collider is so large a sensor collection that it has its own datacenter located above it.

Or a sensor could well be 'logical' – for example, a web page can be seen as a sensor that turns a sequence of user actions into an event stream. The reason for this broader definition is that it gives us a single unifying conceptual handle on thinking about where data 'begins'.
Turning to our data management story, the quantum of data produced by a sensor is typically a well-defined, tiny (in terms of data size) observation: a single tick of a stock in a financial market, a single ambient pressure reading from a sensor in an aircraft engine, or a single click on a web page. At its most basic level, this observation has a timestamp associated with it, giving us the 'atomic theory' equivalent of data – the Event, a combination of a data payload from an observation and a timestamp. As reality unfolds (with sensors attached!), you get an Event Stream (also known as a Log) – a naturally ordered sequence of event data. This is a core building block of stream processing systems and their variants.

To date, a significant portion of data management research (E.F. Codd's rules, ACID and the CAP theorem being prime examples) has been devoted to the idea that the dominant concern is managing State, defined as the cumulative effect of a set of events. These principles remain critical, but the world of event streams needs a new set of corresponding ideas. Martin Kleppmann provides a must-read explanation of these core ideas in his blog series. Jay Kreps (the co-creator of Kafka and founder of Confluent), who coined the term 'Kappa architecture', also has a great series on the Confluent blog, as well as a terrific post on log-centric data management. The fundamental question both of them ask, and set out to answer, is this: if observed reality in data becomes an ever-growing set of event streams, shouldn't data processing begin with stream processing? For us, the answer is clearly yes. The implications at a systems architecture level are significant – here are a few of them, drawing from Martin and Jay's ideas.

Event Streams are primary data containers

The immutable collection of events that forms a stream is, in a sense, the ground truth for analytical data processing.
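The Event and Event Stream just defined can be sketched in a few lines. The field names are illustrative, not taken from any particular stream-processing library:

```python
from dataclasses import dataclass
from typing import Any

@dataclass(frozen=True)  # frozen: events are immutable facts, never mutated
class Event:
    """The 'atom' of data: a payload from an observation plus its timestamp."""
    timestamp: float  # e.g. seconds since the epoch
    payload: Any      # a stock tick, a pressure reading, a click record...

# An event stream (log) is simply a time-ordered sequence of events.
stream = [
    Event(1.0, {"sensor": "pressure", "value": 101.3}),
    Event(2.0, {"sensor": "pressure", "value": 101.1}),
    Event(3.0, {"sensor": "pressure", "value": 101.4}),
]

# The defining property of a log: natural temporal ordering.
assert all(a.timestamp <= b.timestamp for a, b in zip(stream, stream[1:]))
```

Everything that follows in this article builds on this structure: analytics become operations over such ordered sequences rather than over mutable state.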
Every processing need can now be met by applying a set of analytic operations (filters, transforms, joins, etc.) to the streams directly, usually after selecting an interval of time such as a day, and possibly other axes of interest (e.g. location). Co-locating at least part of the data processing with data collection means that you can choose to replay the processing in its entirety for any chosen interval of time, since you're always starting with the raw material (events). Compare this to traditional analytic approaches, where the typical starting point of analysis is a post-facto container of state (a table, DataFrame, etc.) – in effect, you're starting a step further from the original data. For example, a table in an RDBMS is usually mutated via SQL Data Manipulation Language (INSERT, UPDATE, DELETE): in allowing this, you implicitly require that the state data in the table (e.g. your end-of-day balance) accurately represents the collective effect of several underlying events (e.g. your set of transactions through the day).

The emergence of tools like Kafka is a realization of the idea of event-centric data management. By providing scalable data management for streams, they make it possible to repeatedly go back to the original, structured event sequence rather than start later with state containers like tables, or pay the cost of file-based serialization/deserialization on this data (like log processing from files in HDFS). The impact on systems architecture is significant: the design of the data processing infrastructure will be driven primarily by the needs of stream processing, in which latency is critical, rather than by batch processing, which was driven by throughput considerations.
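The end-of-day-balance example above can be made concrete. A minimal sketch of the contrast between mutating state in place and replaying an immutable event log (the transaction format is invented for illustration):

```python
# State-first: a mutable balance. The history that produced it is gone.
balance = 100
balance += 50   # a deposit happened...
balance -= 30   # ...then a withdrawal, but only the net effect survives.

# Event-first: keep the immutable transaction log and derive state on demand.
transactions = [("deposit", 50), ("withdraw", 30)]

def end_of_day_balance(opening, events):
    """Replay the event log to reconstruct state for any chosen interval.
    Because the events are kept, this can be re-run at any time, over any slice."""
    total = opening
    for kind, amount in events:
        total += amount if kind == "deposit" else -amount
    return total

# Same resulting state, but the event log retains the full history.
assert end_of_day_balance(100, transactions) == balance
```

The mutable variable and the replayed log agree on the final state; the difference is that only the log lets you re-derive that state, audit it, or compute something entirely new from the same raw material later.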
Similarly, a significant subset of analytical processing can be done entirely on the stream – everything from simple windowing aggregations to emerging streaming versions of machine learning and advanced analytic algorithms.

Event Streams allow for Time Series analysis

Since an Event Stream is a time series, the universe of techniques for finding and analyzing temporal patterns in data becomes a powerful toolset. This isn't new by any means – in finance, Robert Engle won the Nobel prize in economics for his study of the time series properties of asset returns. More recently, PayPal has pointed out how fraud detection can be viewed as a signal-processing problem, and has gone so far as to build custom hardware architectures to handle it at scale.

One caveat is that a stream of events emitted by a single sensor probably has relatively little value when considered in isolation. While one can characterize the underlying probability density for the time series variable and determine whether the underlying process generating it is changing in significant ways (i.e. non-stationary behavior), this gives you only a limited view of the system under observation. For instance, a glucose monitor might be able to detect blood sugar spikes in diabetics, which is useful, but you need other time series variables such as blood pressure, ketone levels, lipid panels, etc. to really say something about the disease progression of the individual. The correlation structure that exists between sensors – a quantitative measure of how strongly two or more sensors are associated – is important for understanding the baseline behavior of a complex system in time and space. More on this in the last article, where we talk about graphs and their value as an organizing principle for data in the age of sensors.

Time series can also be correlated to risks of outcomes of interest.
For instance, several time series variables might be "risk factors" for a particular type of event of interest. A great example is failure prediction for large supercomputers. Predicting the risk of failure for a particular compute node involves monitoring multiple streams of data from different subsystems (memory, network, disk) and watching for abnormal signals that must be defined upfront using past failure history. The primary analytic focus here is on the temporal patterns in sensor readings – in other words, time-series analysis – and the raw material for this analysis is the actual sensor reading events. This is in contrast to a 'state first' approach, which can hide important outlier signals lurking in temporal patterns by "averaging over" these details. Aggregations might provide a convenient way to summarize complex data, but rare events that herald important outcomes of interest may be lost in the process.

The focus on time series analysis as an important analytic need, early in the data ingest process, results in a fundamental shift in thinking from 'understand what occurred' to 'understand and react to what is occurring NOW'. This paradigm shift is already underway – the autopilot in commercial aircraft is a great illustration, as is the imminent arrival of self-driving cars. You could not build such capabilities if the analytical loop were anything other than real-time.

Data Structures, not Files

As the stream becomes the primary focus of data processing, the unit of data in the stream is a higher-level data structure representing the event, not a bunch of bytes in a file. In other words, applications that process the stream, typically in a pipeline, speak 'data structures', not un-interpreted byte streams (a.k.a. 'files'). This forces a rethink of the storage-first mentality ('Data Lake'/'Data Warehouse'/'Data Hub') that pervades frameworks like Hadoop today.
A central component of Hadoop is HDFS, and a typical data processing life cycle in Hadoop begins with files in HDFS. While this is acceptable for use cases like indexing unstructured web pages (the motivating reason for Hadoop and its MapReduce precursor), it's questionable as the primary approach when the data (e.g. web server logs or sensor readings from a turbine) are real-time event streams that have been allowed to 'pile up' into files, centralized and aggregated at locations far away in time and space from the generating event.

Why is this relevant to data systems architecture? Primarily because it moves the focus of storage higher up the memory hierarchy. In a stream-first architecture, applications that process the stream want to work with their data structures as long as possible and keep them as close as possible to the compute – in effect, the file as the unit of data exchange is questionable for this approach. As a result, there is a far greater need and role for memory as a first-order storage tier. A great realization of this idea can be found in Apache Spark, which defines an API for a distributed in-memory data structure – the source and intermediate/final storage destination for the resident dataset are up to the application, and can be NoSQL or SQL databases, or filesystems.

Data Processing Pipelines, not monolithic applications

Underlying most practical applications of analytics is the idea that you need a series of discrete steps to achieve a specific analytical result. A recommendation system, a risk analytics platform and next-generation sequencing are all examples of this 'assembly-line' approach. This brings up the idea of an analytic pipeline, where multiple tools, possibly from disparate toolsets, are integrated via data exchange steps to create a set of analytical outputs.
This pipeline approach allows great flexibility in assembling data products, and is already becoming prominent, with tools like Google's Cloud Dataflow, Amazon's Kinesis and Databricks' Cloud offering a systematic way to assemble and deploy such pipelines.

Pipelines, especially for sensor data, force an important consideration in the data processing lifecycle. A key reason for putting sensors on real physical things like aircraft engines and humans is that a whole class of analytics is now possible on the stream itself – from simple windowing aggregations to streaming analytic algorithms and beyond. For all these cases, the need is not so much to process one very large batch of sensor readings as to process a large number of smaller time windows. From a systems perspective, pipelines necessarily involve multiple components acting on a data stream in order, so the resulting need for data exchange becomes a big gating factor in the performance and overall productivity of the pipeline. This development parallels the emergence of deeper memory hierarchies on computing platforms, spanning both on- and off-node storage tiers and types (DRAM, NVRAM, spinning disk).

This results in a fundamentally different focus for storage: the need for a 'working store' for data processing that is much closer to the applications. The working store offers the opportunity for a high-performance storage tier that exposes application-level APIs but can interface with multiple storage systems and APIs underneath.

In the next and final part of the series, we'll look at another data management idea of increasing relevance to the Internet of Everything – Graphs. Stay tuned!
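The 'assembly-line' idea described above can be sketched as a chain of generator stages, each consuming the previous stage's output so that events flow through one at a time rather than as one large batch. The parsing and filtering steps here are invented for illustration:

```python
def parse(raw: str):
    """Stage 1: turn a raw record ('timestamp,value') into a structured event."""
    ts, val = raw.split(",")
    return (float(ts), float(val))

def in_range(event, lo=0.0, hi=200.0):
    """Stage 2 predicate: drop physically implausible sensor readings."""
    return lo <= event[1] <= hi

def pipeline(raw_lines):
    """Compose the stages lazily: parse -> filter -> aggregate.
    Each generator pulls from the previous one, so no full batch is materialized."""
    events = (parse(line) for line in raw_lines)
    valid = (e for e in events if in_range(e))
    return max(v for _, v in valid)

# 999.0 is filtered out as out-of-range; the maximum of the rest is 101.9.
assert pipeline(["1.0,101.3", "2.0,999.0", "3.0,101.9"]) == 101.9
```

In a production system each stage would be a separate component exchanging data over something like Kafka or Kinesis; the gating factor the article mentions is exactly the cost of those exchange steps between stages.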
A few months ago, AT&T's It Can Wait team asked our data scientists to look at something different from our normal work on network and business-process improvements: did we have any new ideas on how to study texting while driving? The It Can Wait campaign shares a simple message: keep your eyes on the road, not on your phone.

And we did have an idea for them. We looked at 3 months of anonymized data on our network. Through some algorithms and analysis, we were able to estimate the rates of texting while driving across the United States. The real world creates a lot of variables, so we're not announcing rates for individual cities; there's no need to create a fuss when we might call the numbers directionally accurate rather than pinpoint. But, after some checking and re-checking, our data science team is confident of this general statement: states that have statewide anti-texting laws have lower rates of texting while driving – at a statistically significant level. We believe that the 4 states without a full statewide ban have a roughly 17 percent higher rate of texting while driving than the 46 states with statewide bans.

Here is a closer technical look at what we did:

- Privacy first. We didn't create any new data, nor did we share any data outside of AT&T. Our data scientists looked at an anonymized set of routine network information and aggregated it to the size of a metropolitan area.
- Specifically, we looked at outgoing text messages, and we used a cell-tower algorithm to figure out which ones were sent from moving vehicles.
- We studied commutes within US Census metropolitan areas. We identified "commutes" as trips taken between an anonymous phone's 2 most frequented tower locations. (The actual locations were not relevant beyond verifying that the towers were in a particular metro area.)

Here's why we did it this way: we could only know that a mobile device was on the move, not whether it was being used by a driver or a passenger.
So our solution was to "weight" the rate in each metro area. We stuck to commutes in metro areas because the US Census collects passenger data specifically about metro-area commutes. In most areas, roughly 80 percent of commutes are solo – a car with no passenger. But in the greater New York area, for instance, the figure is 50 percent. We adjusted every metro area for this bias to get a truer picture of texting while driving, not just texting while moving.

We think this study is significant because it shows what people as a group are actually doing, not what they say they are doing. And remember, this is just texting – not other smartphone driving distractions. But it's a reasonable indicator. Should the 4 states join the rest by passing comprehensive texting bans? We're happy to add to the research. We'll share our insights with our fellow members of Together for Safer Roads, a coalition that addresses global road safety. As a single study, we hope this work will help raise awareness. It could be replicated in the future as legislation changes, or adjusted with new ideas.

The study of anonymous and aggregated data from mobile phones has great potential for good. In California, for instance, AT&T is helping on a project to determine how mobile data may save taxpayers literally billions of dollars. With aggregated traffic data from phones, the state may not need to replace worn-out, in-pavement traffic sensors to conduct congestion studies.

It's something to think about. But not to text about while driving.

Mark Austin is vice president for data insights, Big Data at AT&T.
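The weighting step can be illustrated with toy arithmetic. AT&T's exact adjustment isn't published, so the function below shows one plausible form of it: scale the observed moving-texter rate by the metro area's solo-commute share, since only a solo commuter's moving text is necessarily a driver's text. The 10% observed rate is an invented input; only the 80%/50% solo shares come from the article:

```python
def driver_texting_estimate(observed_moving_rate: float, solo_share: float) -> float:
    """Weight the raw 'texting while moving' rate by the share of solo commutes.
    This is one plausible reading of the adjustment, not AT&T's published method."""
    return observed_moving_rate * solo_share

# Approximate solo-commute shares cited in the article.
typical_metro = driver_texting_estimate(0.10, 0.80)  # ~80% solo in most areas
greater_ny   = driver_texting_estimate(0.10, 0.50)   # ~50% solo in greater New York

# The same raw moving-texter rate implies fewer drivers texting in New York,
# because more of the moving texts could have come from passengers.
assert typical_metro > greater_ny
```

Without this correction, passenger-heavy metro areas like New York would look worse than they are: the raw rate measures texting while moving, not texting while driving.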
Many organizations are not actively examining the encrypted traffic in their network. According to a Venafi survey, roughly a quarter (23%) of security professionals don’t know how much of their encrypted traffic is decrypted and inspected. “As organizations encrypt more traffic and machine identity usage skyrockets, so do the number of opportunities for cyber criminals,” said Nick Hunter, senior technical manager for Venafi. “Any type of encrypted tunnel can be exploited in a cyber attack, and most organizations manage hundreds of thousands of keys and certificates each day. This use will only grow, and the dramatic increase of keys and certificates will only make the job of securing encrypted tunnels more difficult. Ultimately, organizations must secure their encrypted tunnels or risk being at the mercy of cyber attackers.” Venafi security experts point out that without proper insight into encrypted tunnels, cyber attackers can use them against businesses in the following five ways: Undetected movement across networks Most large organizations use virtual networks to connect with multiple offices and business partners. However, the encrypted tunnels in virtual networks are rarely inspected, allowing attackers to go undetected. Cyber criminals can use these tunnels to move from site-to-site. Eavesdropping on confidential traffic to steal data The most common types of tunnels are found in layered security, such as Secure Sockets Layer (SSL) and Transport Layer Security (TLS). These tunnels provide a secure session between a browser and an application server. However, attackers may create man-in-the-middle attacks to eavesdrop on encrypted traffic and steal data from their victims. Access to endpoints To secure internet communication, organizations create virtual networks using Internet Protocol Security (IPsec). This often creates a tunnel from a remote site into a central site, creating an ideal entry point for cyber criminals. 
This type of attack typically compromises only established network endpoints, but it can be the start of a more sophisticated attack. Setting up phishing websites Attackers often use stolen or compromised certificates to establish a phishing website that a victim’s browser will trust. Users may then unwittingly share sensitive data with cyber attackers. Since HTTPS sessions are trusted – and rarely inspected – these attacks typically go unnoticed. Privileged access to payloads The tunnels created by Secure Shell (SSH) encryption are lucrative targets for attackers. SSH keys grant administrators privileged access to applications and systems, bypassing the need for manually typed authentication credentials. Unfortunately, this also means the compromised SSH tunnels can create an ideal environment for moving malicious payloads between file servers and applications. “On a positive note, there are ways organizations can confront this threat,” concluded Hunter. “Businesses must establish a baseline of machine identities that are trusted, regularly scan for untrusted identities and take a proactive approach to securing all machine identities. To do this, organizations need to centralize and review gathered intelligence and use automation to frequently rotate keys and certificates as often as they require a username and password to be changed. This can ensure all security tools organizations rely on maintain a continuously updated list of the relevant keys and certificates they need to inspect in their encrypted traffic. By protecting these machine identities and by integrating this data into security tools, security professionals can finally begin to shine a light into encrypted tunnels.”
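Hunter's rotation advice can be sketched as a simple inventory check. The data model, hostnames, and 90-day policy below are illustrative assumptions for the sketch, not Venafi product behavior.

```python
from datetime import datetime, timedelta

# Hypothetical inventory of machine identities -> last key/cert rotation date.
# A real deployment would pull this from a key and certificate management
# system; the 90-day policy is an assumed example, not a vendor default.
MAX_KEY_AGE = timedelta(days=90)

def keys_due_for_rotation(inventory, now):
    """Return identities whose key or certificate exceeds the rotation
    policy, analogous to flagging passwords that are overdue for a change."""
    return [name for name, last_rotated in inventory.items()
            if now - last_rotated > MAX_KEY_AGE]

inventory = {
    "web01.example.com": datetime(2017, 1, 1),    # long overdue
    "vpn-gw.example.com": datetime(2017, 9, 15),  # rotated recently
}
overdue = keys_due_for_rotation(inventory, now=datetime(2017, 10, 2))
```

Feeding the resulting list into automated rotation tooling is the kind of "rotate keys as often as passwords" workflow the article recommends.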
Cloud services promise to provide easy access to your data from anywhere and from any device including laptops, tablets and smartphones. Any website that collects your personal information and requires a username/password combination could be considered a cloud service. But these days, not a month goes by without reports of hacks against these services with thousands, or even millions of accounts compromised. Cloud services are here to stay but many still need improved security. The risk of losing control of your personal information is real, but there are things you can do to minimize damage to your personal information should your provider get hacked. Basic password hygiene: USE ONLY COMPLEX PASSWORDS One way hackers gain access to password-protected systems is by using a “cracking” program. Many cracking programs attempt to guess passwords by trying lists of dictionary words against the target username. This is why it is important to never use simple words (as found in the dictionary) as your password. When someone says “complex” passwords they’re referring to seemingly random sequences of letters, numbers and punctuation marks. The longer (at least 8 characters!) and more random the password gets the harder it is to crack it with this “brute-force” approach. An easy way to create a difficult-to-guess (yet easy-to-remember) password is to use a passphrase – such as four random words strung together. Or, you can use a mnemonic: taking the first letter of each word in a favorite song title and combining them together into one unintelligible mess of letters that only you know. USE A DIFFERENT PASSWORD FOR EVERY SERVICE This is a hard thing to do for most people but it’s important in the event one of your online accounts gets hacked so that the hackers don’t immediately try to compromise other accounts of yours on other services. You can use tools (such as a simple spreadsheet) to track these passwords. 
Or, create your own “algorithm” that only you know: modify a base password based on the name of the website you are using, via a repeatable formula (as long as the resulting password is still complex!) A number of tools have been developed over the past few years which really help manage this process: LastPass is highly recommended because it installs plugins into your web browsers and can automatically detect login boxes and fill them in. LastPass tracks and generates passwords automatically on a large range of websites. It also allows you to generate and store very complex passwords without any manual intervention and works with a wide range of devices. CHANGE YOUR PASSWORDS ON A REGULAR BASIS Keep the bad guys guessing – change passwords on your web accounts on a regular basis. Six months is a good recommendation across the board, but you may want to change your password more frequently on sensitive sites such as banking services – quarterly or even monthly is much more secure. Track your password change dates or use a tool like LastPass (above) to keep track of password age for you. A good way to minimize damage from website hack attempts is to use basic password hygiene. Complex passwords that are different for every service and changed on a regular basis will help keep the bad guys out and minimize damage if they do get in. Tools such as LastPass and KeePass can help you keep all of your passwords under control.
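The passphrase and complex-password advice above can be sketched with Python's `secrets` module, which is designed for security-sensitive randomness. The tiny wordlist here is a stand-in; a real passphrase generator should draw from a dictionary of thousands of words.

```python
import secrets
import string

# Stand-in wordlist for illustration; use a large dictionary in practice.
WORDS = ["correct", "horse", "battery", "staple",
         "orbit", "walnut", "canyon", "velvet"]

def random_passphrase(n_words=4, sep="-"):
    """Four random words strung together, as suggested above."""
    return sep.join(secrets.choice(WORDS) for _ in range(n_words))

def random_complex_password(length=12):
    """A seemingly random sequence of letters, numbers and punctuation,
    comfortably past the 8-character minimum recommended above."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))
```

Note the use of `secrets.choice` rather than the `random` module: the latter is predictable and unsuitable for generating credentials.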
A Polymorphic Virus is a type of ‘shape-shifting’ virus, producing malicious code that is able to replicate itself with new signatures but identical payloads over and over again. These viruses repeatedly change their overt characteristics in an attempt to evade and outwit your computer’s defenses and sabotage your system. Polymorphic capabilities are designed to evade signature-based cybersecurity solutions like antivirus and Anti-Malware. This threat continues to grow. Antivirus researchers in 2020 determined that 97 percent of newly identified viruses had polymorphic properties. In 2015, it took the combined efforts of the FBI and Europol to bring down a botnet running advanced polymorphic malware called Beebone. This polymorphic botnet contained at least 12,000 compromised computers and was able to change itself up to 19 times a day to avoid detection. What does this mean for an SMB? One of the simplest and best ways to protect your systems from dynamic, changing code is to ensure you have the right type of security solution software in place. Having a high-quality heuristic- and signature-based antivirus solution will give far more comprehensive protection than a solution that is only signature-based or only heuristic-based. Heuristic-based solutions examine the actions and activities taken by code running on your system and prevent certain things from happening: for example, mass-encrypting user files is something that should never happen, and many heuristic programs block that behavior, helping you avoid a ransomware attack. Employee Awareness Training The initial exploit of a system often comes from human error, performing an action like downloading and running an infected email attachment, or visiting a website that has been compromised. Your own good judgment is often your first and best line of defense. - Deploy cybersecurity awareness training for your employees, and - Phish test employees quarterly Keep Software Up to Date Cybercriminals are constantly updating and morphing their virus code.
All of the good guys should do the same. Updates are released in the form of free software patches for your desktop and laptop computers, but also for your IoT devices. Make sure you install all system and software updates to everything. Guide Staff With Cybersecurity Policies Cybersecurity policies are a great way to keep staff informed and accountable to company expectations on behaviors and technology usage. CyberHoot recommends adopting the following four foundational governance policies if you haven’t defined any just yet: - Password Policy - Acceptable Use Policy - Information Handling Policy - Written Information Security Policy (WISP) Perform a Risk Assessment Spend your finite time and money on the most critical risks you face, identified in a Risk Assessment by a competent professional. CyberHoot comes with built-in cybersecurity assessments to help our clients do just this. Purchase Cybersecurity Insurance for Catastrophic Failures When all your preparations and protections fail you, having cybersecurity insurance to help you recover quickly and effectively can mean the difference between a complete failure of your company and just a bad year. Protect yourself with cybersecurity insurance no differently than you would with fire, flood, errors & omissions, or car insurance. Here are two articles on what cyber insurance can cover and some of the challenges it has. By building a robust, defense-in-depth cybersecurity program as outlined above, you create a level playing field where the hackers do not have the upper hand.
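As a toy illustration of why the heuristic-plus-signature advice above matters: a hash signature matches only one exact byte sequence, so a polymorphic variant evades it, while a behavior-based check can still catch the payload's actions. Every name, byte string, and "action" below is made up for the sketch.

```python
import hashlib

# Toy demonstration, not a real scanner: a hash "signature" matches only one
# exact byte sequence, so a polymorphic variant (same payload, new wrapper)
# slips past it, while a behavior-based heuristic can still flag the payload.
KNOWN_BAD_HASHES = {
    hashlib.sha256(b"wrapper-v1" + b"ENCRYPT_ALL_FILES").hexdigest()
}

def signature_scan(sample: bytes) -> bool:
    return hashlib.sha256(sample).hexdigest() in KNOWN_BAD_HASHES

def heuristic_scan(observed_actions) -> bool:
    # Flag behaviors that legitimate software should essentially never
    # perform, such as mass-encrypting user files or disabling backups.
    forbidden = {"mass_encrypt_files", "disable_backups"}
    return any(action in forbidden for action in observed_actions)

variant = b"wrapper-v2" + b"ENCRYPT_ALL_FILES"  # mutated wrapper, same payload
```

Here `signature_scan(variant)` comes back clean even though the payload is identical, while `heuristic_scan` flags the mass-encryption behavior regardless of how the wrapper mutates.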
There are times when you would like a folder to be accessible by you alone. Financial information, personal documents, or work-related files on your personal system sometimes need to be hidden from prying eyes. One of the ways to do this is to password protect the folder. For the Windows section of this article we will answer a few frequently asked questions. Can you put a password on a folder? Well, Windows does not provide you with an option to simply password protect a folder, but it does provide you with some options that you can utilize to put a password on a folder. In Windows you can encrypt a folder by following these instructions: - Right-click it - Select Properties from the menu. - On the form that appears, click the General tab. - On that tab, click the Advanced button - Select Encrypt content to secure data. - Click OK. An important downside to this method is that your Windows username and password will be used to encrypt and password protect the folder, so people logging in on the same account as you can still see the content. It is also important to note that when the process completes, you'll be prompted to back up your encryption key if you've never used the feature before. Click the recommended option on the notification and follow the prompts to make a note of your encryption key. You'll need this information if you ever lose access to your encrypted files, so it's important you take the time to back it up. How do I password protect a folder in Windows 10? For Windows versions later than Windows 7 there is also an option to send files to a compressed folder (a zip file), which you can password protect. This Send to option is usually faster than encrypting the content. But you will have to keep in mind that the option creates a duplicate, so you will need to delete the original once you’re satisfied the compressed version is complete and accessible. How do I hide a folder?
Hiding folders is not an ideal solution, but we want to point out that it is available in Windows. It works like this: - Right-click on the file or folder that you want to hide. - Select Properties. - Click the General tab - Under the Attributes section, check Hidden. - Click Apply. Why is it not ideal? Anyone that has access to the system can check the option to Show hidden files, folders, and drives in the folder options. Many advanced Windows users already have this option enabled, and you may forget to change the setting after you have accessed your hidden folder. You can password protect folder contents using macOS and Disk Utility, a built-in utility on your Mac. This method will also encrypt the content. - Open Disk Utility on your Mac - With Disk Utility open, select File from the menu bar - Then choose New Image -> Image from Folder. - Select the folder you want to protect with a password - Choose your encryption level: 128-bit or 256-bit AES encryption - Enter and verify the password for your folder (after you type the password into both the Password and Verify text boxes, make sure to uncheck Remember password in my keychain, otherwise anyone logged into your account will still be able to access the data) - Give the folder a name if desired - Under Image Format, select read/write from the menu - Select Save This creates a disk image holding the contents of the folder in encrypted storage. So, you'll need to delete the original folder after verifying the disk image is complete and accessible. Another important thing to remember is that this method only creates a fairly small (and fixed) amount of free space on the disk image, so if you want to make changes you'll be dealing with a limited capacity. If you want a disk image with unlimited capacity, you'd be better off creating a blank image and choosing sparse bundle disk image as the image format. If you create a 200 MB sparse bundle disk image, you can copy a 1 GB file onto it and it'll resize to fit.
However, it will not decrease in size if you were to delete that 1 GB file. Third party software It is not our place to make recommendations about software you can use to achieve the goal of password protecting folders, but there are several third party software packages for both Windows and Macs that are very good at compressing files and folders and providing the resulting compressed files with a password. If they are any good you will not need to decompress the entire folder before you can look at an individual file. Just be careful not to download any potentially unwanted programs (PUPs) or one that is bundled with PUPs or adware.
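To show the idea underlying all of the methods above (a password is stretched into a key, and the key encrypts the bytes), here is a standard-library-only Python sketch. It is for illustration only: it omits integrity protection, and it is not a substitute for the OS features or vetted third-party tools discussed in this article.

```python
import hashlib
import hmac
import os

# Educational sketch of the idea only (password -> derived key -> encrypted
# bytes). It omits integrity protection and is NOT a substitute for the OS
# features above or a vetted encryption tool.

def _keystream(key: bytes, length: int) -> bytes:
    # HMAC-SHA256 in counter mode produces a pseudorandom keystream,
    # the same basic construction CTR-mode ciphers use.
    blocks, counter = [], 0
    while sum(len(b) for b in blocks) < length:
        blocks.append(hmac.new(key, counter.to_bytes(8, "big"),
                               hashlib.sha256).digest())
        counter += 1
    return b"".join(blocks)[:length]

def encrypt(data: bytes, password: str, salt: bytes) -> bytes:
    # PBKDF2 stretches the password so brute-force guessing stays expensive.
    key = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return bytes(a ^ b for a, b in zip(data, _keystream(key, len(data))))

def decrypt(blob: bytes, password: str, salt: bytes) -> bytes:
    return encrypt(blob, password, salt)  # XOR is its own inverse

salt = os.urandom(16)  # store the salt alongside the encrypted file
blob = encrypt(b"contents of a private document", "long-random-password", salt)
assert decrypt(blob, "long-random-password", salt) == b"contents of a private document"
```

Decrypting with the wrong password yields garbage rather than an error, which is one reason real tools add an authentication tag on top of encryption.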
Google OCR is a user-friendly API that is part of the Google Cloud Vision API. It can be used to extract text from images as part of a software app that you yourself create. When used in conjunction with other APIs and functions, Google OCR can help you create innovative applications, without needing to know how to code any AI yourself. Below, we will look at what Google OCR is, what its benefits are, and how to use it. What Is Google OCR? Google OCR is an API that is part of the Google Cloud Vision API. It extracts text from GIF, JPEG, PNG, and TIFF images. Google’s OCR functionality is used in a variety of its products, from Gmail to Google Drive, but it can also be used as an API to generate text from images in your own NLP-powered automation tools. When using Google OCR as part of the Google Cloud platform, there are a few important points to note: - Google OCR is not free, but it is also not expensive unless you are using it at scale - OCR is only one of many features of the Google Vision API, which includes other features such as facial recognition, landmark detection, tagging of explicit content, and image labeling - OCR can be applied to a wide variety of languages beyond English Google OCR, in short, can be used by programmers or businesses who want to create an app that uses optical character recognition. Since it is affordable, powerful, and widely accessible, it is an excellent choice for those on a budget or those who want large-scale applications. How to Use Google OCR Google has many guides on how to use Google OCR. As with any other API, Google Cloud Vision API can be accessed by including the proper libraries in your code and then calling functions from those libraries when they are needed.
Here is a brief outline of what to expect: - Requests are sent to the API - Parameters, such as the target language, can be specified when sending the request - The API returns JSON that contains the extracted text - The exported text can then be stored or used in other features of your app For more details on Google OCR workflows, check out the guides in the link above. Google OCR, as mentioned, is particularly useful when it is used in conjunction with other Google Cloud Vision API features. Here are just a few examples of how to use Google OCR, as well as other Google Cloud Vision functions: - Extracting data from a receipt and inputting it into a spreadsheet - Extracting text from images and then translating that text into another language - Using Google OCR for workplace digitization, by, for instance, digitizing business records and paperwork - Extracting text with Google OCR and then using other NLP functions, such as sentiment analysis, for brand monitoring - Using text and image recognition for a note-taking app In short, Google OCR is a way to automate the extraction and reading of text. When combined with other NLP functions, however, it becomes quite powerful indeed. Beyond Google OCR with Natural Language API OCR is excellent for extracting text, as we have seen, but to take things to the next level you may want to consider actual AI. Unsurprisingly, Google also offers NLP as a solution. Google’s Natural Language API, like Google OCR, requires little coding. Its features include NLP techniques such as: - Classifying, extracting, and detecting sentiment - Content classification - Syntax analysis - Entity analysis Those unfamiliar with NLP techniques may want to read our article on the topic. It will provide a breakdown of not only NLP techniques, but also how they can be used to generate business value, new products, and new services.
Here are just a few examples of how NLP can extend the functionality of OCR and similar functions: - OCR and NLP can be used to summarize the content of long texts, such as legal documents - When used for brand monitoring, as mentioned above, sentiment analysis can be used to gauge customers’ reactions to brand activities, competitor activities, trends, and more - NLP can be used for text user interfaces, such as those found in chatbots, as well as voice user interfaces Like Google’s OCR, Google’s Natural Language API provides access to machine learning, without needing to program the AI yourself. Access to such robust technologies opens the door to innovation for a wide variety of businesses, from individual programmers to small businesses to large corporations. Using Natural Language API is simple. You simply need to: - Create a project - Enable billing - Enable the API and authentication - Begin testing Google’s Natural Language API documentation provides everything you need to know to get started. Google OCR and Google Natural Language API both offer easy access to a robust set of AI-powered language tools. Although some programming is necessary, one does not have to be an AI expert by any means. Any coder can learn to use these APIs and begin implementing them in a short period of time. For more information on OCR and NLP, see our articles on NLP and OCR software.
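The request/response workflow outlined earlier can be sketched without sending anything over the network. The field names below follow the public `images:annotate` request format; verify endpoint and details against Google's Vision API documentation before relying on them.

```python
import base64
import json

def build_ocr_request(image_bytes: bytes, language_hint: str = "en") -> dict:
    """Build a JSON body for a Vision API images:annotate OCR call: the
    request carries the base64-encoded image plus parameters such as the
    target language, and the API responds with JSON holding the text."""
    return {
        "requests": [{
            "image": {"content": base64.b64encode(image_bytes).decode("ascii")},
            "features": [{"type": "TEXT_DETECTION"}],
            "imageContext": {"languageHints": [language_hint]},
        }]
    }

# POSTing this body to https://vision.googleapis.com/v1/images:annotate with
# valid credentials returns JSON whose textAnnotations contain the text.
body = build_ocr_request(b"...image bytes...", "fr")
payload = json.dumps(body)
```

Swapping `TEXT_DETECTION` for other feature types (label detection, face detection, and so on) is how the same request shape reaches the other Vision API capabilities mentioned above.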
Big data and artificial intelligence: a quick comparison This article was originally published at Algorithmia’s website. The company was acquired by DataRobot in 2021. This article may not be entirely up-to-date and may refer to products and offerings no longer in existence. Find out more about DataRobot MLOps here. The big data industry has grown at an incredible rate as businesses realize the importance of insightful data analysis. But what exactly is big data, and how does it correspond to artificial intelligence? We will compare the two realms, what they are, their differences, and how the combination of both leads to results beyond traditional human capability. What are big data and artificial intelligence? Big Data is a field focused on managing large amounts of data from a variety of sources. Big data comes into play when the volume of data is too large for traditional data management practices to be effective. Companies have long collected massive amounts of information about consumers, pricing, transactions, and product security, but eventually the volume of data collected proved too much for humans to manually analyze. The essence of big data can be broken into “the three v’s of big data”: - Volume: The amount of data being collected - Velocity: The rate at which data is received and acted upon - Variety: The different forms of data collected (structured and unstructured data sources) Artificial intelligence (AI) is the development and implementation of computer systems that are capable of logic, reasoning, and decision making. This self-learning technology uses visual perception, emotion recognition, and language translation to analyze data and output information in a more efficient manner than human-driven methods. In fact, you likely already interact with AI systems on a daily basis. The largest companies in the world, such as Amazon, Google, and Facebook, use artificial intelligence in their user interfaces.
AI is what powers personal assistants like Siri, Alexa, and Bixby, and allows websites to recommend products, videos, or articles that might interest you. These targeted suggestions aren’t a coincidence; they are a result of artificial intelligence. What is the difference between big data and artificial intelligence? The difference between artificial intelligence and big data lies in the output of each. Artificial intelligence analyzes inputs to learn and improve its sorting or patterning processes over time, using data that it gathers to provide a more accurate diagnostic. In contrast, big data is the overarching pool of information that is accumulated from various data sources, to then be analyzed by artificial intelligence. Big data and artificial intelligence are often used in conjunction with one another, but each fulfills a very different role: one is information, and the other is a treatment of that information. How big data and artificial intelligence work together Big data and artificial intelligence are interdependent. Although each discipline is distinct, the presence of each is crucial in allowing for the other to function at its highest degree. AI does use data, but its ability to analyze and learn from this data is limited by the quantity of information that is fed into the system. Big data provides a vast sample of this information, making it the gas that fuels top-end artificial intelligence systems. By harnessing big data resources, artificial intelligence systems can make more informed decisions, provide better user recommendations, and find ever-improving efficiencies in your models. However, an agreed-upon ruleset for data collection and data structure must be in place prior to AI implementation to ensure production of the best data possible.
Some benefits of AI and big data: - Less labor-intensive data analytics - Machine learning helps to relieve common data problems - Doesn’t lessen the importance of humans in the analytic process - More predictable and prescriptive analytics Algorithmia understands big data and AI challenges The world of big data and artificial intelligence can be overwhelming, but these processes are crucial for enterprises to have in place to stay competitive. However, implementation of effective systems comes with its own set of challenges. Algorithmia understands these needs and hosts a serverless microservices architecture that allows enterprises to easily deploy and manage machine learning models at scale.
November is Infrastructure Security Month (ISM) in the United States, and the theme for week two is ‘Secure Public Gatherings: Build in security for mass gatherings, starting with your planning.’ This theme encompasses many things, including bomb threats, active shooters, and attacks from weaponized vehicles and unmanned aircraft. The Cybersecurity and Infrastructure Security Agency (CISA) provides resources to help people identify, prevent, and respond to these threats. One example is the Bomb-Making Materials Awareness Program (BMAP), which increases awareness around the everyday products that can be used to make explosives. BMAP and many more resources can be found in the 2021 Infrastructure Security Month Toolkit. One of CISA’s primary functions is to coordinate the cybersecurity engagements between the Department of Homeland Security (DHS) and nonfederal government entities. CISA also helps state, local, tribal, and territorial (SLTT) government entities protect their networks and other resources through cybersecurity assessments, training, alerts, and more. In addition, CISA manages cybersecurity incident reporting and maintains a catalog of Known Exploited Vulnerabilities. Ways technology is improving emergency response Another DHS agency to know is the Science and Technology Directorate (S&T). This agency serves as ‘the science advisor to the Secretary of Homeland Security and the research and development arm of DHS.’ S&T works with private and public sector partners to improve the effectiveness of the nation’s emergency response. They have several recent accomplishments that align with public safety. Some highlights: - Commercialized a new framework for IoT software trustworthiness. The framework validates system upgrades and provides feedback and statistics to vendors to help them understand current threats. 
- Released a framework to standardize position, navigation, and timing (PNT) equipment requirements, which include GPS and other systems in our critical infrastructure sectors. - Deployed the “Slash CameraPole,” which is a wireless system that detects activity along the U.S.-Canadian border area known as “the Slash.” - Improved TSA screening technology to increase safety and reduce traveler delays and inconvenience. - Developed sensor and scanning technologies that improve border security and provide early detection for disasters like fires and floods. These accomplishments were made possible by the many S&T grants, initiatives, and cross-sector partnerships. Protecting large groups of people is as much about humans as it is about technology. There have been remarkable innovations in cybersecurity, but Infrastructure Security Month is about empowering people to protect the country and each other. If you’d like to learn more about security for mass gatherings, or any other ISM topic, visit the CISA ISM website. Christine Barry is Senior Chief Blogger and Social Media Manager at Barracuda. Prior to joining Barracuda, Christine was a field engineer and project manager for K12 and SMB clients for over 15 years. She holds several technology and project management credentials, a Bachelor of Arts, and a Master of Business Administration. She is a graduate of the University of Michigan. Connect with Christine on LinkedIn here.
Overview of Security Auditor Positions A security auditor is an individual who conducts a systematic evaluation of the security of a company’s information system, including the system’s physical configuration, user practices, information handling processes, and software. Security auditors are also responsible for creating a report that details the effectiveness of the security system, explains problems within it, and counsels management on how to become compliant. Legislation such as HIPAA and the California Security Breach Information Act has increased the desirability of and necessity for security auditors. Security Auditor Job Description - Design and execute audits - Establish the audit’s objectives - Assess the overall structure of a business’s or organization’s systems - Facilitate risk assessments - Utilize testing matrices and risk assessment - Interpret data - Offer written as well as oral reports on audit findings - Create clear and effective practices for organizations that aim to improve security at every level. - Assess computer systems - Evaluate an organization’s IT budget - Define criteria for audit and interpret results In many cases, it may not be the technology that is the source of technical weakness in an organization; it may be the employees. Behavioral auditing analyzes user behavior to essentially check for human error. This component of the audit may include a penetration test where the auditor switches roles and attempts to gain access to user information, mimicking a malicious hacker. Employees may be vulnerable to phishing attacks or engage in behavior that is not considered best practice, such as sharing passwords in emails, leaving computers unlocked, etc. This is another component of security auditing. Based on your starting point (whether you’re already a security auditor, service member, IT specialist, or a student) there are a few paths to entering the field of security auditing.
We've listed some different levels at which you can engage in security auditing below. These levels will depend on the typical number of years of experience associated with career stages (entry, middle, senior) as well as how specialized your education is. For an in-depth look at how job experience in cyber security and education levels compare and contrast, check out our guide on how to prepare for a career in cyber security. A note on the positions below: some job titles are tiered within that position; a position labeled "mid-level," for example, may have a range between mid and advanced.
- Entry Level: Security Administrator, IT Auditor, System Administrator
- Mid-Level: Security Specialist, Regulatory/Policy Analyst, Security Engineer, Security Auditor
- Senior Level: Senior Security Auditor, Senior Cybersecurity Analyst, Lead Cybersecurity Tester, Advanced Ethical Hacker

Useful skills for security auditors include:
- Ethical hacking
- Strong oral and written communication skills
- A strong code of ethics
- Being a team player
- Working well independently

Security Auditor Job Outlook and Salary

There are thousands of jobs available for security auditors across the nation. This is a highly specialized field. Security auditor positions are projected to grow by 18% by 2024, a much faster rate than most fields. The median salary of a security auditor is $88,890, though it can range from $62,000 to $140,000+.

Security Auditor Resources
- For information on how to utilize the G.I. Bill, head to the U.S. Department of Veterans Affairs.
- To get certified as a systems auditor, check out the Information Systems Audit and Control Association.
- Here is more comprehensive information regarding auditing cyber security.
- For in-depth information on a variety of information security assessment types, we like Daniel Miessler's blog.
<urn:uuid:d6b5995c-2b7f-49e0-9896-205907a098c5>
CC-MAIN-2022-40
https://www.cybersecuritydegrees.com/careers/security-auditor/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335573.50/warc/CC-MAIN-20221001070422-20221001100422-00381.warc.gz
en
0.909124
828
2.546875
3
Passive optical network (PON) technology became available in the mid-1990s. As networks have developed rapidly since then, various PON standards have been established and matured. PON began with ATM PON (APON) and then evolved into Broadband PON (BPON), which is compatible with APON. Later, Ethernet PON (EPON) and Gigabit PON (GPON) arose, bringing great improvements in data transmission distance and bandwidth. This tutorial introduces GPON technology.

GPON is defined by the ITU-T G.984 series of recommendations. GPON represents an increase in bandwidth compared with APON and BPON, and it can be applied in many areas. In fiber to the desktop (FTTD) applications, GPON is distributed via single-mode, simplex optical fiber connectors and passive optical splitters, typically using angled polish connectors (APC) to provide precision terminations. There are four main components in a GPON system: the optical line terminal (OLT), the transmitting media (cabling and components), the fiber optic splitter, and the optical network terminal (ONT).

The OLT is a device that serves as the service provider endpoint of a passive optical network. It is an active Ethernet aggregation device usually located in a data center or the main equipment room. An OLT converts the optical signals transmitted over fiber into electrical signals and presents them to a core Ethernet switch. The OLT replaces multiple layer-2 switches at distribution points. The OLT's distributed signal is connected to backbone cabling or horizontal cabling through optical splitters, which are connected to the optical network terminal at each work area outlet.

GPON transmits signals through the passive, physical cabling infrastructure. The transmitting media include copper, fiber optic patch cords, enclosures, adapter panels, connectors, splitters, and other materials. All these transmitting media components should be factored into the channel loss budget to get better system performance.
Fiber Optic Splitter

A fiber optic splitter, also known as a beam splitter, is an integrated waveguide optical power distribution device. With a fiber optic splitter, multiple devices can be served from a single fiber. It's one of the most important passive devices in the optical fiber network, and it's especially useful in GPON, EPON, FTTx, etc. A PON typically connects a single fiber from an OLT to multiple ONUs, and the connectivity between the OLT and the ONUs is achieved by using fiber optic splitters. The number of outputs in the splitter determines the number of splits; common split ratios are 1:4, 1:8, 1:16, 1:32, and 1:64. The insertion loss of a typical 1x32 optical splitter ranges from 17 dB to 18 dB. Fiber optic splitters include fused biconical taper (FBT) splitters and planar lightwave circuit (PLC) splitters.

The ONT, also called the modem, connects to the termination point (TP) with an optical fiber cable and connects to your router via a LAN/Ethernet cable. It converts the optical signals to electrical signals to deliver to the end device. An ONT usually has multiple Ethernet ports for connection to IP services such as CPUs, phones, wireless access points, and other video components.

GPON Loss Budget

A PON is typically composed of an OLT, ONUs, and other optical transmission media such as fiber cables and connectors, as noted above. Link loss can be caused by any of these components (cable, connectors, patch cords, splices, couplers, and splitters), and it is very important in designing an optical access network. The link budget, covering all optical components between the OLT and ONU, is shown in Table 1.

[Table 1. Loss budget for GPON system: path loss (dB), minimum and maximum optical loss]

GPON Power Budget

The transmitter's power and the receiver's sensitivity are two parameters that influence the reach of the access network. How do we calculate the power budget?
The formula is P = FCA*L + SL + Penalties. P represents the power budget. FCA is the fiber cable attenuation in dB/km. L is the distance in km and SL is the splitter loss. Penalties stands for additional losses such as splices and connectors. The minimum required power budget for different GPON configurations is shown in Table 2.

[Table 2. Minimum power budget for different GPON configurations: L (km), required power budget (dB)]

Now let's calculate the reach of a network system. Suppose that the power budget is about 23 dB. A single-mode fiber cable operating at a wavelength of 1550 nm is used, with an attenuation of 0.3 dB/km. SL is 14 dB, and there are two mechanical splices (0.5 dB per splice) and two connectors (0.5 dB per connector). So the maximum reach of the network can be calculated as (23 - 14 - 2*0.5 - 2*0.5)/0.3 ≈ 23 km.

GPON is the most complex of all PONs, but it is also the most capable. GPON has the benefits of saving costs on moves, adds, and other changes, a low price per port on passive components, easy installation, and low installation costs. That is why GPON has gained popularity in today's diverse and ever-changing technology applications.
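The worked example above is easy to script. The following sketch (an illustration of the article's arithmetic, not part of the original tutorial; the function name and keyword defaults are my own) rearranges P = FCA*L + SL + Penalties to solve for the maximum reach L:

```python
def max_reach_km(power_budget_db, splitter_loss_db, fca_db_per_km,
                 splices=0, splice_loss_db=0.5,
                 connectors=0, connector_loss_db=0.5):
    """Rearrange P = FCA*L + SL + Penalties to solve for the reach L in km."""
    penalties = splices * splice_loss_db + connectors * connector_loss_db
    return (power_budget_db - splitter_loss_db - penalties) / fca_db_per_km

# The worked example: 23 dB budget, 14 dB splitter loss, two splices and two
# connectors at 0.5 dB each, single-mode fiber at 0.3 dB/km.
reach = max_reach_km(23, 14, 0.3, splices=2, connectors=2)
print(f"{reach:.1f} km")  # 23.3 km
```

Swapping in a tighter budget class or a larger split ratio only changes the arguments, which makes it easy to compare GPON configurations.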
<urn:uuid:3eacdc64-8152-4859-a078-081efa66353f>
CC-MAIN-2022-40
https://community.fs.com/blog/overview-of-gpon-technology.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337338.11/warc/CC-MAIN-20221002150039-20221002180039-00381.warc.gz
en
0.89854
1,283
3.375
3
Planning & best practices for network security in the workplace

Ransomware, a form of malware, is one of the most profitable criminal business models in the history of malicious computer software. 2017 saw over 40,000 attacks per day, with ransomware hiding in over 40 percent of all email spam. In May of 2017, the "WannaCry" ransomware hit 150 countries by accessing employees' computers. In just one day, it infected more than 230,000 computers, with an estimated loss of $4 billion. New strains of ransomware are hitting the cyber world on a continual basis, and Gartner predicts that by 2020, 60 percent of security budgets will be reserved for detection and response capabilities.

What is Ransomware?

Ransomware is malicious software that locks or encrypts computer files, according to the security awareness training company KnowBe4. With the files "stolen away," the organization must pay a ransom in electronic currency to get those files back or to have the device unlocked. These ransoms can range from $500 up to millions of dollars, sometimes with a looming one-week deadline at which time the price starts to rise. Once the fee is paid, the cybercriminal provides a key to unlock or decrypt the stolen computer files. Ransomware can even spread beyond an employee's personal workstation, working its way across a company's entire network and encrypting all the files in its path. Unfortunately, cybersecurity threats will see a substantial rise into 2018, according to Gartner. Organizations need to stay alert and be prepared for these potential threats. Here are a few strategies businesses can take to increase network security.

Getting Employees to Know the Threat

Understanding potential ransomware threats and educating employees is a first step in fighting back against cybercriminals. Ransomware could infect employees' computer files in a variety of ways, including:

Email Phishing
91 percent of cyberattacks start with a phishing email, according to a report by PhishMe.
The emails are designed to trick employees into clicking an infected link or opening an infected attachment. The email will usually look like it's from an organization that the employee would recognize and assume was real.

Texting or SMS Phishing
This is a similar form of trying to trick people by appearing as a familiar or safe entity, but through texting. These texts try to get employees to click on a link or enter personal information. Android- and iOS-based phones and tablets are often targeted in this method.

Voice Phishing
These are automated voicemails that trick people into calling a number or entering information through their smartphone, like a credit card number. The incoming numbers can also be electronically forged so they appear to come from a real source. Attackers will often pick an area code or phone number that seems familiar, for example from the person's hometown or current town. When the person calls the number back, they may be given instructions on how to fix a supposed problem with their phone. The caller then follows the directions to fix the problem; however, they are actually installing ransomware on their own device.

Social Media
Social media is used in many organizations today, from LinkedIn to Facebook to Twitter. Ransomware is creeping into social media by enticing people to click on a link or a thumbnail of an image. There is commonly a natural response to open image files, but once the thumbnail has been clicked, a file automatically downloads and the device is infected.

Ads & Images on Websites
Sometimes malicious software can be placed right into online ads or images on websites; it can even be an ad for an actual product.

With the increasing value of usernames and passwords on the black market, multifactor authentication is an underrated end-user security strategy. By requiring users to present two pieces of identification — ranging from tokens to security codes — at each login, multifactor authentication provides an added layer of safety.
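The "security codes" mentioned above are commonly generated with the time-based one-time password (TOTP) algorithm standardized in RFC 6238. The sketch below is an illustrative stdlib-only implementation for understanding the mechanism, not the article's recommendation or any vendor's product:

```python
import hashlib
import hmac
import struct
import time


def totp(secret, for_time=None, step=30, digits=6):
    """RFC 6238 TOTP: HMAC-SHA1 over the time-step counter, then
    dynamic truncation down to a short numeric code."""
    if for_time is None:
        for_time = int(time.time())
    counter = struct.pack(">Q", int(for_time) // step)      # 8-byte big-endian
    digest = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                              # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 4226/6238 test key at T=59 seconds yields "287082" with 6 digits.
print(totp(b"12345678901234567890", for_time=59))  # 287082
```

Because the code changes every 30 seconds, a phished password alone is not enough to log in, which is exactly the added layer the paragraph above describes.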
As more and more enterprises move toward digital transformation, an inevitable process for successful business models, network security is becoming a top priority. From network architecture to end-user caution, it takes a variety of diligent efforts to keep an enterprise network secure. Learn More in a Live Webinar To learn more about best practices for network security, register for the upcoming webinar on March 21, 2018, at 9 a.m. PT.
<urn:uuid:69da84a8-d6c3-44b4-8ea1-248868fdee18>
CC-MAIN-2022-40
https://cradlepoint.com/resources/blog/planning-best-practices-network-security-workplace/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337853.66/warc/CC-MAIN-20221006155805-20221006185805-00381.warc.gz
en
0.92805
905
2.765625
3
The healthcare industry is playing a critical role in curbing the transmission of COVID-19. This unprecedented crisis has accelerated the implementation of technology in the healthcare industry, where such solutions had previously struggled to prove their value. If we look closely, only the countries that have adopted digital technologies and implemented them in policy and health care have been successful in containing this disease. These technologies are enabling the healthcare ecosystem to perform more efficiently by providing optimal solutions across a broad range of applications in the healthcare industry. Here are a few technologies that are set to shape the future of the healthcare industry.

Artificial Intelligence (AI): The cost and difficulty the general public faces in receiving proper healthcare in the United States have always been a subject of debate. On top of that, COVID-19 has created a severe strain on healthcare resources across the world. This issue can be tackled with the help of AI tools and techniques that employ intelligent algorithms to optimize the performance of hospitals and health organizations. Powerful AI tools analyze large databases of patients and citizens and provide solutions to enhance the experience of healthcare services for both the general public and healthcare providers. The application of AI enables healthcare organizations across the world to boost efficiency and streamline daily functions. AI has also been an important asset in facilitating rapid diagnosis and risk prediction of COVID-19. It enables researchers to run hundreds of parallel trials of vaccine development in a short time, moving them one step closer to finding a vaccine or cure for several deadly diseases.

Machine Learning (ML): Advanced machine learning algorithms are a powerful tool for the identification and diagnosis of diseases that are otherwise hard to diagnose.
Machine learning also helps in the early-stage drug discovery process. Today, machine learning is being put to use in monitoring and predicting epidemics around the world. With access to a large amount of data collected from satellites, real-time social media updates, etc., artificial neural networks help to collate this information and predict everything from malaria outbreaks to COVID-19 outbreaks. Currently, machine learning technology is also incorporated in fitness trackers and other devices that we use daily. For example, the new Apple Watch Series 6 uses advanced ML and AI algorithms and powerful hardware to classify your movements, track your heart rate, and even analyze your blood oxygen levels. This is an important health feature because oxygen saturation has become a key metric to monitor during the COVID-19 pandemic, as low blood oxygen levels are one of the symptoms of COVID-19. Thus, ML and AI are destined to add further value to the healthcare industry in the coming days.

Cloud Computing: Mandatory practices such as Electronic Medical Records (EMR) have already primed healthcare systems for cloud computing and data analytics. Hospitals are gathering an overwhelming amount of patient data amid the coronavirus pandemic. Also, the amount of data that needs to be shared or generated, and the speed at which it all happens, puts a lot of pressure on healthcare professionals. This is where cloud computing comes in. With cloud computing, the healthcare industry will find storing and managing this huge amount of data more efficient and cost-effective. Innovators like Intone Networks enable hospitals to upload, share, and recover data quickly to achieve higher overall performance with their cloud computing services.

Telemedicine: During this pandemic, doctors and patients alike are at risk of contracting the coronavirus in hospitals. During this critical time, telemedicine has been an effective tool in containing the spread of COVID-19.
The growth of the 5G network has enabled the use of virtual care platforms, video conferencing, and digital monitoring to reduce exposure to the virus. On top of that, involving cloud computing in telemedicine systems ensures that we have reliable and safe telemedicine practices.

Traditional businesses and IT sectors are not the only fields being impacted by digitization in these hard times. Healthcare is a field that is highly suitable for the application of these advanced technologies. Thus, the use of technologies such as AI, ML, and cloud computing in healthcare extends beyond the scope of COVID-19. These technologies are set to make healthcare safer, more efficient, and accessible to everyone.
<urn:uuid:bcf51ddc-1ba9-4c59-a1f5-f24e44b6b7dd>
CC-MAIN-2022-40
https://intone.com/technology-in-healthcare-future-of-the-healthcare-industry/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337853.66/warc/CC-MAIN-20221006155805-20221006185805-00381.warc.gz
en
0.944157
877
3.0625
3
The twentieth century was a time of significant technological progress for the human race. From the first flight to the moon landing to the internet, a single lifetime was enough to witness stupendous changes to the world. At the same time, nations perceived threats of conflict from each other. That's why many locations around the world saw the construction of subterranean bunkers for people to inhabit for extended times. The original purpose of these bunkers was to keep people safe from any conflict on land. If you've seen the 1999 romcom Blast from the Past (Brendan Fraser, Alicia Silverstone), you have a good idea of their intended purpose. However, in the 21st century, such major military conflicts are not likely to occur, and the bunkers were abandoned for a while.

Why shouldn't we put these bunkers to good use now? That's what smart people in the IT industry have started doing. Yes, these shelters are now increasingly used to store the machines that keep our digital information. Besides the bunkers, there are abandoned limestone mines peppered around the globe which are also candidates for use as modern data centers. The original designs had to be altered to ensure that the machines don't heat up and that the facilities comply with safety and security standards.

Why go through the trouble of constructing a subterranean data center when we can use ordinary buildings? Let's consider the practical benefits of underground placement. Instead of constructing a building from scratch or occupying space in an existing building on a lease, it is far cheaper to use an abandoned bunker or mine to operate a data center. Data centers range in floor space requirements: for some, a few tens of thousands of square feet may be enough, while for larger operations, even a million square feet may be necessary. Underground data centers can provide leasable floor space within this range.
A regular building may charge $55-$65 per square foot in rent. A mine or bunker, on the other hand, is an existing space built of sturdy materials with tried and tested designs. Some modifications may be required to adapt the area to operational specifications, and this work can go on for years regardless of the weather on the surface. Once the initial architectural adjustments are made, you can start the wiring and placement of the machines. The whole process is much quicker. In addition, the maintenance costs are lower, which leads to lower per-square-foot rental costs for the customers. With regard to construction permits, local authorities are less bureaucratic when it comes to subterranean structures.

Underground data centers can provide a useful perk that traditional data centers lack. Any company wishing to keep the location of its data center a secret would do well to deploy to such a site. Buildings are usually located in public areas where information can be obtained about what is found inside any particular structure. By contrast, there won't be any foot or vehicle traffic next to an underground data center. Deliberate sabotage becomes much more difficult when the data is not in a traditional building.

Even if secrecy isn't a goal, every data center needs to meet stringent security standards, and this is easier to implement in underground locations. Usually, you won't find signs elaborating on what is located inside a subterranean facility. Multiple layers of physical security, such as guards, ID checks, document/authorization checks, and inventory checks, are maintained regularly. Technology is also employed to grant access using only biometrics. While in a traditional data center someone may gain unauthorized access through points of entry that are only open to maintenance or custodial staff, in an underground data center the original layout prevents forced entry and eliminates CCTV blind spots.
It is worth mentioning that subterranean data centers have 24-hour security teams, trained guard dogs, infrared cameras, and military-grade electromagnetic pulse protection.

If you know anything about data center services, you understand how critical uptime is. Most data hosting providers promise 99.99% uptime. To deliver this guaranteed service level, data centers need to be as sure of the power supply as the sun rising in the east. Subterranean data centers deploy generators, uninterruptible power supplies, power distribution units, remote power panels, and programmable logic controllers. As these units are located underground, they are not exposed to the elements; they remain cool during summer and are protected from rain and snow as well. Technicians and engineers find it easier to maintain equipment underground and respond quicker in case of a fault or breakdown. The systems have full redundancy, so in case a unit shuts down, other equipment can pick up the load. It's not unusual for underground data centers to generate power in the megawatts.

Major internet service providers partner with underground data centers to lay down the fiber lines for connection to the World Wide Web. For a while now, broadband connections have been based on underground optical fiber cables. Underground data centers are served more efficiently because they're at the same level as the data transmission lines! There is no need for the miles of extra wiring that are necessary for data centers in traditional buildings. That saves cost in terms of cabling and fixtures, and it simplifies troubleshooting and repairs. The cabling process will also be completed much faster than for a traditional data center, saving labor costs and enabling quicker operationalization. Academic studies show that 50% of a data center's energy consumption goes to cooling.
Subterranean data centers have geographical and geological advantages, such as zero solar heat gain, low ambient temperature, natural geothermal cooling, and structures surrounded by solid rock. The reduced energy consumption of the air-conditioning and mechanical ventilation (ACMV) systems in underground data centers is a significant incentive for companies to use underground colocation facilities. The hotter and more humid a place is, the greater the advantage of subterranean placement. Putting whizzing and whirring machines in a confined underground space will raise the temperature, so to prepare any such facility for data center operations, ventilation pipes have to be drilled vertically or horizontally. Overall, you save on energy costs in underground data centers.

Subterranean data centers aren't just a fad; this trend is here to stay. It's likely that as land costs rise and leasing becomes more expensive, top executives will opt for underground data centers. Fortune 500 companies and governments are already utilizing them. Currently, most such colocation facilities are in the United States and Europe, but subterranean data centers are expected to launch all over, especially in hot climates.
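The cooling share cited above maps directly onto the industry's standard PUE (power usage effectiveness) metric: total facility power divided by IT equipment power. This sketch is illustrative only; the kilowatt figures are hypothetical, not from the article:

```python
def pue(it_kw, cooling_kw, other_kw=0.0):
    """Power Usage Effectiveness: total facility power / IT equipment power."""
    return (it_kw + cooling_kw + other_kw) / it_kw

# If cooling consumes as much power as the IT load itself (the ~50% figure above):
print(pue(500, 500))   # 2.0
# Halving cooling power, e.g. via geothermal and ambient cooling underground:
print(pue(500, 250))   # 1.5
```

A PUE of 1.0 would mean every watt goes to computing, so any reduction in cooling load from the underground placement moves the facility closer to that ideal.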
<urn:uuid:6e56cbfd-e116-449e-81f7-9450b9a8d2f0>
CC-MAIN-2022-40
https://www.colocationamerica.com/blog/subterranean-data-centers-popularity
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030333455.97/warc/CC-MAIN-20220924182740-20220924212740-00581.warc.gz
en
0.930552
1,381
2.796875
3
The LAG function returns data from preceding rows. LAG uses the following syntax:

LAG ( Column name, Offset, Default )

LAG returns the value at an offset number of rows before the current row. Use the LAG function to compare values in the current row with values in a previous row. Use the following arguments with the LAG function:
- Column name. The column name whose value from the prior row is to be returned.
- Offset. The number of rows preceding the current row from which the data is to be retrieved. For example, an offset of "1" accesses the previous row, and an offset of "3" accesses the row that is three rows before the current row.
- Default. The default value to be returned if the offset is outside the scope of the partition. If you do not specify a default, the default is NULL.

For more information about the LAG function, see the
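The semantics above can be emulated in ordinary code. The Python sketch below is an analogy for understanding LAG's column/offset/default behavior, not Informatica's implementation; the helper name and sample data are invented for illustration:

```python
def lag(values, offset=1, default=None):
    """Return a list where each position holds the value `offset` rows
    earlier, or `default` when that row falls outside the partition."""
    return [values[i - offset] if i - offset >= 0 else default
            for i in range(len(values))]

sales = [100, 120, 90, 150]
print(lag(sales))                       # [None, 100, 120, 90]
print(lag(sales, offset=3, default=0))  # [0, 0, 0, 100]
```

As in SQL, the first `offset` rows have no preceding row to read from, so they receive the default (NULL in SQL, `None` here).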
<urn:uuid:fd336c73-e2ba-4a88-87e8-830971bf6663>
CC-MAIN-2022-40
https://docs.informatica.com/data-engineering/data-engineering-integration/10-2-hotfix-1/big-data-management-user-guide/stateful-computing-on-the-spark-engine/window-functions/lag.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335276.85/warc/CC-MAIN-20220928180732-20220928210732-00581.warc.gz
en
0.695258
194
2.796875
3
Odds are, the last time you were at a coffee shop, you thought, “I’m paying for the coffee – I might as well use the free Wi-Fi.” There’s no doubt that Wireless Internet, or Wi-Fi, connections are everyone’s favourite free service. However, using free, public Wi-Fi is dangerous, especially for a business professional. 1. Anyone can access it Unless a Wi-Fi router is protected with a passcode, it’s likely that the network is vulnerable to hacking attacks and other threats. Even with a passcode, if the router isn’t configured properly, it’s still vulnerable. When anyone can access a Wi-Fi signal, nothing stops a hacker from connecting to the router and spying on others who are connected to the network. It’s worth mentioning that, although we use the term “hacker,” even a mischievous child with a bit of curiosity could access your files if the Wi-Fi connection isn’t secure. 2. It’s highly used The more people who use a Wi-Fi connection, the more likely it is that one of them will be a hacker. Hackers know that free Wi-Fi draws a crowd, so they use the service themselves for the convenience of finding many new targets at once. 3. Data isn’t encrypted Quite often, open and free Wi-Fi networks do not bother with encrypting your data. Encryption is an extra layer of protection for data that’s sent to, and received from, a Wi-Fi connection. When routers have encryption, hackers have a harder time stealing data. No encryption? No protection. While unsecured public Wi-Fi routers aren’t something that you have direct control over, you can be cautious about using them when you’re out and about. There are three best practices for avoiding a potentially risky Wi-Fi network in a public place: If your business needs its employees to stay connected while on the move, contact Michael Anderson. We’ll help your team understand the best way to work around potentially threatening situations. Learn more about our 365Care+ solution.
<urn:uuid:2dd1df87-caaa-4bb8-9558-f51009a8ab9a>
CC-MAIN-2022-40
https://www.365tech.ca/be-careful-using-public-wi-fi/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335276.85/warc/CC-MAIN-20220928180732-20220928210732-00581.warc.gz
en
0.91505
474
2.59375
3
Container Computing: A Primer

You've probably noticed that software runs differently in different computing environments and within different networks. It doesn't matter if it's the same software version; simply moving it around to various environments and platforms can cause a myriad of funky issues to arise. For example, you may be testing software on, say, Bob's laptop. But when Bob pushes the software into a data center environment, it just doesn't seem to work quite right. Or, your development team may move software from the staging environment, where it runs smoothly, into production, where problems creep up.

So, what gives? While different versions of the software seem like the likely culprit, any number of changes can cause software to behave differently. Different environments and systems, after all, may have different network topology, changes in security policies, or even different storage capacities. The solution: a technology referred to as container computing.

What is container computing?

"Containers" refers to a technology designed to allow software to run reliably when it's moved from one environment to the next. It's the entire runtime environment in one nifty package: the application (and all its dependencies, libraries, and configuration files), lumped into one "container." Keeping the platform and its dependencies together, or "container-izing" it, sidesteps the problem of worrying about how the software will react in different environments.

But be careful not to confuse container computing with virtualization. In virtualization, instead of a container getting moved around, it's a virtual machine, which includes an OS and application. When a server runs virtual machines, there's a hypervisor included and a separate OS per machine. However, one server running various containerized applications uses a single OS, with the kernel shared among all containers.
Simply put: containers are a far more lightweight resource than what you typically get in a virtualized environment. How container computing benefits companies There are other benefits to container computing. For one, they’re small. We’re talking megabytes versus the gigabytes of memory required by a virtual machine. This makes them efficient; a single physical machine can support many containers when it may only be able to host a few virtual machines. Plus, unlike virtual machines, which may take several minutes to boot up, containers frequently fire up in seconds. They can be used on an as-needed basis, versus draining resources. And finally, there’s the opportunity for modularity. Complex applications don’t need to run within a single container. Instead, they can be split into modules if you choose—application there, the database there, and so on. Splitting it up allows for easier management, and this is the fastest-growing area of exploration our customers’ development teams are working on today. Docker versus Kubernetes Docker is a technology brand, largely credited with today’s container computing popularity. It’s a container platform, whereas Kubernetes (originally created by Google) is an open-source platform for container management. They both play in the same space, but they aren’t the same. With Docker, you can create and run apps using “Dockerfiles” anywhere, almost instantly. And like containers are meant to do, your software will run the same no matter where you command it—your laptop, a production server, wherever. Kubernetes comes in once you want to step up your game and run multiple containers across multiple machines. Suddenly, there’s a lot more to manage, right? Well, Kubernetes’ goal is to simplify this management. Kubernetes takes the manual hassle out of the process, making it easy to have many containers work effortlessly together. This is referred to as orchestration. (Note: Kubernetes isn’t the only container management option. 
Docker also has a container management arm, called Swarm. While Kubernetes can be used to manage Docker containers, it is not used for interaction with Swarm).

The modern cloud

Many of our clients work with and like virtual machines. And while VMs may be the preferred choice for certain environments or companies, we're seeing an increase in the demand for easy-to-deploy solutions like containers. Container computing offers a minimalist approach to cloud management; containers are portable and just easy to use—all reasons why many IT professionals are exploring them as a solid alternative to virtual machines. If you have a question about container computing or wish to explore whether containers are the right choice for your applications, IT1 can help.
What is Virtualization?

Virtualization means decoupling software from hardware, enabling mobile network operators to handle dynamic and challenging use cases, such as enhanced mobile broadband (eMBB), massive connectivity (mIoT), and ultra-reliable and low latency communications (URLLC). It allows you to use a physical machine's full capacity by distributing its capabilities among many users or environments.

What is RAN Virtualization?

Radio Access Network (RAN) virtualization is a key architecture concept in 5G, providing flexibility and scalability for mobile network operators (MNOs). A virtual RAN consists of a centralized pool of baseband units (BBUs), virtualized RAN control functions and service delivery optimization. With a virtual RAN, baseband modules are moved away from the cell site and into a data center. As a result, the functions of the BBUs can be implemented as virtual machines in a centralized data center. This provides intelligent scaling of computing resources while decreasing energy consumption and capital expenditure (CAPEX).

What Forms Can RAN Virtualization Take?

Virtualization can be applied to the RAN in several forms: spectrum virtualization, hardware sharing, virtualization of multiple radio access technologies (RATs), and virtualization of computing resources. Spectrum virtualization allows the available spectrum to be utilized more efficiently by permitting multiple network operators to share the same spectrum. Hardware sharing is of particular relevance for small cells in order to avoid massive over-provisioning. The virtualization of multiple RATs allows simplified management of different RATs, each dedicated to different services and offering different quality of service (QoS). The virtualization of computing resources is a newer option that builds on the idea of co-locating the processing resources of multiple base stations (BSs) at a central processing center.
While early implementations provided each physical BS with its own dedicated computing resources, which resulted in an over-provisioning of computing resources, more advanced implementations permit a dynamic reassignment of processing resources to BSs. A flexible Radio Access Network (RAN) is the cornerstone of 5G networks. The evolved RAN architecture, designed with cloud-native virtualization techniques, enables the RAN to flex and adapt based on usage and coverage. This flexibility provides expanded and more convenient network location choices for the baseband processing. It offers a strategic differentiation by enabling the Remote Radio Units (RRUs) to interwork with the Virtualized Baseband Unit (vBBU) over a non-ideal fronthaul (e.g., Ethernet), overcoming the traditional constraints of CPRI over fiber.
Regardless of how familiar you are with information security, you've probably come across the term 'malware' countless times. From accessing your business-critical resources and sensitive information to halting business operations and services, a malware infection can quickly become an organization's worst nightmare come true. As a business owner, you must be aware of the implications of different types of malware for your company's bottom line, and what steps you can take to protect your company from future attacks. This article will walk you through the various types of malware, how to identify and prevent a malware attack, and how to mitigate the risks.

What is Malware?

Malware, a combination of the terms 'malicious' and 'software,' includes all malicious programs that intend to exploit computer devices or entire network infrastructures to extract victims' data, disrupt business operations, or simply cause chaos. There's no definitive method or technique that defines malware; any program that harms the computer or system owners and benefits the perpetrators is malware. Malware usually exploits unpatched software vulnerabilities to compromise an endpoint device and gain a foothold in an organization's internal network. It could be hidden in a malicious advertisement, a fake email or an illegitimate software installation. Cybercriminals often leverage social engineering tactics like phishing and spear-phishing to propagate sophisticated malware. From mining cryptocurrency to launching DDoS attacks against networks, there are countless ways in which malware can access and utilize victims' computers and data.

Warning Signs of Malware Infection

How often have you ignored unusual system slowdowns or unexpected pop-up messages? Unfortunately, this could be your computer trying to give away the presence of malware. To stop a malware attack in its tracks, you must first be able to identify an infection.
Here are some of the key signs that almost always indicate malware progressing in your computer system:

- Your computer starts running slowly and takes forever to boot.
- Your computer screen freezes or the system crashes, displaying the 'Blue Screen of Death' (BSOD).
- Your web browser keeps redirecting you to unknown, suspicious websites.
- Security warnings keep popping up, urging you to take immediate action or install a particular security product.
- Many pop-up ads start appearing randomly.

All of these could be typical signs of malware. The more symptoms you see, the more likely it is that you're dealing with an infected computer. But don't rely solely on the list above. It is not unusual to have your system or network infected with malware, such as spyware, that lingers secretly with no apparent symptoms. Don't worry, though. We'll be discussing how to detect and remove malware silently lurking in your system, exfiltrating sensitive data.

Common Types of Malware

Malware can be categorized based on how it behaves (adware, spyware and ransomware), and how it propagates from one victim to another (viruses, worms and trojans). For instance, computer worms are self-propagating malicious software, while trojans need user activation to infect and spread. Here are a few of the most common malware types that most people have heard of, and how they continue to wreak havoc across industries.

1. Adware

If you're lucky, the only malware program you've come in contact with is adware, which attempts to expose the compromised end user to unwanted, potentially malicious advertising. A common adware program might redirect a user's browser searches to look-alike web pages that contain other product promotions. Statistics gathered between October and December 2019 by Avast's Threat Lab experts show that adware was responsible for 72% of all mobile malware, with the remaining 28% consisting of banking trojans, fake apps, lockers, and downloaders.
2. Spyware

Spyware can silently infect a computer, mobile device or tablet, collecting keystrokes, gathering sensitive data, or studying user behavior, all while victims remain entirely unaware of the intrusion. Hackers may use a keylogger to capture sensitive information, including payment details and login credentials of victims, or they may leverage a screen grabber to capture internet activity. A common type of spyware is a RAM scraper that attacks the memory (RAM) of electronic point-of-sale (POS) devices to scrape customers' credit card information. One of the most notorious examples is the BlackPOS spyware that compromised the data of over 40 million Target customers in 2013.

3. Ransomware

Ransomware is one of the most widespread cyber threats, making up at least 27% of all malware incidents as per Verizon's annual DBIR report (2020). Ransomware programs gain access to a computer's file system and execute a payload to encrypt all data. The data is neither stolen nor manipulated. Shortly after a ransomware attack, cybercriminals will demand a ransom amount, usually in cryptocurrency, in exchange for the cipher key. Programs such as Windows Defender and McAfee do have protections in place to remove ransomware, but these programs are often not kept up to date, leaving businesses vulnerable to ransomware attacks. WannaCry is well known for the stir and panic it caused in May 2017 by affecting thousands of NHS hospitals, delaying critical medical procedures, and rerouting ambulances. The ransomware leveraged a Microsoft exploit, EternalBlue, for which a patch already existed that many had simply not applied. Unfortunately, most of the data it encrypted was lost for good due to faulty code.

4. Computer Viruses

A virus is the most commonly known form of malware. It differs from other malware in its ability to attach to a host file and infect other files on the computer system.
It copies itself whenever the file is copied, and once a user opens the file, the virus payload is executed. Viruses can be highly destructive, infecting the hard drives of victims' computers and overwriting or exfiltrating critical information. Email attachments are the top vector leading to virus infections. Computer viruses often utilize deception techniques and keep evolving to evade antivirus software. Viruses like CIH (Chen Ing-hau) do not increase the size of the host file, thus remaining undetectable to antivirus programs that detect viruses based on file size.

5. Computer Worms

A worm is quite similar to a computer virus, except that it is standalone software that does not rely on a host file or a user to propagate itself. A worm is self-replicating and can quickly spread across computer networks by distributing itself to the victim's contact list and other devices on the same network. A firewall can be effective in stopping the spread of worms through network endpoints. However, antimalware is required for detecting worms disguised as email attachments. NotPetya shook the entire world in June 2017. It was undisputedly the fastest-spreading, most destructive worm, crippling hospitals, multinational companies and pharmaceutical giants globally by irreversibly encrypting systems' master boot records.

6. Trojan Horse

A trojan horse is a malware program that advertises itself as legitimate software and tricks users into downloading and executing it. Once activated, it can harm the victim's computer in several ways, including keylogging. Most notably, it can create a backdoor that bypasses firewalls and security software to give remote access to unauthorized users, who can steal data and control the computer system. Trojans cannot self-replicate and are often propagated through email attachments and internet downloads. The backdoor trojan PlugX compromised around 7.93 million customer records from a Japanese travel agency, JTB Corp, in July 2016.
And it all started with a single employee falling prey to a phishing email.

7. Botnets

A botnet is a network of internet-connected 'zombie' computers that can execute coordinated actions after receiving commands from a centralized server. Bots secretly infect a computer, which then becomes a part of the bot network. Botnets can be used to launch spam email campaigns and distributed denial-of-service (DDoS) attacks, leveraging hundreds of thousands of compromised computers. Conficker, or Downadup, is a fast-propagating malware discovered in November 2008. Over the years, it has infected millions of computers to create a botnet. Cybercriminals can utilize the botnet to carry out malicious activities, such as phishing, identity theft and bypassing security to access private networks.

Less Common Types of Malware

In addition to the types discussed above, there are many other types of malware that are less common but equally destructive.

1. Rootkit

A rootkit is a collection of software tools that can gain access to an operating system and assume administrative privileges. It can use the acquired privileges to facilitate other types of malware infecting a computer. Moreover, it can take over browsing sessions to block access to webpages that offer antimalware programs.

2. Fileless Malware

Fileless malware is malicious code that exploits legitimate software programs and operating system tools to infect a computer's memory. As the name suggests, it does not need a file system to spread, and therefore leaves no trace for detection by traditional antimalware programs.

3. Scareware

Scareware is basically a scam used by attackers to trick victims into thinking that their computers or mobile devices have been compromised. It typically displays pop-ups on webpages to scare a user into purchasing and installing fake, potentially harmful, security software. Today, bad actors often launch cyber attacks that combine several malware types.
For instance, a worm could quickly self-replicate and deliver an executable that encrypts file systems across computer networks, launching a massive ransomware attack. These hybrid forms of malware are even harder to detect, contain and remove.

How to Protect Your Business From Malware

The threat landscape is ever-evolving, and so are the security mechanisms. With malware becoming more sophisticated than ever, businesses must stay ahead of the cybersecurity game by ensuring that:

- All business applications and operating systems are always up to date, and available patches for known software vulnerabilities are installed.
- Antimalware scans are run regularly across all devices that access the internal network.
- Employees only install apps and software that they actually need, from legitimate sources.
- Mobile devices that access the private network are also well-equipped with mobile security solutions.
- Single sign-on (SSO) and multi-factor authentication (MFA) mechanisms are implemented to protect against keylogging.
- In flexible working or bring-your-own-device (BYOD) environments, employees have separate PCs for work and personal use.
- Employees are aware of cybersecurity best practices, and regular security awareness workshops are conducted.
- Employees are knowledgeable enough to spot a phishing email and double-check before providing sensitive information.
- Your organization has invested in Security Information and Event Management (SIEM) software to aggregate and analyze event logs generated by networks and applications.
- If you work with an MSP (managed service provider), make sure they are also a managed IT security provider.
Certain certifications will help you identify whether or not they can provide a high level of security, including, but not limited to:

- Certified Information Systems Security Professional (CISSP)
- AICPA Service Organization Control Reports SOC 2 Certification
- MSP Alliance Cyber Verify AAA Rated Company

How to Get Rid of Malware

No single security program is enough against malware that is known to morph and evolve rapidly to avoid detection. With today's virtually endless array of endpoint devices and a huge attack surface, security incidents are inevitable. A reputable enterprise antimalware program can detect installed malware, quarantine the infected device to avoid transmission, and remove the malware. But let's not forget that preventing a malware infection altogether is much easier than getting rid of it once it has infiltrated your IT infrastructure. The best course of action is to adopt a proactive approach to cybersecurity.
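The regular antimalware scans recommended above typically begin with signature matching: hashing each file and comparing the result against a database of known-bad hashes. Here is a minimal, hypothetical sketch in Python (real engines layer heuristics and behavioral analysis on top of this):

```python
import hashlib
import os
import tempfile

def sha256_of(path):
    """Hash a file in 64 KB chunks so large files don't exhaust memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def scan(path, known_bad_hashes):
    """Flag a file whose hash appears in the known-bad signature set."""
    return sha256_of(path) in known_bad_hashes

# Demo with a throwaway file standing in for a suspicious download
fd, demo_path = tempfile.mkstemp()
os.write(fd, b"harmless demo content")
os.close(fd)

demo_hash = sha256_of(demo_path)
hit = scan(demo_path, {demo_hash})   # its hash is in the "bad" set
miss = scan(demo_path, set())        # empty signature database
os.remove(demo_path)

print(hit, miss)  # True False
```

This sketch also illustrates the weakness of pure signature scanning: change a single byte of the file and the hash no longer matches, which is why polymorphic and fileless malware require the heuristic and behavioral layers mentioned above.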
What is a Zip Bomb?

GRIDINSOFT TEAM

The classic zip bomb is a tiny archive, usually measured in kilobytes. When this file is unpacked, its contents are larger than the system can handle. Typically, these are hundreds of gigabytes of data, and the more advanced examples may reach petabytes (millions of GB) or even exabytes (billions of GB). So, yes, to be clear: we're talking about packing exabytes into kilobytes.

The first mention of a zip bomb is dated 1996. A user of the then-popular messaging service Fidonet posted a malicious archive on a bulletin board, which an unsuspecting administrator opened. When such a file is opened, it starts unpacking all the data, which causes the program or the entire system to crash, as there is simply not enough space to unpack this amount of data.

42.zip - A Classic Zip Bomb

The most common zip bomb you can find on the Internet is "42.zip". It weighs only 42 KB in packed form. However, if you unpack it, you get about 4.5 petabytes (roughly 4,500,000 GB) of data! This is achieved by a recursively nested system of zip files, where each file at the lowest level decompresses to 4.3 GB. The construction uses the most common decompression algorithm, which is compatible with most zip parsers.

How Does a Zip Bomb Work?

The principle of the zip bomb is that it creates, for example, a text file that is either empty or contains the same symbol repeated, and archives it. Because the file contains only repeating information, it compresses to a much smaller size than ordinary data. Then 16 copies of that archive are created; since they are completely identical, they compress down to almost nothing. Then another 16 copies, then another 16 copies, and so on 6 times. Eventually, we have 6 layers of 16 archives, each of which contains 16 identical archives.

What is compression?

Compression is the reduction of the number of bits required to represent data.
Let's look at this in more detail with a simple example. Consider a string 18 characters long in which the three-character sequence 'xxx' occurs four times. Repetition like this is what's known as statistical redundancy. The idea is to take the longest common sequences in the data and represent them using as few bits as possible.

Compressing this string means representing the same information in fewer than 18 characters. Replace every occurrence of 'xxx' with a single symbol, say '$', and see what happens. We now have an intermediate (compressed) string along with an instruction for recovering the original: the first part is our compressed data, and the second is the dictionary. The dictionary we created tells us that, to decompress the data, we should replace every occurrence of '$' with 'xxx' to get back the original. Counting the total number of characters, we now need 10 (compressed string) + 5 (dictionary) = 15 to represent the same information.

What is a Zip Bomb Used For?

Since the zip bomb does not directly damage the system, it is often used to cause a failure or disablement of the program trying to access it. It can also be used to disable antivirus software and so create an opening for other, more typical malware. Rather than hijacking the regular operation of a program, the zip bomb lets the program work as intended; the archive is carefully designed so that unpacking it (for example, during an antivirus scan) takes an excessive amount of time, disk space, or memory (or all three). During this time, the attacker may try to infect the system with a real virus. Sometimes, in attempting to scan the attachment, the antivirus consumes all of the PC's resources, loading the system so heavily that further use of the device becomes impossible.

Where Does the Zip Bomb Come From?

It is almost impossible to catch such a virus accidentally these days. Most modern antiviruses have learned to recognize and neutralize zip bombs, so in practice the effectiveness of such an attack is minimal.
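The redundancy idea described above is easy to demonstrate with the DEFLATE algorithm that zip archives use. A short sketch (the roughly 1000:1 ratio shown here is close to DEFLATE's per-layer ceiling, which is exactly why 42.zip stacks nested layers instead of relying on a single archive):

```python
import zlib

# One megabyte of identical bytes: maximal statistical redundancy,
# the same property a zip bomb exploits at every nesting level.
payload = b"\x00" * 1_000_000

compressed = zlib.compress(payload, level=9)
ratio = len(payload) / len(compressed)

print(f"{len(payload):,} bytes -> {len(compressed):,} bytes "
      f"(about {ratio:.0f}:1)")

# Round trip: decompression restores every original byte
assert zlib.decompress(compressed) == payload
```

A scanner that naively decompresses such data pays the full expanded cost in time and memory, which is the resource-exhaustion effect described above.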
Adware is a prevalent threat across the vast network of the Internet. It's so common because, as you'll find out, it's remarkably easy to distribute and to fool unsuspecting victims with. The following article contains everything you need to know about what adware is and how to protect yourself from it.

In short, adware is unwanted software that displays ads on a computer, typically in the web browser. In addition to desktop computers, adware can also infiltrate mobile devices, often covering the entire screen with intrusive pop-ups. Although adware is not necessarily harmful, it definitely has the potential to be.

Typically, adware creators distribute this intrusive software for ad revenue. Whenever a user clicks on an ad (pay-per-click), sees it (pay-per-view), or installs a program (pay-per-install), revenue is generated. This is adware at its most innocuous: not particularly harmful, but extremely annoying. Some adware is even used to record and collect data regarding your online activities. Once obtained, this data is sold to advertisers so they can better bombard you with targeted ads. And simply clicking on an adware-laced ad can be enough to lead to a serious malware infestation; this is where things begin to get serious.

Before we dive further in, you need to understand that there is a difference between regular ads (from free ad-supported programs/apps or websites) and malicious adware that is intentionally installed onto an unsuspecting user's device. This article will focus on the latter: unbidden adware that either records your online activity or infects your devices with harmful malware.

Like most forms of malware, adware infects devices in a sneaky manner. It can be found hidden within downloadable software that comes from dubious sources, including illegal torrent websites, the dark web, and other unsecure websites.
Still, even some reputable programs and download platforms have been known to contain malware-laced adware. Keep in mind that some downloadable software will allow you to view the complete list of contents bundled into a software package. Before clicking "Next", be sure to carefully read over the text to make sure you know exactly what you're installing. Unfortunately, some software installation tools only mention what extra software will be installed in the EULA. Other, more dangerous software installers have even been known to inject adware or potentially unwanted programs (PUPs) into devices without warning.

Did you know that adware can infect your device simply by your visiting a website? Whether it's a trusted website or a suspicious one like an illegal torrenting or movie streaming platform, if you land on one, you may be in trouble. Hackers exploit browser vulnerabilities in order to inject adware into a website. Should you be unfortunate enough to visit an infected website, adware or another form of malware will find its way onto your system via drive-by downloading, no clicking necessary. Of course, clicking on infected ads is another way that malware can spread throughout your device.

It's not uncommon for adware to disguise itself as legitimate software to trick you into installing it. In this case, it functions like a trojan, wearing a disguise to infiltrate your device and steal your sensitive data.

Check out the following damaging effects that adware can have on your devices and your safety:

Disruption from constant pop-ups — Adware may bombard you with pop-up ads. If you try to close them, they may open other pop-ups or redirect you to suspicious websites. Thus, intrusive pop-ups are not only aggravating, but they can also lead to other malware infections.

Privacy breaches — Some forms of adware can act as spyware, too.
This type of adware tracks your location and internet activity, sending data back to the adware's operator, who then sells it to third parties. Once it's clear what you've been searching for online, the adware will bombard you with targeted ads.

Increased costs from internet data usage — If your mobile device contains adware, then you may see an increase in your data usage, which will consequently lead to higher bills. To avoid this risk, make sure to only install apps from trusted sources. To further protect your devices, consider purchasing a reputable mobile antivirus program (Android or iPhone based).

According to Malwarebytes' 2021 State of Malware Report, adware was the most detected form of malware on Windows, Mac, and Android devices in 2019 and 2020. (Source: Malwarebytes' 2021 State of Malware Report) Furthermore, adware was the second most detected threat to businesses in 2019 and 2020. (Source: Malwarebytes' 2021 State of Malware Report)

In its IT Threat Evolution Q1 2021 report, Kaspersky notes that in 2020, most malicious objects detected on macOS platforms were adware. The creators of these adware programs have begun updating their code to include support for the first Apple-designed processor, the M1 chip. Moreover, according to Check Point Research, adware is the most widespread type of malware found on mobile devices. Hiddad, short for "Hidden Ad", was the most common family of adware used throughout 2020. Hiddad stays hidden from view and operates by displaying ads while collecting system information. (Source: Check Point Research Cyber Security Report 2021)

Typically, you won't notice adware programs installed on your computer unless they're browser or toolbar extensions. Thankfully, you can detect and remove adware easily using a strong adware removal solution, such as Bitdefender or Kaspersky.
In any case, the following are some telltale signs that your browser has been hijacked by adware:

- You get bombarded with ads, including pop-up ads, full-page ads, banner ads, desktop notification ads, etc.
- New toolbars, extensions, and/or plugins have been installed in your browser — any browser can be a target.
- Your web browser's homepage has changed without your knowledge or consent.
- Your default search engine has changed.
- Some of the websites you often visit aren't appearing as they should.
- When you click on a link on a website, you get redirected to an unrelated website.
- Your browsing load time is significantly slower.
- Your browser frequently crashes.
- You can't close pop-ups; new ones constantly appear.
- Pop-up ads contain bogus virus detection solutions or other suspicious messages.

Mac computers are not bulletproof when it comes to malware. While it's true that macOS has stronger protection features than Windows, Macs are not entirely immune to malware, including adware. As such, an adware removal solution for Mac is highly recommended. In fact, adware is the most dominant form of malware detected on macOS platforms, according to Malwarebytes. This is partly due to Apple's ever-growing user base. Another reason for this predominance is that adware is much easier to propagate than other forms of malware. NewTab was the main culprit when it came to adware detections on Mac in 2019. NewTab is a form of malicious adware that redirects web searches to provide cybercriminals with illicit affiliate revenue. Mac users unknowingly install it through browser extensions that look like standard, legitimate applications.

Whether you're a Mac user or a Windows user, be on the lookout for the following adware warning signs: online ad spamming, browser hijacking, slow computer response time, and frequent crashing. Oh, and think twice before installing a browser extension!
As previously noted, the security research company Check Point determined that adware is the most common type of mobile malware. Adware can infect your Android devices in several ways; one method is by exploiting a vulnerability in your web browser. Hackers exploit this vulnerable area as quickly as possible, before a developer has the chance to apply a security patch. If your browser becomes infected, then you'll be bombarded with a never-ending string of intrusive pop-up ads in no time.

Adware can even make its way into your devices via official app markets like Google Play and the App Store. For example, the Agent Smith malware imitates popular apps and is responsible for infecting millions of devices with fraudulent ads for financial gain. Another common way that adware infiltrates Android devices is via apps from unofficial app stores. Corrupt apps may contain adware that illicitly gains ad revenue or leads to more sinister attacks through ransomware or other forms of malware. As far as adware removal on Android is concerned, we recommend adware scanner and removal apps like Bitdefender or McAfee.

Luckily, you can avoid becoming a victim of adware by following a few simple rules:

Update your software. It's especially important to regularly update your operating system, browser, anti-malware program, and other software to prevent an adware infection. Software patches often strengthen existing vulnerable areas and are a great way to protect against cyber threats.

Don't be so quick to download and install new programs or apps — especially freeware. If you don't need it, don't install it. Read online reviews to see what other users are saying.

Don't click on pop-ups, and use a pop-up blocker. These ads sometimes contain malicious adware or other threats. Consider using Google Chrome; it blocks pop-ups by default. To review this setting, go to Settings > Security > Site Settings > Pop-ups and redirects.
Every major browser should have a built-in option to block pop-ups.

Read the terms and conditions before installing new software. If you don't have time for that (understandably), try skimming the text while keeping an eye out for the names of any additional software that will be installed.

Avoid illegal downloads from torrent websites and illicit movie streaming sites. Sure, free games and films may seem attractive; however, torrenting sites are often laced with malware. Illegal streaming sites aren't much better, often containing ads or links that lead to malware infestations.

Never download or open files from unknown sources, such as emails or texts that look like phishing attempts. Be aware that phishing attempts often come from friends, family, and associates who have had their accounts hijacked. To spot a scam, look for red flags such as odd-looking URLs and email addresses, bad grammar, and urgent requests.

Never install apps onto a mobile device from unofficial sources. Although official app markets like Google Play and the App Store may contain corrupt apps that can distribute adware or other forms of malware, they are the safest places to download apps. For additional safety, don't download apps from any third-party Android stores or jailbreak your iPhone.

Use an ad blocker. Ad blockers work great when it comes to blocking most types of ads on the web. Better yet, these tools can even be used to block malicious ads and pop-ups. Still, ad blockers do not guarantee that all malicious advertisements will disappear.

Use a reputable antivirus solution, such as Norton or Kaspersky, to remove existing malware and warn you about potential adware and other threats.

Adware can be difficult to remove, but once you have a better understanding of what it is, you'll have a much better chance of getting rid of it once and for all.
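The ad blockers recommended above work by checking each requested URL's domain against a filter list before the browser loads the resource. A minimal sketch, using a hypothetical two-entry blocklist in place of real filter lists such as EasyList:

```python
from urllib.parse import urlparse

# Hypothetical blocklist; real ad blockers ship lists with tens of
# thousands of entries and far richer matching rules.
BLOCKED_DOMAINS = {"ads.example.com", "tracker.example.net"}

def is_blocked(url):
    """Return True if the URL's host is a blocked domain or a subdomain of one."""
    host = urlparse(url).hostname or ""
    return any(host == d or host.endswith("." + d)
               for d in BLOCKED_DOMAINS)

print(is_blocked("https://ads.example.com/banner.js"))     # True
print(is_blocked("https://cdn.ads.example.com/spot.gif"))  # True
print(is_blocked("https://news.example.org/article"))      # False
```

This also shows why blockers are not bulletproof, as noted above: an ad-serving domain that is not yet on the list sails straight through.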
If any adware or potentially unwanted programs (PUPs) have been installed on your computer, you can locate them via the Apps folder/window. PUPs and software containing adware can be uninstalled manually without specialized tools; however, overly malicious forms of adware cannot be removed in this fashion. Not to mention, you may not even know which program is the culprit. The easiest way to remove adware is to use an antivirus solution like Norton or McAfee. These antivirus programs scan your system for malicious software and remove all traces of any detected adware.

An adware virus is malicious software designed to bombard a user's web browser with ads and pop-ups. Keep in mind that adware itself is different from a virus. A virus is a piece of code that infects a computer and spreads to other devices, doing various levels of harm along the way. In contrast, adware latches onto a user's system, making money for its owner through ad revenue, data theft, and other malicious means. A reputable anti-adware program like Bitdefender can help you keep adware at bay on your computer or mobile device.

The impact that adware can have on your system ranges from mild annoyance to data breaches that can lead to substantial privacy and material loss. Thus, adware not only bombards you with ads, generating ad revenue for its owner, but can also inject malware into your device, leading to severe damage. Steer clear of adware and its related threats using a trusted anti-adware program.

The best way to remove adware is by using a cybersecurity suite. These adware removal tools can scan your computer for adware and related potentially unwanted programs (PUPs), removing them with your permission. You can detect adware by using an adware prevention and removal utility.
These cybersecurity software programs are equipped to detect and uncover hidden adware in the form of browser toolbar malware, scripts, or other potentially unwanted programs (PUPs).

Octav Fedor (Cybersecurity Editor)

Octav is a cybersecurity researcher and writer at AntivirusGuide. When he’s not publishing his honest opinions about security software online, he likes to learn about programming, watch astronomy documentaries, and participate in general knowledge competitions.
Shortly after the 9/11 terrorist attacks rocked the United States in 2001, the National Institute of Standards and Technology (NIST) began awarding grants to companies to support private-sector initiatives aimed at protecting America’s critical cyberinfrastructure. NIST is a nonregulatory arm of the U.S. Department of Commerce, and at that time, it was one of the first organizations working to help businesses (small and midsize businesses, in particular) find ways to protect themselves against cyberattacks. In May 2017, U.S. President Donald Trump issued an executive order requiring agency heads to incorporate the NIST Cybersecurity Framework, also known as the CSF. However, Kevin Stine, chief of the applied cybersecurity division at NIST, has since reiterated that the CSF is voluntary; it’s a tool that organizations can reference and a standard by which to judge an organization’s progress.

Some common framework misconceptions

Although NIST has been around since the early 20th century, it’s not exactly well-known outside of technology circles. Even in those circles, there exist some common misconceptions about what NIST actually is or does. CSOs, CISOs, and other security leaders often confuse the organization with the cybersecurity framework that it has developed. Although the framework has been more widely adopted in recent years, some business leaders are still under the impression that it applies only to government organizations. Fortunately, that’s not the case. Everyone can — and should — apply it to their businesses. Because NIST is a nonregulatory agency, it can’t certify an organization as compliant; what’s great about the framework is that it has proliferated, it’s easy to understand, and it has been thoughtfully developed.
Moreover, it has evolved with the times — the digital world now looks very different from the landscape of just two decades ago, and NIST’s framework provides security professionals with a common lexicon to help them manage risk in an always-changing business environment. In fact, it’s not necessarily required for companies to adopt every part of the framework to see positive results. According to a survey of IT and security professionals conducted by Dimensional Research, 64 per cent of respondents said they applied only parts of the framework. It’s possible to gain results by adopting only portions of the framework, and it can be worthwhile to break up the process for organizations that are just beginning the adoption journey.

Putting the framework to work

All of the above are reasons to feel good about relying on the framework when assessing your cybersecurity posture. However, don’t forget the following principles as you do:

1. Start with why.

Simon Sinek’s advice (“Start with why”) has become gospel among marketing and internal communications professionals, but it’s equally applicable to leaders of cybersecurity initiatives. The answer to the “why” provides the North Star you will need to efficiently invest in the solutions and people required to implement those portions of the NIST framework that make the most sense for your business. If you are a CSO, CISO, or another tech leader, you should always make decisions using a why-what-how approach. Without first articulating why you need particular tools or personnel, you’ll find it hard to get buy-in from other executives as you develop your security program. Even if you already have that buy-in or the freedom to make your own decisions, do not just go buy and hire. Build your plan around objectives rather than specific technologies, and you will have an easier time conceptualizing and communicating its importance.
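One way to make those objectives concrete is to sketch a simple framework profile: the five core functions, each with a current and a target Implementation Tier (the framework’s tiers run from Tier 1, “Partial”, to Tier 4, “Adaptive”). The tier values below are hypothetical, purely to illustrate a gap analysis:

```python
# Named tiers from the NIST CSF; the per-function numbers below are
# hypothetical examples, not a real assessment.
TIERS = {1: "Partial", 2: "Risk Informed", 3: "Repeatable", 4: "Adaptive"}

profile = {
    "identify": {"current": 2, "target": 3},
    "protect":  {"current": 1, "target": 3},
    "detect":   {"current": 2, "target": 4},
    "respond":  {"current": 1, "target": 2},
    "recover":  {"current": 1, "target": 2},
}

def biggest_gaps(profile: dict) -> list[str]:
    """Sort functions by the distance between target and current tier."""
    return sorted(profile,
                  key=lambda fn: profile[fn]["target"] - profile[fn]["current"],
                  reverse=True)

for fn in biggest_gaps(profile):
    cur, tgt = profile[fn]["current"], profile[fn]["target"]
    print(f"{fn:9s} {TIERS[cur]} -> {TIERS[tgt]} (gap {tgt - cur})")
```

Sorting by gap gives a rough ordering of where to invest first, which is exactly the kind of artifact that helps when communicating priorities to other executives.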
“Why” will also inform the Framework Implementation Tiers, which provide the context around how your organization views cybersecurity risk and its processes for managing that risk. Recall that tiers describe the degree to which your cybersecurity risk management practices exhibit the framework characteristics, ranging from “partial” in Tier 1 to “adaptive” in Tier 4. Lastly, understanding your “why” will also inform the development of the requisite framework profile(s) for your industry, company, or department as identified through your risk management processes. Profiles demonstrate the evaluation of mission versus the cybersecurity framework and the resulting priorities. These profiles are essential to ensuring your plans are consumable across roles, thus facilitating security investment discussions with C-suite executives and measuring progress against objectives.

2. It takes more than one party.

There’s no panacea when it comes to cybersecurity — and that’s true whether you’re a tiny startup or a massive Fortune 500 company. The NIST framework helps you understand and address your security needs in the context of a five-pillar system (identify, protect, detect, respond, recover), and it can be a great starting point for your organization. Even selective adoption can yield results, and the framework was designed so that you can tailor it to support your specific business objectives. That said, the shared responsibility model of PaaS and the nebulous nature of relationships between cloud entities can make visualizing your infrastructure — and thus your security posture — pretty tough. Simply put, the cloud comes with a lot of unknowns, and things can easily go wrong. Depending on the complexity of your infrastructure, it’s likely that you’ll need to partner with multiple third parties to ensure that you’ve covered all your bases.

3. Focus on people.
When thinking about NIST’s guidelines and other cybersecurity frameworks, it’s often easy to forget about the people who build and manage them, and whom they’re ultimately in place to protect. Regardless of which strategies and technologies you decide to implement, you’ll need the right people to help you ensure that they work effectively — that includes everyone from the C-suite to the newest entry-level employee. Just as it’s important to communicate your why, it’s also important to humanize your governance plan as much as possible. Remember, people will be spending valuable time and energy working toward it. Without articulating how your plan will directly impact the individuals on your team, you may find it hard to keep them motivated. Similarly, be thoughtful about assigning ownership over key tasks. When you make two people responsible for the same function, it’s easy for each to assume that the other is handling it. Unfortunately, scenarios like that are often at the heart of costly data breaches.

Editor's Note: Since publishing the article, we've been contacted by NIST, saying that while Kevin Stine is talking about private industry in the second paragraph, the Cybersecurity Framework is *not voluntary* for federal agencies.

Pete Thurston, chief product and solutions officer, RevCult
Blogs - 9 Aug 2022

What is Phishing?

Phishing is an attack whereby the attacker impersonates a reputable entity or person in email or other forms of communication, such as SMS or instant messaging. Most commonly, attackers use phishing emails to distribute malicious links or attachments that can perform a variety of malicious functions. A phishing attack can have devastating results. For individuals, this includes unauthorised purchases, electronic theft of money, or identity theft. Phishing attacks can also be used to gain a foothold into an organisation’s network as part of a larger attack, such as ransomware or Business Email Compromise. This happens when employees are compromised in order to bypass security controls and distribute malware or fraudulent messaging inside the victim organisation. A successful attack on an organisation can have severe implications, such as financial losses and extended outages, in addition to a reduction of market share, damaged reputation, and loss of customer trust.

Types Of Phishing Attacks

Email Phishing Scams

In the most common version of email-based phishing, the attacker sends out thousands of fraudulent messages with the intent of gathering personal information, account credentials, or financial gain. This type of attack is very much a numbers game: if even 1% of several thousand recipients fall for the scam, the attack can be considered successful. As with legitimate marketing campaigns, fraudsters will take the time and effort to maximise their success rates by trialling different messaging and tactics and studying the results. They will clone emails from a spoofed organisation by using the same phrasing, typefaces, logos, and signatures to make the messages appear legitimate. Additionally, attackers will commonly try to push users into action by creating a sense of urgency.
For example, an email could threaten account expiration and place the recipient on a deadline. By applying a time-sensitive cue, users are more likely to act sooner rather than later, without much thought. These scams can be hard to spot, typically relying on a misspelt website address or an extra subdomain; for example, www.commbank.com.au/login could become www.combank.com.au/login. The similarities between the two website addresses give the impression of a legitimate link, making it more difficult to discover an attack is taking place.

Spear Phishing

Spear phishing is a more precisely focused attack that targets a specific person or organisation, as opposed to the thousands of people described above. It’s a more specific type of phishing that often incorporates special knowledge about an organisation, such as its staff members’ names and titles, organisational structure and clients. A common spear phishing attack scenario is where the attackers research the names of employees within an organisation’s marketing department in order to gain access to the latest project invoices. Posing as a marketing director, the attacker emails a departmental project manager (PM) using a subject line that reads something like: “Updated invoice for Q3 campaigns”. This email will be a clone of the organisation’s standard email template. A link in the email redirects to a password-protected internal document, which is simply a spoofed version of a stolen invoice. The PM is requested to log in to view the document. The attacker steals the login credentials, gaining full access to sensitive areas within the organisation’s network. By providing an attacker with valid login credentials, spear phishing is an effective method for executing the first stage of further attacks, such as ransomware or Business Email Compromise.

How To Prevent Phishing

To protect against phishing attacks, some steps should be taken by both employees and enterprises. For employees, simple vigilance is vital.
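The look-alike domain trick described above (commbank.com.au versus combank.com.au) can also be caught mechanically with an edit-distance check. The sketch below uses a classic Levenshtein distance; the known-good domain list is an illustrative assumption:

```python
def edit_distance(a: str, b: str) -> int:
    """Classic dynamic-programming Levenshtein distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

# Illustrative list of brands an organisation might watch for.
KNOWN_GOOD = ["commbank.com.au", "amazon.com"]

def looks_like_typosquat(domain: str) -> bool:
    # Within a couple of edits of a known brand, but not an exact match.
    return any(0 < edit_distance(domain, good) <= 2 for good in KNOWN_GOOD)

print(looks_like_typosquat("combank.com.au"))  # missing one 'm'
```

Real mail gateways use far more signals than this, but even a tiny check like the one above flags both the commbank example and classic look-alikes such as “arnazon.com”.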
A spoofed message will almost always contain subtle differences that expose its fraudulent purpose. These frequently include spelling errors, such as misspelt website names. Users should also stop and think about why they’re even receiving the email and whether it seems unusual or out of character for the alleged sender. At an enterprise level, a number of steps can be taken to mitigate both phishing and spear phishing attacks:

- Two-factor authentication (2FA) is the most effective method for countering phishing attacks, as it adds an extra verification layer when logging in to applications. 2FA relies on users having two things: something they know, such as a password and username, and something they have, such as a mobile phone running an authentication app.
- Organisations should enforce a strict password management policy that takes into account how people actually behave. For example, staff should be required to use passwords that are difficult for an attacker to guess but not so complex they can’t be remembered. Passphrases are often a better strategy than complex passwords. Password managers combine convenience and strong passwords, and their use should be encouraged. Staff should be educated not to reuse the same password for multiple accounts, as this makes password spraying attacks much easier.
- Empowering employees through engaging and informative cyber security awareness training will help reduce the threat of most cyber security attacks, including phishing.
- Enable SPF and DMARC to make it more difficult for attackers to send email faking an organisation’s identity.

Early Warning SMS

Early warning notifications assist in managing critical security threats to your network. AusCERT monitors malicious activity online, and the Early Warning Service provides SMS notifications of any immediate and serious threats relevant to your industry. To find out more about this service, click here.
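The SPF and DMARC controls recommended above are published as DNS TXT records. The records below are purely illustrative, for a hypothetical example.com domain and mail host; real values depend on your mail provider and on how strict a policy you can enforce:

```text
; Hypothetical SPF record: only the domain's MX hosts and the listed
; provider may send mail as example.com; everything else hard-fails.
example.com.        IN TXT "v=spf1 mx include:_spf.mailhost.example -all"

; Hypothetical DMARC record: quarantine failures and send aggregate
; reports to the listed mailbox.
_dmarc.example.com. IN TXT "v=DMARC1; p=quarantine; rua=mailto:dmarc-reports@example.com"
```

A common rollout path is to start with a monitoring-only DMARC policy (p=none), review the aggregate reports, and tighten to quarantine or reject once legitimate senders are accounted for.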
Cybersecurity firm BlueVoyant published a report on August 27, 2020, finding that State and Local Governments have seen a 50% increase in cyberattacks since 2017. The report outlined the cyberattacks as either targeted intrusions, fraud, or damage caused by hackers. BlueVoyant noted that the 50% increase in attacks is likely a fraction of the true number of incidents because many go unreported. The main weakness with State and Local Governments is the general lack of a basic security program to educate and govern users, combined with a lack of key technology protections for their networks and endpoints. Additionally, government entities are purchasing cyber insurance as standard operating procedure. Hackers recognize this and target them knowing that cyber insurance will pay out a ransomware demand. The study validated BlueVoyant’s position that active threat targeting happens across the board: “For every selected county’s online footprint, evidence showed some sign of intentional targeting.” What’s more, five counties — or 17% of the 28 studied — showed signs of potential compromise, indicating that traffic from government assets was reaching out to malicious networks.

“There’s a collective risk here because there is no standardization [around security controls]. You have certain state and local [governments] that are on dot-coms and dot-us or dot-orgs. One would think that these should be on the dot-gov domain because [that] means that you not only check the box as being a certified government site, but you get forced two-factor authentication and you’re always going to have HTTPS.”

The main method used to attack these agencies is ransomware. Ransomware has grown exponentially in recent years, with government entities being attacked weekly. What’s also concerning is the increase in hackers’ extortion demands. Three years ago, the average ransomware demand was $30,000. In 2020, it grew to nearly half a million dollars.
Even when municipalities don’t pay, the breach recovery costs can be enormous. The City of Baltimore spent more than $18 million on damages and remediation in a 2019 ransomware attack. The risk with small governments is similar to the risk with SMBs: they assume they are not at risk due to the size of their organization. What all these entities don’t realize is that hackers target them because they lack proper cybersecurity programs. The other primary attack vector used by hackers on government employees is phishing. Phishing is a form of social engineering to deceive individuals into doing the hacker’s bidding. Hackers want users to click on malicious links in email, which downloads malware granting hackers system access. The report notes that typosquatting, a strategy used in phishing attacks, was the main reason users were being tricked. Typosquatting uses look-alike domains to fool users into clicking on links. Users land on identically formatted websites that steal their login credentials for the hackers to use. An example is “arnazon.com” instead of “amazon.com”. A hacker can then use those stolen Amazon credentials to order merchandise delivered to their PO box.

2020 Election Risks

The upcoming 2020 election opens up the opportunity for hackers to cause more trouble. This puts cybersecurity into the spotlight as the last line of defense against election tampering. Governments need to prepare and develop a strong cybersecurity program ahead of these elections. CyberHoot has a simple and effective set of recommendations for State and Local Governments to protect themselves.

State & Local Government Recommendations

According to Austin Berglas, Head of Ransomware/Incident Response at BlueVoyant, “State and local governments can take three immediate steps to improve their security postures”:

- Implement strong passwords.
  - Use unique 14+ character passwords/passphrases stored in a Password Manager.
- Two-Factor Authentication
  - Something you know (password), something you have (cell phone), something you are (fingerprint, face ID). Choose and use two of these to authenticate.
- Review and strengthen remote access
  - Ensure remote access ports automatically close after use
  - Enable Two-Factor Authentication on all remote access

Ransomware & Phishing Protection

CyberHoot also recommends the following additional actions to reduce the likelihood of falling victim to a ransomware or phishing attack:

- Educate employees through an awareness training tool like CyberHoot
- Phish test employees to keep them on their toes
- Follow the 3-2-1 backup method for securing all your critical and sensitive data
- Govern employees with cybersecurity policies
- Purchase a Password Manager and train your employees on how to use it
- Follow proper Internet etiquette and protect others from phishing attacks that spoof your domain name by setting up SPF, DKIM and DMARC records

No matter what sort of attack vector hackers are using, following these recommendations is a great starting point in building a strong defense-in-depth cybersecurity program.
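The 14+ character passphrase guidance above can be sketched in a few lines. This is a minimal, illustrative generator; the word list is a tiny sample, and a real tool would draw from a large dictionary such as the EFF diceware list:

```python
import secrets

# Tiny illustrative word list; a real generator would use thousands of words.
WORDS = ["correct", "horse", "battery", "staple", "harbour", "granite",
         "velvet", "maple", "cobalt", "lantern", "meadow", "quartz"]

def make_passphrase(n_words: int = 4, sep: str = "-") -> str:
    """Join randomly chosen words with a separator, using a CSPRNG."""
    return sep.join(secrets.choice(WORDS) for _ in range(n_words))

print(make_passphrase())
```

Four random words joined with separators easily clears the 14-character bar while staying far more memorable than an arbitrary string of symbols; `secrets` (rather than `random`) is used because passphrase generation is a security-sensitive task.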
In a disrupted world, cloud computing’s role has never been more important than it is now. The impact of the coronavirus has been deadly. While factories, shops, malls and offices have closed down, organizations are trying to rework their businesses to survive and stay operational in a virtual world. The world has been forced to run one of the biggest ‘work from home’ experiments. In this experiment, the cloud has lived up to its claim of being scalable and resilient. The sudden shift of millions of people to a remote work infrastructure has been made possible only by the cloud. The cloud is powering a host of services that have kept people productive, from cloud-based collaboration tools (Zoom, Slack, Microsoft Teams, WebEx) to cloud-based telephony tools. Due to the restrictions on physical meetings, online education has been favored heavily. UNESCO estimates that more than 1.5 billion learners, or 90% of the world’s student population, are confined to their homes. Many countries and states have hence started online classes, with some countries, like South Korea, switching to online education for their students. The cloud is the foundation powering most of these initiatives. For call centers, a cloud-based call center solution is proving critical for supporting customer queries and maintaining business continuity.

The might of the cloud

Besides powering a gamut of online services, the cloud has become one of the biggest forces today. Almost all the leading cloud players have committed their resources to lead the fight against Covid-19. Google, for instance, has launched a rapid response virtual agent system, which allows organizations to quickly set up virtual agents. Google has also made it easier for organizations to add Covid-19 content that enables the virtual agent to take advantage of content from organizations that have already launched similar initiatives.
Organizations can create chat or voice bots that answer questions about Covid-19 symptoms. To help data scientists in their effort to combat Covid-19, Google has also made available a hosted repository of public datasets, free to access and query, through its Covid-19 public dataset program. Researchers can also use BigQuery ML to train advanced machine learning models. Similarly, Microsoft, in partnership with other firms, has created the Covid-19 Open Research Dataset. This is a collection of more than 29,000 scientific articles about the coronavirus group of viruses for use by the worldwide research community. Salesforce, which recently launched its Salesforce Care solution for healthcare systems, designed specifically for healthcare providers experiencing an influx of requests due to Covid-19, has announced additional free solutions to help companies in any industry stay connected to their stakeholders. With the cloud, telemedicine or virtual healthcare is an option that has been explored by many patients and service providers.

A natural partner for disaster recovery

The cloud is a natural partner for disaster recovery and has been tested in times of natural and other climatic disasters. In the current scenario, the cloud’s emphasis on reliability and scalability, along with its time-tested distribution zones across continents and regions, offers companies assured and quick access to critical IT assets from anywhere. A DR-in-the-cloud approach also spares firms from having to send personnel into their data centers to check or maintain IT infrastructure. To meet growing remote-workforce demands, companies can also use the inherent capabilities of the cloud to scale quickly. Going forward, the cloud’s importance is only going to increase, as more and more companies look at how they can quickly migrate their applications from on-premise locations to the cloud.
Today, more than at any other time, the cloud’s role as a must-have technology has only grown in stature.

About the author: Nitin Mishra heads the product management and solutions engineering functions at Netmagic Solutions. During his nine years with the company, he has been responsible for conceptualizing and packaging hosting and managed services focused on IT infrastructure requirements of Internet and Enterprise applications.
An international research team led by the University of British Columbia (UBC) has uncovered for the first time the importance of a small gland tucked behind the sternum that works to prevent miscarriage and diabetes in pregnant women. The organ in question is the thymus, identified in a study published today in the journal Nature as playing a significant role in both metabolic control and immunity in pregnancy. How the immune system adapts to support mother and fetus has puzzled researchers for decades. The study – conducted by an international research team, including UBC’s Dr. Josef Penninger – reveals an answer. The researchers have found that female sex hormones instruct important changes in the thymus, a central organ of the immune system, to produce specialized cells called Tregs to deal with physiological changes that arise in pregnancy. The researchers also identified RANK, a receptor expressed in a part of the thymus called the epithelium, as the key molecule behind this mechanism. “We knew RANK was expressed in the thymus, but its role in pregnancy was unknown,” says the study’s senior author Dr. Penninger, professor in the department of medical genetics and director of the Life Sciences Institute at UBC. To get a better understanding, the authors studied mice where RANK had been deleted from the thymus. “The absence of RANK prevented the production of Tregs in the thymus during pregnancy. That resulted in less Tregs in the placentas, leading to elevated rates of miscarriage,” says the study’s lead author Dr. Magdalena Paolino, assistant professor in the department of medicine at the Karolinska Institutet. The findings also offer new molecular insights into the development of diabetes during pregnancy, known as gestational diabetes, a disease that affects approximately 15 percent of women in pregnancy worldwide, and about which scientists still know little. 
In healthy pregnancies, the researchers found that Tregs migrated to the mother’s fat tissue to prevent inflammation and help control glucose levels in the body. Pregnant mice lacking RANK had high levels of glucose and insulin in their blood and many other indicators of gestational diabetes, including larger-than-average young. “Similar to babies of women with diabetes in pregnancy, the newborn pups were much heavier than average,” says Dr. Paolino. The deficiency of Tregs during pregnancy also resulted in long-lasting, transgenerational effects on the offspring. The pups remained prone to diabetes and overweight throughout their life spans. Giving the RANK-deficient mice thymus-derived Tregs isolated from normal pregnancies reversed all their health issues, including miscarriage and maternal glucose levels, and also normalized the body weights of the pups. The researchers also analyzed women with diabetes in pregnancy, revealing a reduced number of Tregs in their placentas, similar to the study on mice. “The discovery of this new mechanism underlying gestational diabetes potentially offers new therapeutic targets for mother and fetus in the future,” says co-author Dr. Alexandra Kautzky-Willer, a clinician-researcher based at the Medical University of Vienna. “The thymus changes massively during pregnancy and how such rewiring of an entire tissue contributes to a healthy pregnancy has been one of the remaining mysteries of immunology,” adds Dr. Penninger. “Our work over many years has now not only solved this puzzle – pregnancy hormones rewire the thymus via RANK – but uncovered a new paradigm for its function: the thymus not only changes the immune system of the mother so it does not reject the fetus, but the thymus also controls metabolic health of the mother. “This research changes our view of the thymus as an active and dynamic organ required to safeguard pregnancies,” says Dr. Penninger. 
Structure and Function

The thymus is a superior mediastinal, retrosternal organ. It is bilobed, with two subcomponents, the cortex and the medulla, and is made up of epithelial, dendritic, mesenchymal and endothelial cells. The thymus is one of the few organs that has already reached maturity in utero, and it involutes as people age. Involution of the thymus involves changes in its architecture: it loses its organized structure and is replaced by adipose tissue as it becomes functionally less active. Several studies since the 1960s demonstrate that the thymus is necessary for life. Mice that received a thymectomy had an immunodeficiency with a decreased number of lymphocytes. The thymus is the organ primarily responsible for the production and maturation of immune cells, including the small lymphocytes that protect the body against foreign antigens. The thymus is the source of cells that will live in the lymphoid tissues and supports their maturation and proper function. The thymus is where T-cells are exposed to self-antigen: thymocytes that fail positive selection, or that bind self-antigen with high affinity during negative selection, are destined for apoptosis, and about 95% of all created T-cells are eliminated this way. Only lymphocytes that pass both positive and negative selection are allowed to travel out of the thymus. These T-cells are activated by bacteria, viruses or other foreign antigens and then undergo mitosis. After the pathogen dies, most of the cells go through apoptosis, and the ones that do not continue as memory cells. These cells allow the immune system to respond more quickly and strongly the next time it interacts with the same antigen. Hassall’s bodies, cells unique to the thymus, are involved in maturing thymocytes and clearing apoptotic cells. They are a vital part of lymphopoiesis.
Originally derived from the ventral third pharyngeal pouch, the thymus grows from embryogenesis to 3 years of age and then involutes during puberty. During embryogenesis, the thymus migrates from the third pharyngeal pouch down into the superior mediastinum, posterior to the manubrium. The thymus is large in infants and young children and involutes over time, with thymic tissue replaced by fat by early adulthood. Thymic involution is suggested to be caused by the increased levels of androgens present in the bloodstream during puberty.

Blood Supply and Lymphatics

The thymus’ blood supply is complicated and varies widely. Most often, the blood is supplied by the inferior thyroid, internal thoracic, pericardiacophrenic or anterior intercostal arteries. Rarely, the thymus can obtain blood from the middle thyroid artery. Laterally, the thymus receives blood from branches of the internal mammary artery. These branches, named the lateral thymic arteries, vary in number and are asymmetric. Posterior thymic arteries derive from the brachiocephalic artery and the aorta; however, they are rare. Accessory thymic arteries are diverse but have been documented to originate from the thyrocervical trunk, subclavian or superior thyroid arteries. Venous drainage variation is common, but most often the thymus is drained by tributaries of the left brachiocephalic and internal thoracic veins. Thymic venous supply runs in the interlobular septa, into the thymic capsule, and leaves the cortex via a plexus on the posterior side of the organ. These veins then join together and drain each lobe separately. Sympathetic innervation of the thymus originates from the superior cervical and stellate ganglia. These fibers travel in a plexus along large blood vessels until they enter the thymic capsule. Parasympathetic fibers arise from the vagus, recurrent laryngeal and phrenic nerves.
Several rodent studies have found that thymocytes respond to stimuli via norepinephrine, dopamine, acetylcholine, neuropeptide Y, vasoactive intestinal peptide, calcitonin gene-related peptide, and substance P. The thymus sits in the mediastinum just posterior to the manubrium. The muscles that depress the hyoid bone and attach to the sternum, the bilateral sternohyoid and sternothyroid, lie near the thymus. The thyrohyoid and sternocleidomastoid muscles are close to any ectopic thymic tissue or any superior extensions of the thymus. The thymus also lies anterior to the cardiac muscle and pericardium. Variants in the number of lobes, size, and location of the thymus are common. The most common anatomic variant is an extension reaching up to the thyroid gland. During descent, thymic tissue may implant along the way; this is defined as an ectopic thymus. Fifty percent of people have ectopic thymic tissue. This variant is typically located in the anterior cervical region, deep to the sternocleidomastoid muscle and anterior to the carotid sheath, and can expand into the retropharyngeal space. Half of these masses connect to the thymus in the mediastinum. The thymus poses difficulty for surgeons due to its high variation in size and arterial supply. Imaging of the gland is also difficult and rarely provides surgeons much insight. On standard chest radiographs, the thymus is rarely discernible, as it gets lost in the cardiac silhouette. The gland has smooth borders and is more visible in x-rays of infants and young children. Ultrasound is mostly used to assess the thymic parenchyma, and CT is most helpful to assess the location, size, and shape of the gland and its relationship to other structures. An ectopic thymus can be confused for lymphadenopathy or a tumor. Since the difference is difficult to discern clinically, their benign nature is most often confirmed after resection.
Another complication of ectopic thymic tissue is that it can compress nearby structures, causing swelling, decreased blood flow, discomfort, and impaired thyroid function. Resection of these masses poses surgical difficulty, as many adhere to the carotid sheath and lie in close proximity to vital pharyngeal muscles and the phrenic nerve. Insulin is known to play an essential role in thymic growth. Insulin, growth hormone, and insulin-like growth factor increase the development of lymphocytes, and insulin can be found in the medulla of the thymus. Type 1 diabetes adversely affects the thymus; these patients will have a weakened immune system in addition to their diabetes and other related complications. Supplementation with insulin can be protective of their thymic function and preserve their immune system maturation. Thymus hyperactivity secondary to hyperplasia of the organ is common in myasthenia gravis; however, thymic tumors, lymphomas, systemic lupus, or hyperthyroidism can also cause this clinical finding. Patients with thymus hyperactivity may have pallor, lymphadenopathy, rhinorrhea, and tonsillitis. Treatment of this condition includes vitamins A and D, calcium, and iodine; thalassotherapy, thymus radiotherapy, and heliotherapy may also be used. Myasthenia gravis (MG) is an autoimmune pathology resulting in muscle weakness, in which antibodies, whose production is linked to the thymus, interrupt the signaling of acetylcholine at the motor end plate. Patients' muscle strength worsens with continued contractions, a clinical finding that differentiates MG from Lambert-Eaton syndrome. Thymic hypertrophy in MG is so common that it is considered a diagnostic criterion, along with antibodies in the blood against the acetylcholine receptor and anti-muscarinic antibodies. Physicians may use an MRI or CT scan to evaluate the size of the thymus. MG is treated depending on the severity of the disease.
Treatments range from immunosuppression and corticosteroids to surgical thymectomy. In some patients, only pyridostigmine bromide, a medication that slows the breakdown of acetylcholine, is necessary for symptom control. In patients with this condition, it is essential to keep in mind the potential for drugs to exacerbate the disease when treating acute illness or comorbidities. In contrast, atrophy or absence of the thymus is present in several congenital conditions. In DiGeorge syndrome, the thymus fails to form in utero; this agenesis results in an immature immune system and recurrent infections. Severe combined immunodeficiency (SCID) is a genetic disorder in which the thymus disappears early in childhood and the patient lacks T and B cells. These children are also at significant risk for severe, recurrent infections. As humans age and their thymus regresses, they have an increased susceptibility to disease. This decrease in thymus size and function leads to fewer circulating T cells and alteration of their role. This change in function can increase autoimmune diseases, bacterial and viral infections, and neoplasms. Restoring thymic function, or intervening before involution, could maintain the immune system throughout adult life. The thymus is a current area of research with great promise. One study from Scottish researchers found that they were able to regrow an adult mouse thymus from stem cells; this new organ began to produce T cells. Duke University in Durham, North Carolina, has successfully performed thymus transplants on children with DiGeorge syndrome. Thymus research promises medical breakthroughs for many diseases and the possibility of revamping adult immune systems.
A new pilot outreach program is using the high penetration of phones in the country to reach enslaved people. Free the Slaves recently launched in India. It is still in its pilot phase, but it is already connecting with people throughout the country. It builds on the knowledge that smartphones and mobile technology have reached tremendous penetration among Indian families to spread basic labor rights information. The goal is to reach out to enslaved people in India and give them hope through readily available tech. The Free the Slaves (FTS) outreach program was pilot tested with Kaarak Enterprise Development Services. It is meant to educate and increase hope in at-risk rural communities. The goal is to connect with villagers who have a heightened vulnerability to debt bondage slavery and human trafficking. As there is typically at least one phone per family, mobile technology has become the natural vehicle for communicating with these people. The program pilot comprised four messages written in Bhojpuri. That local language was selected as it is the most common among the people of Uttar Pradesh state. Over a span of 28 days, people in 192 communities were called and sent these messages over mobile technology. The Free the Slaves messages shared information and awareness about bonded labor slavery and labor rights. One of the messages said: “You must be paid as much as you deserve and you should be able to understand how payment works.” Another said: “Since the Bonded Labor Act, it is illegal to force someone to work as a slave because of their caste, under threat of violence or without pay.” The villagers receiving the messages also learned about government rehabilitation and relief programs available to them. The messages underscored the importance of vigilance among community members. They also promoted the FTS program itself, as well as the MSEMVS organization, which works with communities to provide slavery resistance.
It also offers support for slavery and sex trafficking survivors. The Free the Slaves messages concluded with a caution about the risks associated with migrating for employment. They provided tips for avoiding traffickers in the first place. The reception to these messages was highly positive and community members welcomed them. Follow-up efforts with focus groups showed that 92 percent of community members learned something new and found the information very helpful to them. Another 79 percent felt the information was applicable to their own situations. Many of the people did not know that bonded labor was illegal in India until they heard the messages.
How many times have we all heard the following recommendations repeated by “experts” in school, at work, or on television?

- Use complicated passwords that contain both uppercase and lowercase letters, as well as numbers and special characters.
- Change all of your passwords often.
- Use a unique password on every website.
- Be especially vigilant in protecting your most sensitive passwords — which are those to any online banking systems that you use.

For most of us, the answer is that we have heard the aforementioned advice so often that, by now, we have lost count of how many times we have heard it. Yet, it is wrong. Here is why:

1. The human mind cannot remember many complex passwords, and, as such, using complex passwords leads to security risks.

Using long, complex passwords on a small number of sensitive sites might be a good idea, but employing such a scheme for any significant number of passwords is likely to lead to potentially serious problems: people inappropriately reusing passwords, writing down passwords in insecure locations, and selecting passwords with poor randomization, formatted using predictable patterns (e.g., following the common practice of using a capital for the first letter of a complicated password, followed by all lowercase characters, and then a number) — any of which can obviously undermine security. A better approach than telling people to always use complex passwords is to accept the reality that human minds are limited, and, therefore, to advise folks to classify the systems to which they need to secure access. The government does not protect its unclassified systems the same way that it secures its top-secret information and infrastructure, and neither should you. Informally classify the systems that you access, and establish your own informal password policies accordingly.
On the basis of risk levels, feel free to employ different password strategies: random passwords, passwords composed of multiple words possibly separated with numbers, passphrases (long passwords of 25 or more characters — sometimes full sentences), and even simple passwords each have their appropriate uses. Of course, multifactor authentication can, and should, help augment security when it is both appropriate and available. According to The Wall Street Journal, Bill Burr, the author of NIST Special Publication 800-63 Appendix A (which discusses password complexity requirements), recently admitted that password complexity has failed in practice, and that passphrases (and not complex passwords) should ideally be used for authentication.

2. Using the same password for multiple accounts is sometimes preferable to alternatives.

While it is true that passwords to sensitive sites should not be reused on other sites, it is perfectly acceptable to reuse passwords on sites where the security is of no concern to the user; for many people, such “unimportant password” sites make up a significant percentage of the sites for which they have passwords. There is no reason to use a strong password, for example, on sites that require users to establish “accounts” solely in order to track users for marketing purposes; one might even argue that there is also no reason to use a strong password on sites that use accounts solely to ensure that comments posted to the site are attributable to their authors. Often the information that users provide to these sites includes no more than a (real or fake) name, email address, and password. Especially if one uses a separate email address for these types of purposes, is it truly of concern to him or her if a criminal who breached one such account gained access to the others?
(While such information could be leveraged for social-engineering-type attacks, that information likely can already be garnered from social media sites, publicly available online databases, etc.) Instead of creating a plethora of new passwords, it may be wise, once again, to accept human limitations; if using the same password or similar passwords on “no need to secure my information” sites allows a person to create and remember stronger passwords for use on sites on which his or her security truly matters, doing so may be significantly preferable to the often-repeated non-reuse approach.

3. Your email and social media passwords may be significantly more sensitive than your online banking password.

People tend to believe that their online banking and other financial-system passwords are the most sensitive among their many passwords, but, in many cases, they may be incorrect. Because many online systems allow people to reset their passwords after validating users’ identities through email messages sent to the users’ previously known email addresses, a criminal who gains access to someone’s email account may be able to do a lot more than just read email without authorization: he or she may be able to reset that user’s passwords to many systems, including to some financial institutions. Likewise, social-media-based authentication capabilities — especially those provided by Facebook and Twitter — are used by many sites, so a compromised password on either social media platform could lead to unauthorized parties gaining access to multiple systems. So use strong passwords on these sites, and, of course, turn on multifactor authentication on social media platforms when available.

4. People need to provide passwords over the phone, so telling them never to do so is not an effective way to protect them.
On its website, the United States Federal Trade Commission (FTC) recommends that people never provide passwords over the phone. Of course, such advice would make sense if legitimate businesses never asked people to authenticate themselves by providing their passwords over the phone, but some businesses do request passwords in such a fashion on a regular basis. Better advice might be not that people should never provide a password over the phone, but that they should provide sensitive or secret information over the phone only if they initiated contact with the party requesting it. It is far less risky, for example, to provide an account’s phone-access password to a representative if one calls his or her bank using the number printed on the back of his or her ATM card, than if someone calls him or her purporting to be a bank representative and demands the same private information.

5. Changing passwords too often may harm security instead of improving it.

On its website, the American Association of Retired Persons (AARP), which focuses on enhancing the quality of life for people as they age, recommends that folks change their critical passwords frequently, as often as every two weeks. Consider how many “critical” passwords people living in 2018 likely have. A huge number of folks, for example, have passwords to access their personal email, social media accounts, bank accounts, credit card accounts, mobile device accounts, Google or Apple accounts, work computer, work email, etc., all of which can be classified as “critical.” Even with just five such accounts — and most people alive today likely have significantly more than that number — changing passwords every two weeks would necessitate that a person learn a staggering 130 new passwords every year! It is not difficult to imagine that such a scenario will likely lead to passwords being reused, modified only in part (e.g., the password following josephsteinberg1 becomes josephsteinberg2), or written down.
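The arithmetic behind that "staggering" figure is easy to check; here is a quick sketch (the five-account count is the article's deliberately low example):

```python
critical_accounts = 5            # a deliberately low estimate of "critical" passwords
changes_per_year = 365 // 14     # "every two weeks" works out to 26 changes a year
new_passwords = critical_accounts * changes_per_year
print(new_passwords)  # 130 new passwords to memorize each year
```

With a more realistic count of ten or fifteen critical accounts, the number climbs well past 250, which makes the memory burden even clearer.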
Of course, following the AARP’s advice might also lead to people getting locked out of accounts after multiple failed password attempts during which they enter expired passwords — the frustration of which may also ultimately cause them to further undermine security with weaker, and more frequently reused, passwords. Obviously, passwords should be changed if they have truly been put at risk by a breach or the like, but, otherwise, changing passwords frequently may actually compromise their efficacy as vehicles for authentication.

6. Do not “password panic” after reported breaches — and ignore the “experts” who cry wolf.

It seems like whenever there is a major data breach reported in the news, “experts” quoted all over the media advise people to change all of their passwords. This response to the news of a breach almost seems like a biological reflex — little thought is given, or analysis performed, before a chorus of voices chimes in with the usual recommendations. But, unless there is a true need, changing many passwords at one time is likely to create security problems similar to (or even worse than) scenarios in which passwords are frequently changed: when people create many new passwords at one time, they face serious limitations of human memory and are more likely than otherwise to write passwords down (bad idea), store them in a computer (which, unless they are properly encrypted and the device secured, is also a bad idea), or use passwords identical or similar to one another on multiple sensitive sites (bad idea).
Also, as I explained several years ago after the Heartbleed bug — when I suggested that people ignore the advice of “experts” who were recommending that everyone change his or her passwords en masse — if a vulnerability that allows systems to be compromised is publicized, it is important not to change passwords on systems that may still be vulnerable. Once criminals know that there is a serious, widespread vulnerability, they are certainly going to attempt to detect and exploit it. So, while evildoers may not have actually exploited the vulnerability on any particular system on which you plan to change your password — and your password may still be secure — if crooks breach the system after the vulnerability is publicized, and you then change your password, they will likely obtain it. Consider that if criminals stole your old password by exploiting a particular vulnerability that still exists, they can easily steal your new one, and that if your old one was not stolen, changing it may lead to the new one being stolen. As such, changing your password can sometimes increase the risk of its being compromised rather than diminish it. Furthermore, creating a false sense of urgency without investigating the facts is irresponsible, and puts people at risk when there is a true password emergency. How seriously do you think the multitudes of people who have repeatedly ignored the warnings from the FTC, security “experts,” and the media about the need to change passwords after some particular data breach or set of breaches — and suffered no harm as a result of ignoring such warnings — will take a future warning issued at a time when it is actually necessary to change passwords? (Note: Password managers have their place, but the most sensitive of passwords should never be stored in a password manager — they should be committed to memory.
Remember, password managers, by their very nature, violate the basic information security principle of never storing passwords in an unhashed format, and, when used for many passwords, password managers effectively put “all of your eggs into a single basket.” Couple those issues with the fact that many password managers have, at times, themselves, been compromised, and the reason to avoid using such managers for highly sensitive passwords becomes obvious.)
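As an aside for developers, the "hashed format" principle mentioned above is what a well-designed service does on its own servers: it stores only a salted, slow hash, never the password itself. Here is a minimal sketch using only Python's standard library; the iteration count and parameters are illustrative, not a vetted production configuration:

```python
import hashlib
import hmac
import os

def hash_password(password, iterations=600_000):
    """Derive a salted, slow hash; only (salt, iterations, digest) is stored."""
    salt = os.urandom(16)  # unique random salt per password defeats precomputed tables
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, iterations, digest

def verify_password(password, salt, iterations, digest):
    """Re-derive the hash from the candidate password and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return hmac.compare_digest(candidate, digest)

record = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", *record))  # True
print(verify_password("Tr0ub4dor&3", *record))                   # False
```

A real deployment would more likely rely on a maintained password-hashing library (e.g., an Argon2 or bcrypt implementation) than on hand-rolled code, but the stored-data shape — salt, cost parameter, digest — is the same idea.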
Accidental house fires can ignite suddenly and spread uncontrollably. Capable of wiping out a home within minutes, they endanger everyone on the property and cause thousands of dollars worth of damage. There are potential fire starters in every house, and sometimes you might not realize the risk they pose until it’s too late. With house fires presenting such a significant hazard, you must be aware of the potential causes and how you can prevent one on your property. Take a look at the common ways house fires start in the US so you can identify the areas where your home is most vulnerable.

LEADING CAUSES OF HOUSE FIRES

The kitchen is the heart of the home and a place where many families spend time making memories. However, it is also the most common location for house fires to start. Between 2014 and 2018, US fire departments responded to an average of 172,900 home fires started by cooking. The kitchen provides many opportunities for fires to ignite, from unattended burners to dish towels getting too close to open flames. To prevent fires from starting in your kitchen, you must take precautions when cooking:

- Constantly keep your eyes on the stove, oven, and other appliances that pose a fire risk.
- Move flammable items away from the stove or open flames.
- Never leave the kitchen during high-heat activities, including boiling or frying. If you need to leave the room but there is no one else around to keep an eye on things, turn off the cooking appliances until you return.
- As soon as you are finished cooking, shut off the stove and other appliances.
- Don’t place anything flammable on a hot surface, for example, hot pads or dish towels. Instead, find a designated spot for them and practice placing them back after each use.
- Roll up baggy sleeves, and avoid wearing loose-fitting clothes that could easily catch fire. Instead, wear something tight-fitting, or wear a securely fastened apron.

Bedrooms are prime locations for undetected circuit overloads.
The bedroom is one of the most common spots for electrical house fires to start. People leave phones, laptops, and other electrical devices on charge overnight, overheating and overloading power sockets. Faulty or malfunctioning lights and cords can also cause fires, as can electric blankets and space heaters left on overnight. Bedroom fires are hazardous and have a higher potential for tragedy because they often start when people are asleep. The first items that usually go up in flames are bedding, curtains, carpets, and other bedroom comforts. To prevent these fires from starting:

- Ensure loose or unsafe-looking power wall outlets are fixed immediately.
- Replace chargers, extension leads, lighting cords, and other power cords as soon as they become less efficient. Employ the golden rule: if you can see wires, you need a replacement.
- Don’t overload wall outlets by having too many devices, extension leads, or power strips plugged in.
- When devices are not in use, unplug them until you need them again.
- Ensure space heaters are at least 3 ft. away from any flammable materials.

Fireplaces are often an impressive feature of the home’s living room, with their beautiful ornate details serving as a statement piece for the space. However, with these decorative furnishings also comes a lot of maintenance, not just the occasional polishing and dusting of the mantel. If not regularly cleaned and maintained, they have the potential to set alight. With this in mind, another leading cause of house fires is heating equipment, with local fire departments responding to around 48,530 calls relating to heating system fires during 2014-2018. The NFPA cited chimneys as the heat source most likely to start a fire during this time. Besides chimneys and fireplaces, other heat sources contributing to living room fires include space heaters, candles, and wood-burning stoves.
However, there are safety precautions you can take to prevent these appliances from starting a fire in your home:

- Have your chimney professionally cleaned before the cooler months set in.
- Keep flammable items, including curtains and upholstered furniture, around 3 ft. from your heating equipment.
- Install a fireplace screen to stop live embers from escaping.
- Never leave a fire unattended, and make sure it’s completely put out before leaving your house.
- Use the recommended fuel to power your heating equipment. You will be able to find this in your manufacturer’s guide; if not, consult the internet.
- Teach children the importance of fire safety, and enforce the 3 ft. rule to ensure they don’t burn themselves or accidentally ignite something in your home.

If the fact that smoking is hazardous to your health isn’t enough to turn you off, it is also a leading cause of house fires. Between 2012 and 2016, smoking was the leading cause of house fire deaths, with one of every 31 smoking-material-related fires resulting in death. If cigarette butts fall on carpets, upholstered furniture, curtains, or other flammable materials, a straightforward mistake could lead to a dangerous, out-of-control inferno. These fires almost always occur when household residents are asleep, contributing to the higher death rate. With a single carelessly dropped cigarette providing sufficient fuel to ignite and spread an uncontrollable fire, it’s critical to be wary. Before you smoke inside, consider the potentially deadly consequences. To prevent house fires caused by smoking:

- Always try to smoke outdoors.
- Ensure your cigarette is completely out before leaving your ashtray unattended.
- Don’t leave lighters, matches, or cigarettes within reach of children.

Prevent a House Fire With These Tips

Nobody wants to be involved in a house fire. Along with the property devastation and emotional distress, house fires may also set you back thousands of dollars.
Not only does a house fire threaten the safety of you and your family members, but it can also destroy sentimental items or heirlooms that are irreplaceable. By understanding the leading causes of house fires and how you can prevent them, you’ll be able to protect your family and your personal property more effectively.
It’s been common to use temperature terminology, specifically a range from cold to hot, to describe the levels of tiered service available to data storage customers. The levels have been differentiated according to how crucial to current business the stored data is and how frequently it will be accessed. These terms likely originated according to where the data was historically stored: hot data was close to the heat of the spinning drives and the CPUs, and cold data was on tape or a drive far away from the data center floor. There are no standard industry definitions of what hot and cold mean when applied to data storage, so you’ll find them used in different ways, which makes comparing services challenging. Generally, though, hot data requires the fastest and most expensive storage because it’s accessed more frequently, and cold (or cooler) data that is accessed less frequently can be stored on slower, and consequently, less expensive media. The terms are still used by the major storage vendors to describe their tiered storage plans. Below, we’ll get into why these terms have become less useful for anticipating both storage cost and performance thanks to the advent of less expensive and more efficient storage offerings, such as hot cloud storage, that effectively offer hot storage performance at cold storage prices. Defining Hot Storage Hot storage is data that needs to be accessed right away. If the stored information is business-critical and you can’t wait for it when you need it, that’s a candidate for hot storage. To obtain the fast data access required for hot data storage, the data is commonly stored in hybrid or tiered storage environments. The hotter the service, the more likely that it will use the latest drives, fastest transport protocols, and be located near to the client or in multiple regions as needed. Cloud data storage providers charge a premium for hot data storage because it’s resource-intensive. 
Microsoft’s Azure Hot Blobs and Amazon AWS services don’t come cheap. Data stored in the hottest tier might use solid-state drives, which are optimized for lower latency and higher transactional rates compared to traditional hard drives. In other cases, hard disk drives are more suitable for environments where the drive is heavily accessed due to their higher durability standing up to intensive read/write cycles. No matter the storage media used, the workloads in hot data storage require fast and consistent response times. Some examples of the uses for this type of storage would be interactive video editing, web content, online transactions and the like. Hot storage services also are tailored for workloads with many small transactions, such as capturing telemetry data, messaging, and data transformation. Defining Cold Storage On the other end of the thermometer, cold (or cooler) data is data that is accessed less frequently and also doesn’t require the fast access of warmer data. That includes data that is no longer in active use and might not be needed for months, years, decades, or maybe ever. Practical examples of data suitable for cold storage include old projects, records needed to be maintained for financial, legal, HR, or other business record keeping requirements, or anything else that’s of value but not needed anytime soon. Data retrieval and response time for cold cloud storage systems are typically much slower than services designed for active data manipulation. Practical examples of cold cloud storage include services like Amazon Glacier and Google Coldline. Cold data is usually stored on lower performing and less expensive storage environments in-house or in the cloud. Tape has been a popular storage medium for cold data. LTO, Linear Tape-Open, was originally developed in the late 1990s as a low-cost storage option. 
To review data from LTO, the tapes must be physically retrieved from storage racks and mounted in a tape reading machine, making it one of the slowest, and therefore coldest, methods of storing data. Storage prices for cold cloud storage systems are typically lower than for warm or hot storage, but cold storage often incurs higher per-operation costs than other kinds of cloud storage. Access to the data typically requires patience and planning. Today, cold storage also can be used to describe purely offline storage — that is, data that’s not stored in the cloud at all, so sometimes when you hear about cold storage it is the old definition of cold storage: data that is archived on some sort of durable medium and stored in a secure offsite facility without a connection to a network. This could be data that needs to be quarantined from the internet altogether (also called air-gapped) — for example, cryptocurrencies such as Bitcoin. (See our post, Securing Your Cryptocurrency, for more information on this topic.)

Traditional Views of Cold and Hot Data Storage

| | Cold | Hot |
| --- | --- | --- |
| Access Frequency | Seldom or Never | Frequent |
| Storage Media | Slower drives, LTO, offline | Faster drives, durable drives, SSDs |

What is Hot Cloud Storage?

While structuring cloud data storage by temperature has been commonly used by the big, established cloud storage providers — Amazon, Microsoft, Google — to describe their tiered storage services and set pricing accordingly, today there are new players in data storage who, through innovation and efficiency, are able to offer cloud storage at the cost of cold storage, but with the performance and availability of hot storage. Services like our own B2 Cloud Storage fall into this category. They can compete on price with LTO and other traditionally cold storage services, but can be used for applications that are usually reserved for hot storage, such as media management, workflow collaboration, websites, and data retrieval.
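To make the price/performance trade-off concrete, here is a hedged sketch of a simplified monthly bill; the per-GB rates below are hypothetical placeholders, not any vendor's actual pricing:

```python
def monthly_cost(gb_stored, gb_read, storage_per_gb, retrieval_per_gb):
    """Simplified monthly bill: at-rest storage charge plus retrieval charge."""
    return gb_stored * storage_per_gb + gb_read * retrieval_per_gb

# 10 TB stored, 8 TB read back per month -- an "active" workload.
# Hypothetical rates: hot costs more at rest; cold charges extra to read data back.
hot = monthly_cost(10_000, 8_000, storage_per_gb=0.020, retrieval_per_gb=0.000)
cold = monthly_cost(10_000, 8_000, storage_per_gb=0.004, retrieval_per_gb=0.030)
print(f"hot tier:  ${hot:.2f}")   # $200.00
print(f"cold tier: ${cold:.2f}")  # $280.00
```

With heavy retrieval, the nominally cheaper cold tier ends up costing more, which is why access frequency, not just capacity, should drive tier choice.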
The new model is so effective and efficient that customers have found it economical to migrate away altogether to cloud storage from slow and inconvenient cold storage and archival systems. This trend is continuing, so it will be interesting to see what happens to the traditional temperature terms as the boundaries between hot and cold blur due to new efficiencies, technologies, and services. What Temperature Is Your Cloud Storage? Organizations will vary in their needs so they’ll have different approaches to the question of where to store their data. It’s imperative to an organization’s bottom line that they don’t pay for more than what they need. Have a different idea of what hot and cold storage are? Have questions that aren’t answered here? Join the discussion in the comments. • • • If you’d like to experience the latest in hot cloud storage at cold storage prices, you can give B2 a try. Get started today and you’ll get the first 10GB free! Note: This post was updated from March 7, 2017. — Editor
This article is part of "the philosophy of artificial intelligence," a series of posts that explore the ethical, moral, and social implications of AI today and in the future.

Last week, I wrote an analysis of "Reward Is Enough," a paper by scientists at DeepMind. As the title suggests, the researchers hypothesize that the right reward is all you need to create the abilities associated with intelligence, such as perception, motor functions, and language. This is in contrast with AI systems that try to replicate specific functions of natural intelligence, such as classifying images, navigating physical environments, or completing sentences. The researchers go as far as suggesting that with a well-defined reward, a complex environment, and the right reinforcement learning algorithm, we will be able to reach artificial general intelligence, the kind of problem-solving and cognitive abilities found in humans and, to a lesser degree, in animals.

The article and the paper triggered a heated debate on social media, with reactions ranging from full support of the idea to outright rejection. Of course, both sides make valid claims. But the truth lies somewhere in the middle. Natural evolution is proof that the reward hypothesis is scientifically valid. But implementing the pure reward approach to reach human-level intelligence has some very hefty requirements. In this post, I'll try to disambiguate in simple terms where the line between theory and practice stands.

In their paper, the DeepMind scientists present the following hypothesis: "Intelligence, and its associated abilities, can be understood as subserving the maximisation of reward by an agent acting in its environment." Scientific evidence supports this claim. Humans and animals owe their intelligence to a very simple law: natural selection.
I'm not an expert on the topic, but I suggest reading The Blind Watchmaker by biologist Richard Dawkins, which provides a very accessible account of how evolution has led to all forms of life and intelligence on our planet. In a nutshell, nature gives preference to lifeforms that are better fit to survive in their environments. Those that can withstand challenges posed by the environment (weather, scarcity of food, etc.) and other lifeforms (predators, viruses, etc.) will survive, reproduce, and pass on their genes to the next generation. Those that don't get eliminated.

According to Dawkins, "In nature, the usual selecting agent is direct, stark and simple. It is the grim reaper. Of course, the reasons for survival are anything but simple — that is why natural selection can build up animals and plants of such formidable complexity. But there is something very crude and simple about death itself. And nonrandom death is all it takes to select phenotypes, and hence the genes that they contain, in nature."

But how do different lifeforms emerge? Every newly born organism inherits the genes of its parent(s). But unlike the digital world, copying in organic life is not an exact process. Therefore, offspring often undergo mutations, small changes to their genes that can have a huge impact across generations. These mutations can have a simple effect, such as a small change in muscle texture or skin color. But they can also become the core for developing new organs (e.g., lungs, kidneys, eyes), or shedding old ones (e.g., tail, gills). If these mutations help improve the chances of the organism's survival (e.g., better camouflage or faster speed), they will be preserved and passed on to future generations, where further mutations might reinforce them.
For example, the first organism that developed the ability to parse light information had an enormous advantage over all the others that didn't, even though its ability to see was not comparable to that of animals and humans today. This advantage enabled it to better survive and reproduce. As its descendants reproduced, those whose mutations improved their sight outmatched and outlived their peers. Through thousands (or millions) of generations, these changes resulted in a complex organ such as the eye.

The simple mechanisms of mutation and natural selection have been enough to give rise to all the different lifeforms that we see on Earth, from bacteria to plants, fish, birds, amphibians, and mammals. The same self-reinforcing mechanism has also created the brain and its associated wonders. In her book Conscience: The Origin of Moral Intuition, scientist Patricia Churchland explores how natural selection led to the development of the cortex, the main part of the brain that gives mammals the ability to learn from their environment. The evolution of the cortex has enabled mammals to develop social behavior and learn to live in herds, prides, troops, and tribes. In humans, the evolution of the cortex has given rise to complex cognitive faculties, the capacity to develop rich languages, and the ability to establish social norms.

Therefore, if you consider survival as the ultimate reward, the main hypothesis that DeepMind's scientists make is scientifically sound. However, when it comes to implementing this rule, things get very complicated.

Reinforcement learning and artificial general intelligence

In their paper, DeepMind's scientists make the claim that the reward hypothesis can be implemented with reinforcement learning algorithms, a branch of AI in which an agent gradually develops its behavior by interacting with its environment. A reinforcement learning agent starts by making random actions.
Based on how those actions align with the goals it is trying to achieve, the agent receives rewards. Across many episodes, the agent learns to develop sequences of actions that maximize its reward in its environment. According to the DeepMind scientists, "A sufficiently powerful and general reinforcement learning agent may ultimately give rise to intelligence and its associated abilities. In other words, if an agent can continually adjust its behaviour so as to improve its cumulative reward, then any abilities that are repeatedly demanded by its environment must ultimately be produced in the agent's behaviour."

In an online debate in December, computer scientist Richard Sutton, one of the paper's co-authors, said, "Reinforcement learning is the first computational theory of intelligence… In reinforcement learning, the goal is to maximize an arbitrary reward signal." DeepMind has a lot of experience to back this claim. They have already developed reinforcement learning agents that can outmatch humans in Go, chess, Atari, StarCraft, and other games. They have also developed reinforcement learning models to make progress on some of the most complex problems of science.

The scientists further wrote in their paper, "According to our hypothesis, general intelligence can instead be understood as, and implemented by, maximising a singular reward in a single, complex environment [emphasis mine]." This is where the hypothesis separates from practice. The keyword here is "complex." The environments that DeepMind (and its quasi-rival OpenAI) have so far explored with reinforcement learning are not nearly as complex as the physical world. And they still required the financial backing and vast computational resources of very wealthy tech companies. In some cases, they still had to dumb down the environments to speed up the training of their reinforcement learning models and cut down the costs.
In others, they had to redesign the reward to make sure the RL agents did not get stuck in the wrong local optimum. (It is worth noting that the scientists do acknowledge in their paper that they can't offer a "theoretical guarantee on the sample efficiency of reinforcement learning agents.")

Now, imagine what it would take to use reinforcement learning to replicate evolution and reach human-level intelligence. First you would need a simulation of the world. But at what level would you simulate the world? My guess is that anything short of quantum scale would be inaccurate. And we don't have a fraction of the compute power needed to create quantum-scale simulations of the world.

Let's say we did have the compute power to create such a simulation. We could start at around 4 billion years ago, when the first lifeforms emerged. You would need to have an exact representation of the state of Earth at the time — and we still don't have a definite theory on that initial state. An alternative would be to create a shortcut and start from, say, 8 million years ago, when our monkey ancestors still lived on Earth. This would cut down the time of training, but we would have a much more complex initial state to start from. At that time, there were millions of different lifeforms on Earth, and they were closely interrelated. They evolved together. Taking any of them out of the equation could have a huge impact on the course of the simulation.

Therefore, you basically have two key problems: compute power and initial state. The further you go back in time, the more compute power you'll need to run the simulation. On the other hand, the further you move forward, the more complex your initial state will be. And evolution has created all sorts of intelligent and non-intelligent lifeforms; reproducing the exact steps that led to human intelligence, without any guidance and only through reward, is a hard bet.
Many will say that you don't need an exact simulation of the world and only need to approximate the problem space in which your reinforcement learning agent wants to operate. For example, in their paper, the scientists mention the example of a house-cleaning robot: "In order for a kitchen robot to maximise cleanliness, it must presumably have abilities of perception (to differentiate clean and dirty utensils), knowledge (to understand utensils), motor control (to manipulate utensils), memory (to recall locations of utensils), language (to predict future mess from dialogue), and social intelligence (to encourage young children to make less mess). A behaviour that maximises cleanliness must therefore yield all these abilities in service of that singular goal."

This statement is true, but it downplays the complexities of the environment. Kitchens were created by humans. For instance, the shape of drawer handles, doorknobs, floors, cupboards, walls, tables, and everything you see in a kitchen has been optimized for the sensorimotor functions of humans. Therefore, a robot that wanted to work in such an environment would need to develop sensorimotor skills similar to those of humans. You can create shortcuts, such as avoiding the complexities of bipedal walking or hands with fingers and joints. But then there would be incongruencies between the robot and the humans who will be using the kitchens. Many scenarios that would be easy for a human to handle (walking over an overturned chair) would become prohibitive for the robot.

Also, other skills, such as language, would require the robot's infrastructure to be even more similar to that of the humans who would share the environment. Intelligent agents must be able to develop abstract mental models of each other to cooperate or compete in a shared environment. Language omits many important details, such as sensory experience, goals, and needs.
We fill in the gaps with our intuitive and conscious knowledge of our interlocutor's mental state. We might make wrong assumptions, but those are the exceptions, not the norm.

And finally, developing a notion of "cleanliness" as a reward is very complicated, because it is very tightly linked to human knowledge, life, and goals. For example, removing every piece of food from the kitchen would certainly make it cleaner, but would the humans using the kitchen be happy about it? A robot that has been optimized for "cleanliness" would have a hard time co-existing and cooperating with living beings that have been optimized for survival.

Here, you can take shortcuts again by creating hierarchical goals, equipping the robot and its reinforcement learning models with prior knowledge, and using human feedback to steer it in the right direction. This would help a lot in making it easier for the robot to understand and interact with humans and human-designed environments. But then you would be cheating on the reward-only approach. And the mere fact that your robot agent starts with predesigned limbs and image-capturing and sound-emitting devices is itself an integration of prior knowledge.

In theory, reward alone is enough for any kind of intelligence. But in practice, there's a tradeoff between environment complexity, reward design, and agent design. In the future, we might be able to achieve a level of computing power that will make it possible to reach general intelligence through pure reward and reinforcement learning. But for the time being, what works are hybrid approaches that involve learning and complex engineering of rewards and AI agent architectures.
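For readers who want a concrete anchor for the reward-maximization loop discussed throughout this piece — act, observe a reward, adjust behavior — here is a minimal tabular Q-learning sketch in a toy five-state corridor. The environment, hyperparameters, and reward are invented for illustration and are orders of magnitude simpler than anything discussed in the paper.

```python
# Minimal tabular Q-learning in a toy corridor: states 0..4, reward only
# at the rightmost state. Illustrative only — not the paper's setup.
import random

random.seed(0)

N_STATES = 5
ACTIONS = [-1, +1]                  # step left or step right
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, eps = 0.5, 0.9, 0.3   # learning rate, discount, exploration

for episode in range(500):
    s = 0
    while s != N_STATES - 1:
        # Epsilon-greedy: mostly exploit the current estimate, sometimes explore.
        if random.random() < eps:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s_next = min(max(s + a, 0), N_STATES - 1)
        reward = 1.0 if s_next == N_STATES - 1 else 0.0
        # Q-learning update: nudge toward reward + discounted future value.
        best_next = max(Q[(s_next, act)] for act in ACTIONS)
        Q[(s, a)] += alpha * (reward + gamma * best_next - Q[(s, a)])
        s = s_next

policy = [max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N_STATES - 1)]
print(policy)  # converges to [1, 1, 1, 1]: always move right
```

Nothing in the code tells the agent that "right" is good; the behavior emerges purely from the reward signal — which is exactly the paper's point, and exactly what becomes hard to scale when the environment is the physical world rather than a five-state corridor.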
By the end of this article, you should be able to distinguish between these concepts.

Copyright: kdnuggets.com – "The Evolution From Artificial Intelligence to Machine Learning to Data Science"

Recent years have seen many breakthroughs and discoveries in artificial intelligence (AI), machine learning (ML), and data science. These fields intersect so much that they have become synonymous. Unfortunately, it has caused some ambiguity. This guide aims to clarify the confusion by defining the terms and explaining how they are applied to business and science. We won't cover them in depth; however, by the end of this article, you should be able to distinguish between these concepts.

Defining Artificial Intelligence

As a field, AI centers around creating flexible automated systems. The ultimate goal of AI is to build systems that can function intelligently and independently, much as human beings can. As such, AI must be able to mimic some of the senses that human beings have. They must at least be able to hear, see, and sometimes sense touch and smell. The AI must then be able to interpret stimuli received through these senses and respond accordingly. Thus, different fields and branches under the AI umbrella are dedicated to giving machines and systems these abilities.

Major Branches of AI

The major branches of AI are:
- Machine Learning (ML)
- Deep Learning (DL)
- Natural Language Processing (NLP)
- Fuzzy Logic
- Expert Systems
- Neural Networks

These concepts aren't separate fields from artificial intelligence but make modern and future implementations of AI possible.

Stages of AI

The three phases/stages of AI are as follows:
- Artificial Narrow Intelligence (ANI) is the current stage of artificial intelligence. It's also known as weak AI and describes systems of AI that can perform a limited set of defined tasks.
- Artificial General Intelligence (AGI): We're slowly approaching this stage, also known as strong AI. It describes AI capable of reasoning just as well as human beings. Some academics feel that the AGI label should be limited to sentient AI.
- Artificial Super Intelligence (ASI): This is a hypothetical stage of AI where the intelligence and capabilities of computers surpass those of human beings. For now, ASI does not exist outside of the realms of science fiction. […]

Read more: www.kdnuggets.com
According to MobiusMD, there are over 350,000 mHealth apps available in major app stores, a number that includes medical as well as wellness, health and fitness apps. Available apps have roughly doubled since 2015, driven by increased smartphone adoption and ongoing heavy investment in the digital health market.

MobiusMD also found that 87 million people in the US used a health or fitness app monthly in 2020. That's about 30% of adult smartphone owners, a number that's expected to remain relatively stable over the next three years. And MobiusMD also noted that most smartphone users have used their device to gather health-related information, with Pew Research Center putting that number at 62%, making mHealth a more common smartphone activity than online banking (57%), job searches (42%) or accessing school work or educational content (30%).

So one in three smartphone users in the US uses an mHealth app at least once a month, and most of those apps gather some level of protected health information (PHI). That's a lot of apps that have to be HIPAA compliant.

Mobile Apps: Security Implications of HIPAA Compliance

Title II of the Health Insurance Portability and Accountability Act (HIPAA) sets the rules for sharing personal health information and preventing unsanctioned use. Specifically, it covers patient privacy protections and security controls for health and medical records and other forms of Protected Health Information (PHI) and Electronic Protected Health Information (ePHI). The HIPAA Security Rule requires appropriate administrative, physical and technical safeguards to ensure the confidentiality, integrity, and security of electronic protected health information. Specifically with regard to mobile apps, ensuring privacy and confidentiality can be achieved with secure authentication, data-at-rest encryption and data-in-transit encryption. HHS has published great resources for mobile health app developers.
Secure Authentication in Mobile Health Apps and Mobile Wellness Apps

In order to ensure good data protection in mHealth apps, app makers should first ensure secure authentication to the app. Access to mobile health apps should at a minimum require a patient to enter their username and password each time they open the app. Apps should also log a patient out after a certain period of non-use. Preferably, mHealth apps should also use biometric authentication (FaceID or TouchID) or multi-factor authentication to achieve a higher level of secure authentication.

Data-at-Rest Encryption in Mobile Health Apps and Mobile Wellness Apps

The second element of data protection is ensuring that all patient information, not just protected health information, is stored encrypted in the app. mHealth app makers can achieve this by encrypting the application sandbox with AES-256 encryption. In addition, strings, resources, and in-app preferences may also store patient data, so they should be encrypted as well.

Data-in-Transit Encryption in Mobile Health Apps and Mobile Wellness Apps

Finally, app makers should ensure that the mobile health app communicates with backend servers over an encrypted channel so that patient data sent or received cannot be intercepted by a man-in-the-middle or other network-based attack. In addition, app makers should take measures to validate digital certificates (both client-side and server-side) and ensure the authenticity of certificates and CAs.
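The secure-authentication pattern described above — verifying the password on every open plus forcing re-login after a period of non-use — can be sketched as follows. The class, constants, and timeout value are illustrative assumptions, not from any real mHealth SDK, and the PBKDF2 iteration count is kept low for the example.

```python
# Hedged sketch: salted PBKDF2 password verification + inactivity timeout.
# Names and numbers are illustrative; production apps should follow
# current OWASP guidance for iteration counts.
import hashlib
import hmac
import os
import time

PBKDF2_ITERATIONS = 100_000
SESSION_TIMEOUT = 300  # force re-login after 5 minutes of non-use

def hash_password(password: str, salt: bytes) -> bytes:
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, PBKDF2_ITERATIONS)

class AppSession:
    def __init__(self, salt: bytes, stored_hash: bytes):
        self.salt = salt
        self.stored_hash = stored_hash
        self.last_active = None  # None means logged out

    def login(self, password: str) -> bool:
        candidate = hash_password(password, self.salt)
        # Constant-time comparison avoids leaking timing information.
        if hmac.compare_digest(candidate, self.stored_hash):
            self.last_active = time.monotonic()
            return True
        return False

    def is_authenticated(self) -> bool:
        if self.last_active is None:
            return False
        if time.monotonic() - self.last_active > SESSION_TIMEOUT:
            self.last_active = None  # timed out: require a fresh login
            return False
        return True

salt = os.urandom(16)
session = AppSession(salt, hash_password("correct horse", salt))
assert not session.login("wrong password")
assert session.login("correct horse") and session.is_authenticated()
```

A real app would add biometric or multi-factor checks on top of this, and would keep the stored hash inside the encrypted sandbox rather than in memory.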
The Cost of a HIPAA Breach

Given the rising fear of a HIPAA breach — a fear fueled by an increasing spate of high-profile penalties, such as the $1.2 million settlement between the OCR and a Boston specialty hospital after a physician's laptop with ePHI was stolen, and the $1.7 million settlement between the OCR and the Alaska Department of Health and Social Services after a USB with ePHI was stolen — some CIOs in the healthcare sector are moving to lock down mobile devices, and therefore significantly limit ePHI and other confidential data access and control. Yet, while this approach solves compliance needs, it triggers two difficult and potentially intractable problems: surging patient demand, and circumvention via BYOD.

With respect to the first problem, the drive toward accessing and transmitting ePHI is not exclusively driven by physicians and other healthcare professionals; patients are also looking to reap the benefits. As noted by the Frost & Sullivan white paper Moving Beyond the Limitations of Fragmented Solutions, "as our healthcare system transitions to electronic health records (EHR), consumers are demanding digital access to personal health information." As such, any move to limit the accessibility and sharing of ePHI is ultimately going to prevent this patient/consumer demand from being met.

And with respect to the second problem, while IT staff can lock down corporately-owned devices, they have no way to maintain total control over personally-owned devices (BYOD), and even partial control raises user privacy concerns. Nor, frankly, is such control a practical expectation. As Ken Congdon, the editor-in-chief of Health IT Outcomes, notes: "Unlike other IT initiatives that are the brainchild of the IT department or driven by federal incentives, the BYOD movement is being propelled by the end users themselves — namely doctors and nurses. An overwhelming number of clinicians want to use their own mobile devices (e.g.
tablets, smartphones) on the job. Denying these caregivers a means to do so in line with IT policies will only encourage some to sidestep IT roadblocks and use personal devices haphazardly. Better to find a way to address the BYOD demand as securely as possible than to stand in the path of the avalanche."

Given the above, it's clear that healthcare sector CIOs appear stuck between the proverbial rock and a hard place. On the one hand, they wisely fear the consequences of a HIPAA compliance breach, which could lead to huge fines and major, long-term reputation damage. On the other hand, locking down devices to prevent access to files flies in the face of physician and patient demand, and fosters BYOD use that could ironically lead to data leakage rather than prevent it. However, this only appears to be an unsolvable problem, because there is an option for CIOs that allows them to choose compliance and productivity, rather than one or the other.

Achieve HIPAA Compliance While Protecting Patient Data in Mobile Health and Wellness Apps

Major mHealth and wellness providers use Appdome's no-code mobile security and fraud prevention platform to implement a full suite of mobile app security, privacy, data protection and compliance features into any iOS or Android app – instantly, without any coding. This ensures that their mobile apps have the security needed to protect user and patient data and achieve HIPAA compliance, as well as meet other regulations. Healthcare and wellness organizations retain complete control over ePHI on mobile devices so they can identify and thwart misuse, and fully comply with the HIPAA Security Rule. Doctors and other healthcare professionals, along with authorized agents, brokers and members, get access to the ePHI they need in any mobile app, and they can rest assured that patient data is protected.
Disaster planning has long been treated as a sideline project for IT departments, not worthy of serious focus or budgetary concern. Then came Sept. 11, and in its wake the need to prepare for catastrophe has become a front-and-center necessity for businesses big and small. Indeed, business continuity planning has become an industry unto itself, spawning a new and revised view of what it means to prepare for a disaster.

Today, the idea that a disaster will strike has shifted from a possibility to a probability in the minds of corporate officials, forcing technology staffs to realize that downtime for any reason is unacceptable. Highlighting this is the attention CFOs and CEOs are now placing on Information Technology (IT) resources. Recovery Time Objectives (the amount of time it takes to bounce back from a disaster to full productivity) and Recovery Point Objectives (the amount of acceptable data loss after a disaster) have shrunk to the point that tape backup systems can no longer protect the enterprise.

Companies are now expected to ensure Disaster Prevention — part of business continuity, and the process by which vital data resources are protected and operate uninterrupted regardless of circumstances. This is the proverbial "five nines" of reliability and uptime (99.999% availability), resulting in a maximum of a few minutes of unscheduled downtime annually — a lofty goal, but a difficult one to meet in the best of circumstances.

How did business reach this level of necessity? Where did the idea of business continuity evolve from, and how did we reach this new plateau of standards to keep business up and running? What circumstances brought the enterprise from veritable carelessness only a few short years ago to the vital vigilance that we see today? While Sept. 11 may have shone a spotlight on this necessity, the evolution of data protection systems has been an ongoing process since the dawn of Information Technology.
New Technologies, New Problems

Prior to the dot-com craze, enterprises kept data either on paper or some other physical media (i.e., hardcopy, punch-cards, etc.). Not being stored in digital form, business data could theoretically survive any digital disaster. However, it was still susceptible to physical disasters such as an earthquake or fire, so to combat these issues companies began storing physical copies off-site in a repository or secure facility. (This is a critical concept, as this theory of off-site storage later crosses over into the digital world in a nearly identical form.)

In addition, mainframe systems of the time were backed up to heavily protected magnetic tape. The data kept on them was generally used in conjunction with physical hardcopy, so that losing the system for a day to restore data wouldn't bring down the enterprise.

Then came the widespread deployment of desktop computers, and data was no longer safe, because employees were not storing vital corporate data on the company mainframe or in physical files. Suddenly, power fluctuations, physical anomalies, and a host of other disasters could literally wipe out valuable data without any potential of restoring it. Recognizing the risk this presented, backup systems and office-based servers emerged to protect corporate data on PCs the same way that paper files and mainframe magnetic tapes were protected.

Moving forward, smaller server systems, which offered a more flexible and economical alternative to the older, slower mainframe systems, were implemented. Not surprisingly, mainframe systems began to dwindle and disappear, even in the enterprise space. This new computing power presented a whole new host of potential problems, not the least of which was data loss. Once again there were worries about the desktops getting vital data to the server systems, and the fact that the server systems themselves were barely more secure than the desktops.
The newest solution became constructing more complex backup systems to shift the data from the volatile servers to a somewhat more stable backup medium, such as magnetic tape. Having tapes to restore lost data or even entire data-systems in the event of a disaster seemed like the perfect solution.

Downtime Drains Bottom Line

But companies became more and more dependent on data-systems, and it became apparent that waiting to restore data caused not only revenue loss, but also damaged customer relationships and the company's reputation. To address the need for virtually no downtime, disaster recovery services were born. The field of Disaster Recovery Services (DRS) is extraordinarily broad, but its purpose is simple: restore data to a downed or corrupted server system or other data-system as quickly as possible. DRS extends from re-configuring tape systems to make them more reliable and faster, to keeping duplicate servers on standby so they can take over at a moment's notice.

Business Continuity Planning (BCP) is really Disaster Prevention in action: the science of determining ways to allow data-systems to continue working even if an entire physical location is downed or destroyed, with the baseline idea being that data-systems are portable objects. This is a fundamental shift in thinking from the days of mainframe-based enterprise computing, where the system was, for the most part, the hardware. Today, operating systems and software packages are considerably more important than the hardware, as they determine the capabilities of the system itself. Once business IT staff accepted this, the doors of BCP were flung wide open and the distributed data-system emerged. No longer was the corporate data-system at the mercy of a single point of failure. The entire data-center could be grouped, clustered, and manipulated as a single entity to protect the data of the enterprise.
For example, IT staffs formed load-balanced websites with groups of servers that could all share the load of a single or even multiple downed machines, and created e-mail server groups that spanned the country, each one able to hold messages for an offline counterpart. But, as with any great plan, there was one significant flaw — the data-center itself could be the single point of failure. As we have seen recently in California, even the most redundant data-center can fall victim to a power grid failure, and when the diesel backup generators finally run out of power, the data-center, and corporate data, go offline. BCP had come a long way, but still had a long way to go before achieving the mythical "five nines."

Transcending Physical Boundaries

Stepping up, mega-storage companies like EMC produced storage systems that could replicate themselves to other data-centers not located in the same physical vicinity — a theory similar to what businesses had used years before to protect physical data like punch-cards and hardcopy. Replicating meant the entire body of corporate data could be kept up to date in some other location, thereby protecting against the possibility of failure due to the loss of a physical location. Initially it seemed like an ideal solution, but it was really only a reversion to Disaster Recovery, just on a much larger scale. The data was safe in a secondary location, but inaccessible because the servers were still located at and attached to the primary storage device. So, until the primary location could be brought back online, the data was unavailable and the company was losing revenue. Clustering, while a good solution for single-site High Availability, could not be stretched to multiple sites and therefore couldn't protect data in the event of the loss of any physical location. Another leap forward was required to fully address the situation.
Expanding on data replication, companies like NSI Software began to develop BCP software that allowed both the data-system and its data to survive a physical site failure by transcending physical boundaries. Enabling the enterprise to eliminate the single point of failure in a cost-effective manner, real-time replication software products have made ensuring business continuity an attainable feat for businesses of all sizes.

With replication software, clusters no longer needed to be physically connected to a shared storage array, and with High Availability systems, stand-alone machines could stand in for each other no matter where they were physically located. In addition, by utilizing platform- and storage-independent data structures, these replication products allowed IT staff to create duplicate hardware and software configurations in multiple physical locations that could share data and keep each other up to date. Now, systems could stand in for each other seamlessly at a moment's notice without end-users having to perform any tasks or even noticing the change. Essentially, the end user can continue to work, uninterrupted, while the data-systems handle all the tasks of taking over the data-processing load for their downed counterparts.

Examples of the value and capabilities of replication software are easy to see. Failure of an Exchange e-mail system in New York City can now seamlessly switch to a physical system in Detroit, without the CEO (or anyone else) missing a single message. Knowing the information is available, the IT staff can then correct the issues in NYC and fail back the physical systems to restore them to their original state, without the pressure and rushing that often causes even more damaging mistakes than the original outage.
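The replicate-then-reroute behavior in the Exchange example can be sketched as a toy model. The sites, class names, and data below are invented for illustration; real replication products handle continuous byte-level replication, consistency, and failback, which this sketch omits.

```python
# Toy sketch of active/standby failover backed by replication.
# Illustrative only — real products replicate continuously and
# coordinate consistency and failback.

class Server:
    def __init__(self, site):
        self.site = site
        self.data = {}
        self.alive = True

def replicate(primary, standby):
    # Real-time replication keeps the standby's copy of the data current.
    standby.data = dict(primary.data)

def route(primary, standby):
    # End users are transparently routed to whichever system is up.
    return primary if primary.alive else standby

nyc, detroit = Server("NYC"), Server("Detroit")
nyc.data["inbox"] = ["message 1"]
replicate(nyc, detroit)

nyc.alive = False               # the New York site goes down
active = route(nyc, detroit)
print(active.site, active.data["inbox"])  # Detroit ['message 1']
```

The key design point is that routing and replication are decoupled: because the standby already holds current data, switching the route is all that failover requires, and no end user has to do anything.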
Finally, the goal of achieving the “five nines” can now be met, signaling the conclusion of a monumental paradigm shift: from keeping everything on physical media that could be duplicated off-site to a digital world of self-healing data-systems that create the truly digital, always-on enterprise.

Mike Talon has been working in the Information Technologies field for more than 10 years, specializing in data protection and disaster prevention. He currently works for NSI Software, a developer of data replication technologies and services. He can be reached at firstname.lastname@example.org.
VPN stands for Virtual Private Network. In simple terms, it is a service that protects your privacy and internet connection while online, as well as helps bypass censorship and other restrictions. It does this by creating an encrypted tunnel through which to send your data. In a sense, a VPN acts as a middleman between your device and remote servers, and carries your data over existing networks without exposing it to the public internet. In this article, we’ll explain what a VPN is and how it works in more detail, as well as cover the ways it can be useful in your daily life.

What does a VPN do?

Before we get into how exactly a VPN works, let’s take a look at what it can do, and how you can use it to your advantage while using the internet.

Hide your online activities

If you live in an oppressive regime, the government could use your internet history against you. If you’re connecting directly, your Internet Service Provider (ISP) knows every domain you visit. Using a VPN to secure the connection helps you avoid such surveillance – all your ISP will see is you connecting to a VPN. In some cases, they won’t even know that much. Even if you’re living in a democracy, there are reasons why you may want to hide your online activities. A prime example is torrenting – while not illegal by itself, torrenting is mainly used for (and associated with) distributing copyrighted material. Being caught while torrenting can lead to legal issues, which is why many torrenters use VPNs to hide their IP address and protect their identity. You may have also heard that certain companies make a profit by selling your data. When you have a lot of data on someone, you can make accurate prediction models. For example, it makes much more sense for businesses that sell smart dog collars to target people who own dogs. A VPN encrypts your traffic data and makes it invisible to your ISP or any other interested parties.
Defeat government censorship

Sometimes ISPs block particular sites or online services, such as social media and news. This practice is especially common in countries with strict internet and overall content censorship. With a VPN, you can bypass government censorship and blocks, because you’ll be connecting to the internet through a server in another country that doesn’t have internet content censorship laws. And even if you live in a country that employs advanced anti-VPN technology like Deep Packet Inspection (DPI), such as China, you don’t need to worry. Many VPNs have traffic obfuscation tools that hide your VPN usage from such technologies.

Access streaming content

You might have heard that Netflix libraries aren’t all made equal. You pay more for your Netflix subscription in Switzerland but get a smaller library of movies and TV series than users in the US. It doesn’t sound right. To solve this, many people use a VPN because it lets you watch Netflix from anywhere, as if you’re located somewhere else – just choose a VPN server in that country. This enables you to remove limits from the content libraries of the services you have subscribed to. Since many entertainment platforms are moving to subscription-based models with third-party copyright holders licensing content based on region, expect more of this in the future.

Get more flexibility with online purchases

One of the best-known lifehacks is that it’s best to buy plane tickets and make hotel reservations in Incognito Mode. Even though a VPN doesn’t do the same thing, it will prevent you from falling prey to price discrimination – the practice of charging a different price for the same goods or services depending on your location. Many retailers are guilty of price discrimination, and there’s a good chance that your next purchase will be cheaper if you’re using a VPN.
Plus, if you’re abroad and want to order something for when you get back home, a VPN might be the only way to access the local version of the webpage.

Bypass ISP restrictions

ISPs sometimes deliberately slow down your internet connection by throttling bandwidth. This is especially true for P2P traffic – torrenting large files at high speed can be heavy on the internet infrastructure, and throttling is their way of solving that. By using a VPN, you can hide the contents of your traffic, making it harder to pinpoint you and impose download speed limits. This is also one of those rare situations where a VPN can actually increase your connection speed. ISPs also tend to collect data about their users, and use that information for targeted advertising purposes. Having your internet activities tracked and used against your own will is not fun. A VPN can prevent this by encrypting your traffic data and making tracking you virtually impossible.

Secure public wifi networks

A VPN’s primary function is security and privacy protection. It comes in particularly handy when you’re faced with using public wifi in airports or cafes. In these settings, a crafty hacker could set themselves up between you and the router, intercepting your traffic in what is known as a man-in-the-middle attack. A VPN can prevent this and ensure the utmost online privacy, because any intercepted traffic would be encrypted and useless. Your IP address could also be useful to hackers. Because an IP address reveals your real location, you could become the victim of doxxing or DDoS attacks. A VPN gives you a different IP address when you connect and prevents such events from happening.

Access blocked websites

A VPN is useful when you need to access websites that are blocked on your local network, such as at school or work, as well as websites that are only available in certain areas or countries.
These sites may include social media, news channels, gaming and streaming platforms, and anything else that might be considered a distraction from work or school activities. A VPN will change your IP address and route your traffic through its servers, thus allowing you to bypass school or corporate network restrictions and access any and all online content you want.

How does a VPN work?

A VPN changes your real IP address by rerouting your traffic through one of its servers via an encrypted tunnel. It is a combination of network infrastructure such as VPN servers and VPN software. Simply put, you need a remote server and a VPN tunneling protocol (plus a VPN client app) to establish a secure connection. Let’s look at an example of how visiting Amazon would work without a VPN. You type the URL (https://amazon.com) into the address bar of your browser and press Enter. The Amazon homepage loads and you can do your Christmas shopping. Here’s how it works in more technical terms:
- Your browser contacts a Domain Name Server (DNS) assigned by your ISP, asking it to translate the website domain into an IP address.
- Knowing the Amazon server’s IP address, your device can now send a request and retrieve the website.
- Your ISP routes your request to the Amazon server and returns the response.

This is very simplified, but that’s essentially how any connection works if you’re not using a VPN. In this example, the Amazon website is secure and uses TLS/SSL (HTTPS), so your connection is encrypted. If you visit an insecure website that doesn’t have TLS, your data won’t be encrypted.
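The first step in the sequence above, the DNS lookup that turns a domain into an IP address, can be demonstrated with Python’s standard library. The example resolves `localhost` so it runs without network access; with a real domain, the same call would go out to your configured DNS resolver instead.

```python
import socket

# Step 1 of the plain (non-VPN) connection described above: translate a
# hostname into an IP address. getaddrinfo performs the same kind of
# lookup a browser triggers before it can contact a web server.
def resolve(hostname: str, port: int = 443) -> list[str]:
    results = socket.getaddrinfo(hostname, port, proto=socket.IPPROTO_TCP)
    # Each entry ends with an (address, port, ...) tuple; collect the
    # unique addresses.
    return sorted({entry[4][0] for entry in results})

print(resolve("localhost"))
```

When a VPN is in use, this query is instead resolved by the VPN provider’s own DNS servers inside the encrypted tunnel, which is what keeps the lookup hidden from the ISP.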
But despite the TLS encryption, this type of session still isn’t completely private:
- By sending a DNS request to your ISP, you are telling your ISP that you want to visit Amazon.com.
- Further communication through your ISP tells them what you’re looking up on Amazon.
- Amazon also knows your IP address and can therefore determine your location as well as, potentially, your identity.

Now let’s look at an example of how visiting Amazon would work if you were using a VPN:
- First, you connect to a VPN server in a country of your choosing, e.g., the UK.
- The VPN app uses a tunneling protocol to create an encrypted connection to the VPN server.
- You type amazon.com into the address bar and press Enter. This time, the DNS query is resolved by the VPN, denying your ISP knowledge of what you’re doing.
- The VPN establishes a connection between its server and the Amazon.com server.
- Traffic goes from you to the VPN server, then to Amazon’s server, and back.

Why are VPNs good for privacy?

Connecting to the internet via a remote VPN server does several things:
- It hides your IP address (and thus your location and identity) from the website or online service you’re using. In the example above, Amazon would see the VPN server’s IP address rather than your own.
- It prevents your ISP and, by extension, your government from knowing what you’re doing online – your ISP can see you’re connecting to the VPN server’s IP, but nothing beyond that point.
- It encrypts your data, protecting your privacy and security if someone intercepts it. This is particularly relevant if you’re using public wifi and visiting insecure websites, which don’t encrypt the connection via TLS/SSL.

Your browsing history can get you in a lot of trouble in certain situations. For example, imagine you’re in China and visiting a political forum where users are expressing anti-government views. Or perhaps you’re visiting a porn site as a citizen of Saudi Arabia.
Without a VPN, your ISP knows everything you’re doing on the internet. In countries with strict internet controls, ISP data is often freely available to government agencies. VPNs’ ability to redirect and encrypt traffic has made them a favorite tool for anyone seeking online security, anonymity, or simply trying to unblock censored and restricted content.

What is VPN encryption?

VPN encryption is the process of making the data traveling between a device and a VPN server unreadable to anyone who doesn’t hold the encryption key. The VPN tunnel that runs from your device to the VPN provider’s server is secured using this encryption. A VPN encrypts all of your internet traffic, including your browser, torrent, and messaging app traffic, or whatever else you may be doing on the internet. No one will be able to see or intercept your online activities because of VPN encryption. Although encryption slows down your connection a little, it does not interfere with your ability to connect to the internet. It simply makes it impossible for someone to read your network exchanges.

How does VPN encryption work?

Your data is encrypted throughout the transfer between a device and a VPN server. It gets deciphered only at the endpoint – when leaving the VPN tunnel and entering your device. VPNs use three types of cryptography: symmetric encryption, asymmetric encryption, and hashing. Here’s how VPN encryption works:
- When you connect to a VPN server, the connection performs a “handshake” between a VPN client and a VPN server. During this step, hashing is used to authenticate that the user is interacting with a real VPN server, and asymmetric encryption is used to exchange symmetric encryption keys. A few popular examples of asymmetric (or public-key) protocols used at this stage are RSA and Diffie-Hellman.
- Once the handshake is successful, symmetric encryption is used to encrypt all data passing between the user and the VPN server.
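Two of the cryptographic roles in the handshake description above, hashing for authentication and a shared symmetric key for the session, can be sketched with Python’s standard library. This is a toy illustration, not a real VPN handshake: the shared key is generated locally as a stand-in for one negotiated via Diffie-Hellman, and HMAC-SHA256 stands in for the protocol’s authentication step.

```python
import hashlib
import hmac
import secrets

# Stand-in for the session key a real client and server would agree on
# via asymmetric key exchange (e.g. Diffie-Hellman).
shared_key = secrets.token_bytes(32)

def tag(message: bytes, key: bytes) -> str:
    """HMAC-SHA256 authentication tag: hashing used to prove a message
    came from someone holding the shared key and wasn't tampered with."""
    return hmac.new(key, message, hashlib.sha256).hexdigest()

msg = b"client hello"
t = tag(msg, shared_key)

# The other side, holding the same key, recomputes and compares the tag:
assert hmac.compare_digest(t, tag(msg, shared_key))
# A tampered message fails verification:
assert not hmac.compare_digest(t, tag(b"evil hello", shared_key))
```

In a real tunnel, the negotiated key would then feed a symmetric cipher such as AES-256 to encrypt the traffic itself, not just authenticate it.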
The most common symmetric encryption cipher used by VPNs is AES (specifically, AES-256). Most top VPN services rely on the Advanced Encryption Standard (AES) cipher to seal the data that goes through – the same type of encryption that financial and government institutions use.

What is AES-256?

AES-256 stands for the Advanced Encryption Standard with a 256-bit key. It is a symmetric-key algorithm, meaning the same key is used for both encryption and decryption. Generally, it’s considered the gold standard of modern encryption. VPNs use it to create a safe tunnel for your private data exchanges. You might see weaker AES standards like AES-128. This simply means that the cryptographic key is shorter and easier (although still virtually impossible) to “brute force.” As a rule of thumb, the longer the encryption key, the more potential combinations, and the longer it would take to crack. It’s the same principle as a longer password being harder to guess. On the flip side, a longer encryption key means slower connections, because the encryption and decryption take longer. In the wild, you will most often find three variations of AES: AES-128, AES-192, and AES-256. Additionally, you may encounter different modes of operation, such as AES-256-GCM or AES-256-CBC. Not all tunneling protocols support this kind of encryption. For example, PPTP uses the much weaker MPPE cipher, whereas the newer WireGuard protocol primarily uses ChaCha20.

What is a VPN server?

A VPN server is what enables users to use the VPN service in the first place. It is a combination of VPN hardware, such as physical servers stored in physical locations, and VPN software. The top providers have hundreds or even thousands of servers scattered across the globe. The further a VPN server is from the user’s real location, the worse the performance will be, so servers in various locations are important for better performance.
On top of that, the more locations a provider has servers in, the more virtual locations a user can connect to without actually having to move. Some providers also use diskless, RAM-only servers. These are servers that have no external storage, and any data on them gets wiped clean with every server reboot. VPN providers choose RAM-only servers to ensure complete user privacy and comply with their no-logs policies.

What does a VPN server do?

A VPN server forwards your internet traffic to the destination server and returns the response to you. When you connect to a VPN server, your IP address changes, and so does your virtual location. Thus, the websites that you visit will assume that you’re based in the VPN server’s country. This is especially useful for bypassing geo-restrictions and various other content blocks and internet censorship. By contrast, if you’re not connecting through a VPN server, the owner of any website you visit will know your real IP address and your location. You may want to avoid this for privacy reasons, as well as because of content restrictions: some websites and services are available only in specific locations, or have local versions with different content.

What is a VPN client?

A VPN client (or a VPN app) is the software on your device that communicates with a VPN server, establishes the connection, and encrypts data.

How does a VPN client work?

A VPN app (or client) is where you control your VPN experience: which server to connect to, which tunneling protocol to use, which features to activate, etc. Most VPN service providers have apps for Windows, macOS, Android, iOS, Linux, Amazon Fire TV, and other devices and operating systems. That said, you can also use a VPN without a custom VPN app. All major operating systems offer VPN functionality in some form. For example, you can set up a VPN connection through your networking settings on Windows.
You can also set up a VPN client on your wifi router by following instructions from your VPN provider.

What is a VPN protocol?

The primary function of a VPN protocol, or tunneling protocol, is to establish a safe tunnel between your device and the VPN server. When a VPN client connects to a VPN server, it creates a tunnel to send data through. The protocol used to create this connection determines how your data is sent through the network. Some protocols are more secure, some are faster, some are better on mobile devices or older PCs, some are better at bypassing firewalls, and some are just outdated.

Common VPN protocols

Most VPN providers didn’t develop the protocols themselves but merely implemented the technology in their apps. Here are the most common protocols that you will find in most VPN clients:

IKEv2 – stands for Internet Key Exchange version 2. It mainly handles request and response confirmations. For authentication, IPSec is often used together with IKEv2 (IKEv2/IPSec). This protocol is very efficient on an unreliable connection, because IKEv2 effectively reestablishes the tunnel after a connection loss. It’s also one of the fastest and most used tunneling protocols on mobile devices, because it can easily switch between wireless and cellular connections, and vice versa.

OpenVPN – by far the most common tunneling protocol in desktop apps. This is an open-source protocol based on OpenSSL. It comes in two types – TCP and UDP.
- UDP is the User Datagram Protocol. It is much faster because it doesn’t require the recipient to acknowledge receipt or ask for data to be resent. This means less verification of data integrity, which allows for more rapid exchanges, hence better speeds.
- TCP is the Transmission Control Protocol. It performs multiple data verifications, so the processing time may be slower, limiting your internet speed.

Use UDP on networks you can trust, while TCP will be better on public wifi hotspots.

L2TP/IPSec – On its own, L2TP doesn’t provide any encryption. Its job is request and response confirmations.
Encryption enters the arena with IPSec, which is often used in conjunction with L2TP. There are many discussions about whether this protocol is secure, because it was co-developed with the NSA. The Edward Snowden leaks seemed to imply that the NSA may have backdoors to access L2TP/IPSec traffic.

WireGuard – the next generation of tunneling protocols. It uses fewer lines of code, making it easier to audit, and squeezes the most out of your device’s processing power. It’s ideal for mobile devices and slower computers, has up-to-date encryption built in, and offers reliable connections. WireGuard gives the best performance of any current VPN tunneling protocol.

SSTP – Secure Socket Tunneling Protocol. Created by Microsoft, this protocol is not exclusive to Windows and provides a high level of encryption. While SSTP is very capable, there are concerns that Microsoft may have backdoors to access SSTP traffic.

PPTP – Point-to-Point Tunneling Protocol. Developed in the late ’90s, it was the first to become widely available. This protocol relies on outdated encryption, which has become vulnerable to brute-force attacks as computing power grew. As such, few VPN service providers still offer this protocol.

Proprietary VPN protocols

Some VPN service providers have developed their own tunneling protocols:

Catapult Hydra – developed for the Hotspot Shield VPN service. The company claims that this protocol allows the service to achieve much better connection speeds than standard tunneling protocols. Whether due to Catapult Hydra or other reasons, Hotspot Shield has always been among the fastest VPNs.

NordLynx – only available on NordVPN. NordLynx is a modified version of WireGuard, solving potential security issues while keeping the performance intact.

Lightway – only available on ExpressVPN. It uses wolfSSL, an open-source implementation of Transport Layer Security (TLS). Its goal is to be as lightweight as possible, aiming for ease of maintenance and high performance.
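The UDP-versus-TCP trade-off noted under OpenVPN above can be seen at the socket level. The loopback sketch below (illustrative only, not a VPN) shows the practical difference: a UDP datagram is simply fired at the destination, while the TCP variant must first complete a connection handshake before any data moves, the extra bookkeeping that costs speed but guarantees delivery.

```python
import socket

# Compare the two transports OpenVPN can ride on, using loopback sockets.

def udp_roundtrip(payload: bytes) -> bytes:
    """UDP: connectionless; the datagram is sent with no handshake and
    no delivery acknowledgement."""
    server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    server.bind(("127.0.0.1", 0))          # OS picks a free port
    client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    client.sendto(payload, server.getsockname())
    data, _ = server.recvfrom(1024)
    client.close()
    server.close()
    return data

def tcp_roundtrip(payload: bytes) -> bytes:
    """TCP: a three-way handshake establishes the connection first, and
    every segment is acknowledged (the verification that slows it down)."""
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.bind(("127.0.0.1", 0))
    server.listen(1)
    client = socket.create_connection(server.getsockname())  # handshake
    conn, _ = server.accept()
    client.sendall(payload)                # delivery is acknowledged
    data = conn.recv(1024)
    for s in (client, conn, server):
        s.close()
    return data

assert udp_roundtrip(b"fast") == b"fast"
assert tcp_roundtrip(b"reliable") == b"reliable"
```

On a lossy link the difference shows: the TCP payload would be retransmitted until acknowledged, while a lost UDP datagram is simply gone unless a higher layer notices.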
What to look for when choosing a VPN?

VPN services are not made equal. Some have more features and better security measures. Others have completed third-party audits that add credibility to their transparency claims. When choosing a VPN service, you’re making a conscious decision to trust a company with your data. The least you can do is invest time in some research. Here are a few things to look out for:

#1 Reputation

Even if you’re just looking for a VPN to unblock Netflix, the service’s reputation is essential. Your privacy is important and you should never trade it away. Unfortunately, it can be challenging to know what VPN services are up to behind closed doors. Yet if a VPN provider has been caught red-handed giving away user data or bending the truth about their services – that’s a good way to know which VPN not to choose.

#2 Jurisdiction

Where a VPN operates from matters. Some countries require VPNs to collect user data, whereas others have harsh copyright laws. As a user of such a VPN, you run the risk of letting your data get into the wrong hands. The Edward Snowden leaks shed light on the scope of surveillance around the globe. If you think that living outside of the US makes you safe from the NSA and you’ll have nothing to worry about, think again. The surveillance alliance known colloquially as the 14-Eyes shares intelligence data on each other’s citizens. And they’re not even the worst of the bunch.

#3 Anonymous payment options

You are as anonymous as your method of payment. Paying with a credit card leaves records not only on your banking statement but in the company’s accounting logs. It never hurts to check whether your chosen service supports payments via cryptocurrency, prepaid cards, or other options. As a rule of thumb, the less personal information you provide, the better the service is for your privacy.

#4 Technical specifications

Encryption, reliable tunneling protocols, leak protection, a kill switch – all of these are necessary for a secure VPN.
The provider can be very transparent, but if they don’t have the tech to provide privacy and security, you’re going to have a bad time. As there are hundreds of VPN services to choose from, picking the best one might seem like a daunting task. Luckily, there are a few ways to distinguish the good from the bad. Here’s what you need to look for in a quality VPN:
- Tunneling protocols. Not all VPN protocols were created equal. Some, like PPTP, are downright outdated. So, when choosing your VPN, look for fast and secure protocols like OpenVPN, IKEv2, and WireGuard.
- Server list. It goes without saying that you should pick a VPN that offers servers in the country you want to connect to. However, broader coverage is always better in general, as the servers won’t be as crowded. You should also look for servers near you for a faster connection.
- Logging policy. Always read the logging policy of the VPN you’re about to download. Look for a service that doesn’t keep any personal logs. It’s also better when the logging policy has been audited by an independent third party.
- Streaming and torrenting. Not all VPN services are able to unblock streaming platforms like Netflix. Similarly, not all VPNs support torrenting. Keep this in mind when looking for your perfect VPN – usually, reading a couple of reviews will give you the gist of whether the VPN will suit your needs.
- Apps and devices. Whether you use Windows, macOS, iOS, Android, or Linux, it’s a good idea to check whether a VPN offers a good application for your operating system. Some VPNs also support routers, smart TVs, and gaming consoles.

If you find it too difficult to pick a VPN yourself, you can check out our list of the best VPNs or simply download one of our top choices like NordVPN, Surfshark, or IPVanish.
How to set up a VPN connection

Setting up a VPN connection can be either very easy or relatively difficult – it depends on whether you are using a VPN app or attempting to manually configure VPN files on your chosen device. Using a VPN app is usually pretty simple and straightforward, while manual setup and installation on routers or devices that don’t natively support VPNs requires some technical knowledge.

Set up a VPN on your device

Setting up a VPN on a device such as a Windows or Mac computer, an iPhone, iPad, or Android device is very simple, because most VPN providers have dedicated apps for them. Here’s how to set up a VPN on your device:
- Purchase a VPN subscription. NordVPN has apps for all major operating systems and some other devices as well.
- Download the app and follow the installation instructions.
- Open the app and connect to a VPN server.

Set up a VPN manually

Most devices have built-in VPN clients that you can configure to your liking. However, they might not support certain tunneling protocols such as WireGuard or OpenVPN, and you might not be able to choose from a variety of locations. Besides that, setting up a VPN manually requires some additional knowledge, and built-in VPN clients might not be as secure as third-party VPN apps. Nevertheless, if you’d like to try this, we suggest taking a look at our extensive guide on how to manually set up a VPN on different devices.

Install a VPN on your router

Installing a VPN on your router is the best way to set up a VPN connection for devices that don’t support VPNs, or if you want to protect your whole home network. The process of setting up a VPN on a router is more complicated than manually setting it up on a device that already supports VPNs. Besides, not all routers support VPNs either. You won’t be able to install a VPN on most ISP-issued routers or older models.
As setting up a VPN on your router requires some technical knowledge and focus, we suggest you take a look at our guide on how to install a VPN on your router.

Risks of using a VPN

There are no perfect cybersecurity products, and using a VPN comes with some risks as well. Here are some potential VPN vulnerabilities that you should be aware of:
- Some VPN services still use outdated protocols with known vulnerabilities. That is why most leading providers have phased out the Point-to-Point Tunneling Protocol (PPTP).
- Hackers can impersonate VPN servers and intercept your data if your VPN is insecure.
- Your real IP address can leak if a VPN server goes down while you’re connected, compromising your privacy. Top VPNs offer kill switch features that disable your internet connection when the VPN drops.
- Your data is probably being sold if a VPN service is free. Think about it: the maintenance of server fleets costs money. Hence, when the service is free, the money has to come from somewhere. In many cases, the VPN is collecting your data and selling it off to third parties.
- Some VPNs log user data, even though the logging may not be extensive. There have been instances of VPN providers handing over user data to governments when asked. That’s why it’s important to make sure that your chosen provider is a no-logs VPN.

Alternatives to VPN

There are other tools out there that offer similar solutions. Which VPN alternatives work for you depends on what functionality you need. If you need to quickly unblock some site, it might not make much sense to pay for a top-notch VPN service. Even when a VPN is an appropriate solution, you might get identical results using other options. You can find workarounds to various problems by using VPN alternatives. Here are some of them:

The Tor browser is an open-source browser and a network that offers anonymity by directing your traffic through a network of volunteer nodes. The traffic is encrypted, so no one along its journey can view it.
To reach the desired website, your connection jumps through several of these nodes (also called relays or simply “servers”), making tracking your activities difficult. In some ways, Tor is a free alternative to VPN networks, but it has downsides. Firstly, the nodes your traffic goes through are often just servers hosted on volunteer users’ PCs. This, together with the fact that your connection goes through at least three nodes chosen at random, means the speed can never compare to a top-tier VPN. Additionally, Tor has potential security issues. An experiment in 2007 showed how compromised exit nodes could be used to intercept traffic. Having enough of these nodes on the network may even lead to deanonymization. The Tor Project continuously monitors for compromised relays and blacklists them, but it can’t realistically catch them all.

A proxy allows you to do the same thing as a VPN – appear as though you’re connecting from a different location. Proxy services work by connecting you to the internet through an intermediate server. They’re great if you want to access a blocked website at school, for example. The main difference from a VPN is that most types of proxies don’t use encryption, meaning they’re not as secure. Additionally, proxies work at the app level – you can set a SOCKS proxy up on your browser or torrent client, but it won’t protect any apps that don’t have the proxy set up. Some VPNs include proxy services as part of the package. Learn more: Proxy vs. VPN

VPN browsers integrate VPN functionality within the browser so that you can surf the web without being tracked. For example, the Aloha browser even uses VPN tunneling protocols like IKEv2 and IPSec. The downside is that a VPN browser only protects your browser traffic. Everything else that leaves your computer can be seen and traced back to you.

The VPN-related terminology can be hard to understand if you’re not already familiar with it.
Here are some of the terms you may encounter when looking for a VPN or using one.

Dedicated IP (static IP)

Each time you connect to a VPN server, you will get a different IP address. These IPs are shared among many users, and they are known as dynamic IP addresses. There are benefits to having a shared IP address. For one thing, it makes it a lot harder to link you to your online activities. However, some things only work if your IP address stays the same whenever you connect. To solve the issue, some VPN service providers offer dedicated IP addresses for an additional fee.

DNS leak protection

A DNS leak is a situation that occurs when your traffic goes through a VPN server, but your ISP’s DNS still resolves your DNS queries. This is primarily due to issues with the Windows operating system. Some VPNs have features built into their apps to prevent this from happening.

Kill switch

If you get disconnected from a VPN server, your device will try to reconnect via your regular connection. That means the website you’re visiting now knows your real IP, while your ISP knows what website you’re on. The kill switch is a feature that solves this type of leak by “killing” your internet if the VPN drops.

Military-grade encryption

This phrase usually describes AES-256, the industry-standard data encryption cipher.

Multi-hop (double VPN)

The multi-hop or double VPN feature lets you connect through two or more VPN servers instead of one. It significantly increases security at the cost of performance.

No-logs policy

A “no-log”, “no-logs”, or “no logging” policy is the VPN provider’s promise not to store any data associated with your online activities. In reality, it’s often a “some logs” or “no activity logs” policy, as VPNs may keep track of timestamps of when you connect to a VPN server and other anonymous data. In recent years, top VPN services have been asking third-party companies to audit their no-log policies, making it the closest thing users have to proof that the providers don’t keep track of their data.
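The kill switch mentioned above boils down to a simple policy: if the tunnel drops, block all traffic rather than fall back to the unprotected connection. Below is a minimal sketch of that decision logic; both callbacks are hypothetical stand-ins for a real tunnel check and firewall rule, not any vendor’s API.

```python
from typing import Callable

def kill_switch_check(tunnel_up: Callable[[], bool],
                      block_all_traffic: Callable[[], None]) -> str:
    """One iteration of a kill-switch loop: allow traffic only while the
    VPN tunnel is alive; otherwise cut the connection entirely so no
    packet leaks out over the regular (unencrypted) route."""
    if tunnel_up():
        return "traffic allowed through the tunnel"
    # e.g. install a firewall rule dropping everything non-VPN
    block_all_traffic()
    return "internet blocked until the tunnel returns"

events = []
print(kill_switch_check(lambda: True, lambda: events.append("blocked")))
print(kill_switch_check(lambda: False, lambda: events.append("blocked")))
```

A real client runs a check like this continuously (or hooks tunnel-state events from the OS), which is why the leak window between the VPN dropping and the block taking effect is the property reviewers test.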
Simultaneous connections

A VPN subscription usually lets you use the service on several devices at once. This lets you install the VPN on all your smart devices or share a subscription among friends and family. The number of simultaneous connections allowed ranges from one to unlimited.

Split tunneling

You may want to use a VPN for some online activities while not using it for others. For example, suppose you use online banking. In that case, your VPN connection may trigger security measures put in place to protect users against suspicious logins. For cases like these, VPNs offer the split tunneling feature. In your VPN app, you can specify which websites or apps can bypass the encrypted tunnel and connect directly. That way, you can stay protected with a VPN when it counts, but route your Steam game downloads through your ISP to make them faster.

Shadowsocks

Aside from the regular tunneling protocols, you may also find something called Shadowsocks. It stands in a league of its own – an open-source encryption protocol project for proxies. First developed to defeat the Great Firewall of China, it disguises your traffic to look like a regular HTTPS exchange. This makes it harder to detect (and block) than looking for signs of OpenVPN usage.

Stealth mode (obfuscated servers)

The obfuscated servers feature has many names and different implementations, but the idea is similar to Shadowsocks. Stealth mode is used to scramble regular VPN traffic and add an extra layer of encryption to the already encrypted VPN traffic, making it difficult to detect even by advanced methods like Deep Packet Inspection (DPI).

Tor over VPN (onion over VPN)

Several VPNs offer an integration with the Tor network for maximum security. This puts so many layers between you and the destination server that finding out what you’re doing is practically impossible. However, your connection speed will suffer significantly.

Are VPNs legal?

Yes, VPNs are legal in most countries.
However, some countries have laws that limit VPN usage or ban VPNs altogether, though this is rare. And while using a VPN is generally fine, using one to commit a crime is illegal.

Can a VPN steal my data?
Yes, in theory a VPN provider can steal your data. However, many top VPNs operate under no-logging policies, meaning they don't collect your information. (And if a provider does, it probably isn't worth using anyway.)

How much does a VPN cost?
A monthly VPN subscription costs between $5 and $12 on average, and an annual subscription works out to between $3 and $8 per month. Most VPN prices depend on the duration of your subscription: if you subscribe for longer periods, you pay less per month, and the longer the subscription period, the lower the price.

Can a VPN see my passwords?
No, a VPN most likely can't see your passwords. That would be possible only when a website uses plain HTTP, so you should avoid typing your login credentials on such websites anyway. Luckily, most websites use HTTPS, which encrypts your data in transit and makes stealing your password that way practically impossible.

How does a VPN increase your security?
A VPN increases your security by hiding your real IP address, hiding from your ISP the IP address of the website or service you are using, and securing your connection with encryption. These measures make a VPN one of the best cybersecurity and online privacy tools.

Can you be tracked with a VPN?
It's very difficult to track a person who uses a VPN. However, there are ways of tracking your online activity even when you use one: cookies, digital fingerprinting, DNS leaks, malware, and doxxing. So while a VPN is an excellent privacy tool, it does not completely eliminate tracking risks.

Is a VPN worth it?
Yes, a VPN is worth having, especially if you value your privacy and security while browsing, torrenting, or doing anything else that requires an internet connection.
Besides that, a VPN can also help you bypass content restrictions, access blocked websites, and avoid ISP speed throttling.

Is a VPN the same as wifi?
No, a VPN is not the same as wifi, even though they work in similar ways. Wifi simply connects your device to a local router that is in turn connected to the public internet, while a VPN connects you to a server through an encrypted tunnel and lets you use that server to access websites and apps.
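The split tunneling feature described above boils down to a per-destination routing decision. Here is a hedged sketch of that logic; the bypass list and domain names are invented for illustration, and real clients typically match apps or IP ranges rather than just domain suffixes.

```python
# Sketch of a split-tunneling decision with a simple per-domain
# bypass list. BYPASS_DOMAINS and the domains below are hypothetical.

BYPASS_DOMAINS = {"steampowered.com", "mybank.example"}

def route_for(domain):
    """Return which path a connection should take:
    'direct' for bypassed domains (and their subdomains),
    'vpn' for everything else."""
    parts = domain.lower().split(".")
    # Match the domain itself or any parent domain on the bypass list.
    for i in range(len(parts) - 1):
        if ".".join(parts[i:]) in BYPASS_DOMAINS:
            return "direct"
    return "vpn"

print(route_for("store.steampowered.com"))  # direct: game downloads via ISP
print(route_for("example.org"))             # vpn: everything else stays protected
```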
In this article, I want to talk about artificial intelligence (AI) and how it is transforming the training and education sector. Before we get to that part, let's quickly take a look at how formal and informal education has evolved side by side throughout history.

Evolution of formal and informal education
Formalized education has existed for thousands of years. Greek philosophers used to deliver lectures and teach their students long before the time of the Romans, hundreds of years before the Julian calendar was even introduced. If we dig deeper, formal education can be traced all the way back to Babylonian times. Informal education has existed throughout history and is much older than formal education. Even before humans invented language and writing, they communicated with each other and transferred useful information to their fellow human beings. If that weren't the case, humanity would never have been able to move out of the caves to start living in tribal societies and so on. Educational technology evolved from the quill and animal skins to the printing press and now the internet. Perhaps the most significant development in EdTech before the internet was the printing press. Before it, if you needed multiple copies of a book, you had to rely on handwritten copies. The printing press enabled mass production of written material and changed the game in the education sector forever. From 1440 to 1980, improvements were made to the printing press, the typewriter was introduced, and distance learning was made possible to a certain extent through radio and television. The introduction of the internet changed the world in the 1980s, and distance learning became common through digital means in the 1990s, as a number of schools, colleges, and universities began making use of online education. It is around this time that the term elearning became common and eventually replaced distance learning.
According to the World Economic Forum, we are going through the fourth industrial revolution.
- The first industrial revolution was all about steam power
- The second was about electricity
- The third was about computers
- And now a fourth industrial revolution is upon us, characterized by a fusion of physical, digital, and biological technologies

Artificial intelligence is driving the fourth industrial revolution. It is already supporting several industries, such as healthcare, the automotive industry (where it powers self-driving cars), and finance (where software advises on which stocks to invest in), and now education and training.

So, what is artificial intelligence?
The term artificial intelligence was coined by the Stanford researcher John McCarthy in 1956. Since then, it has been a popular sci-fi subject. However, it is becoming a bigger deal now than it ever was, given the developments in machine learning and deep learning. Basically, the concept of artificial intelligence is based on the notion of building machines capable of learning, thinking, and acting like humans. In short, code that continues to learn and evolve is contributing to the revolution.

Impact of artificial intelligence on elearning
Just like the printing press and the internet, artificial intelligence will prove to be another game changer in the elearning industry. In fact, I believe that AI will be the highlight of this century, not only in EdTech but in other sectors as well. AI is going to play a critical role in competency-based and adaptive skill building. It will define how the student interacts with the system and learns, and it will assess not only what a student knows now but also which topics they still need to master to reach the learning outcome. AI will automate learning feeds and recommendations based on people's competency, automated feedback on questions across various topics, career paths, and career mapping.
It can provide a great level of personalization and customization of training and deliver a unique learning experience for each individual.

Possibilities for the future
I came across an interesting article recently where the author talked about how data in the education sector still exists in silos, and how the consolidation of that data will allow machine learning to do its magic and make predictions and recommendations like never before. While AI is currently being used to curate educational material and make recommendations, the opportunities presented by this technology are limitless. If self-teaching artificial intelligence can learn to sift through data and learn new skills, it can also learn to sift through text, audio, and video answers to open-ended questions. It is easy to grade students on multiple-choice, true-and-false, and fill-in-the-blank questions, because these questions have a pre-determined correct answer and anything outside of it is wrong. When it comes to open-ended questions, you still need a human to grade the students on their performance. Artificial intelligence can be used to automate that part of the process, essentially removing the need for a human teacher to grade the students. Artificial intelligence like Amazon's Alexa is already being used as a home assistant and can also be used to satisfy the curiosity of a lifelong learner. Not only can AI tutors change the role of teachers; there have also been some exciting new developments with AIs that can create curricula based on pre-provided course outlines. For example, at QuickStart, we have integrated artificial intelligence with our LMS, turning it into a cognitive learning platform. The AI notices the courses the learner is interested in, crawls the public domain for supplementary educational material, and makes it available to the learner along with the official courseware.
It equips the learner with everything they need to combine informal learning with formal learning. And this is only the first iteration of the AI. We are planning to develop the AI to study the data available on the internet so it can recommend new courses as well as career paths to learners.

Impact on modes of learning
When we think of multi-modal elearning, people usually prefer either self-paced courses or instructor-led courses. The preference depends on the type of learner you are and the kind of availability you have. At QuickStart, we deal in hard skills, and most of our courses are on the technical side of things. This means that people need to interact with a human instructor while the training is being delivered, which is why instructor-led training is a clear winner among our offerings. When it comes to such training, availability is one of the biggest issues. You can take self-paced courses any time, but for instructor-led courses, you need to be available at a certain time and date so that you can attend the class, albeit virtually, as it happens. An AI tutor that teaches the class, interacts with the student, answers questions in real time, makes recommendations, tracks and grades the student's performance, and shares progress reports could prove to be a game-changing implementation of artificial intelligence, as it would take two different modes of learning and bring the best of both worlds (instructor-led and self-paced) to the learner. With the kind of developments being made in the EdTech sector, especially in terms of artificial intelligence, the age of AI tutors is closer than you think. In conclusion, the code that learns is a killer application, and it will create a huge impact in the learning and development industry in the years to come.
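The automated grading of open-ended answers discussed above can be illustrated with a deliberately naive sketch that scores a student answer by textual similarity to a model answer. Real systems would use trained language models; this stdlib-only toy (the function name and threshold are invented) only shows the shape of the idea.

```python
# Naive auto-grading sketch: score an open-ended answer by string
# similarity to a reference answer. Purely illustrative; production
# graders use semantic models, not character-level similarity.
import difflib

def grade(student_answer, model_answer, pass_threshold=0.6):
    """Return (score in 0..1, pass/fail) by similarity to a model answer."""
    ratio = difflib.SequenceMatcher(
        None, student_answer.lower(), model_answer.lower()
    ).ratio()
    return round(ratio, 2), ratio >= pass_threshold

model = "the printing press enabled mass production of written material"
score, passed = grade("printing press allowed mass production of writing", model)
print(score, passed)
```

A paraphrased but correct answer scores lower than it deserves here, which is exactly the gap that machine-learning graders aim to close.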
IBM is planning to debut a new collaborative effort with material manufacturers to deliver three-dimensional computer chips far more complex than the products on the market today. The company stated this week that it is looking forward to designing microchips comprising 100 chip layers stacked together. According to IBM, stacking chips will eventually make all sorts of electronic items run faster and more power-efficiently. It is noteworthy that three-dimensional chips have already made their way into some state-of-the-art products. However, those chips suffer from some serious drawbacks, such as an expensive production methodology and overheating. "Today's chips, including those containing 3D transistors, are in fact 2D chips that are still very flat structures," explained Bernie Meyerson, a vice president of IBM Research, in a statement announcing the partnership between Big Blue and 3M, as reported in The Register. "We believe we can advance the state-of-art in packaging, and create a new class of semiconductors that offer more speed and capabilities while they keep power usage low," he added.
Crypto security between clients and servers on the Internet often relies on Transport Layer Security (TLS) protocols. Today's protocols evolved from Secure Sockets Layer (SSL), and many people still use the acronym SSL when referring to TLS. Martin Gardner, the celebrated editor of the mathematical games column in Scientific American in the later 20th century, wrote a column about the RSA cipher (also available paywalled). Gardner's column included a challenge from Rivest, Shamir, and Adleman to crack a sample ciphertext encoded using RSA, for a $100 prize. This was eventually called the RSA-129 Challenge, since the public key contained 129 decimal digits. A team of researchers claimed the prize in 1994 after brute-force cracking the RSA public key using over 1,600 cooperating computers. Steven Levy's book Crypto provides an entertaining history of public-key cryptography and SSL through 2001.

This video describes how the RSA cipher is used to share a secret in the TLS protocol.
Video notes: cys.me/vid/c10
Video #11 describes web server authentication using certificates: vimeo.com/208069671
The previous video describes the crypto used to publish a DVD: vimeo.com/200426387
See the entire Cryptosmith series in its album: vimeo.com/album/4229550
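To make the RSA idea concrete, here is a toy example with deliberately tiny primes. Real TLS key exchange uses keys of thousands of bits plus padding schemes; these numbers are purely illustrative (small enough to crack by hand, which is the point of the RSA-129 story above at scale).

```python
# Toy RSA with tiny primes, purely to illustrate the math behind
# sharing a secret. Never use numbers this small outside a demo.
p, q = 61, 53
n = p * q                  # modulus, part of the public key
phi = (p - 1) * (q - 1)    # Euler's totient of n
e = 17                     # public exponent, coprime with phi
d = pow(e, -1, phi)        # private exponent: e*d ≡ 1 (mod phi)

secret = 65                            # a "secret" small enough to fit mod n
ciphertext = pow(secret, e, n)         # encrypt with the public key (n, e)
recovered = pow(ciphertext, d, n)      # decrypt with the private key (n, d)

print(n, d, ciphertext, recovered)     # 3233 2753 2790 65
```

Brute-forcing `d` here means factoring 3233 back into 61 and 53, which is trivial; for a 129-digit modulus it took 1,600 machines, and for modern 2048-bit keys it is infeasible.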
IT Security Policy

IT Security Policy: An Overview
An organization’s IT security policy involves procedures and rules that enable employees and other stakeholders to safely use and access digital resources and assets. It is important to note that an information technology security policy is far more than a set of strategies. It is also a reflection of the company’s culture, and buy-in from everyone in the organization is necessary for its successful execution. For an IT security policy to be effective, it has to be documented and made available to people at all levels of the organization. The document should outline important elements, such as:
- The high-level and granular objectives of the policy
- The policy’s scope
- The goals of the policy, both for the organization as a whole and for the specific departments and assets it is designed to protect
- Any responsibilities related to making sure the organization complies with internal measures and governmental legislation

Why Do Enterprises Need an IT Security Policy?
The importance of an IT security policy cannot be overstated. Enterprises need it because it clearly outlines everyone's responsibility regarding the protection of specific processes and assets. It serves as a central document that anyone can refer to—a cybersecurity compass that provides direction, in a sense. In addition, because the company’s executives accept and endorse the policy, it represents a commitment at the highest levels to the security of the organization's IT infrastructure. In this way, the policy serves as both a technical reference point and a cultural artifact—tangible evidence of the organization’s commitment to cybersecurity.

IT Security Policy Key Components
The key components of an IT security policy include confidentiality, integrity, and availability, also known as the CIA triad, and authentication.
Confidentiality
Confidentiality involves preventing information from being stolen or accidentally made available to unauthorized people, whether from within or outside the organization. Threats can be internal, too, and limiting employee access to specific areas of the company’s infrastructure prevents bad actors from abusing their privileges. At the same time, it limits the possibility of people accidentally divulging information, changing a setting, or otherwise impacting the integrity of data or systems.

Integrity
Data integrity refers to how accurate the data is and whether it is changeable only by those with the appropriate authorization. By maintaining a high level of integrity, your IT team ensures that your data is usable, both by individuals and by systems. To maintain stringent integrity standards, limiting the number of people who can access your data is essential. In other words, a system characterized by integrity is very unlike Wikipedia or Quora, which invite people to access and contribute data. With Wikipedia, for example, it is easy for nearly anyone to modify content, and perhaps you have seen the results: inaccuracies, inconsistencies, and even fake information included as a joke. An IT security policy takes the opposite stance: it minimizes the number of people and systems that can alter data.

Availability
Availability, in terms of an IT security policy, refers to whether or not data can be accessed by the appropriate people or systems when and how they need it. At times, it can be difficult to balance availability with confidentiality, especially because as you boost confidentiality, you have no choice but to limit availability. Availability in terms of digital systems needing to access data is just as important, if not more so. For example, an application usually depends on a database that holds information.
In some cases, this data is highly sensitive, and if it were allowed outside the organization's digital boundaries, there could be considerable damage, such as fines resulting from data exposure. Your IT security policy has to make this data available to the application without potentially exposing it to bad actors.

Authentication
Authentication involves verifying that anything that claims to be true is, in fact, true. A simple example is a user’s identity as they try to log in to a system. For instance, if someone steals the username and password of an authenticated user, they can try to log in using those credentials. But your IT security policy may require multi-factor authentication (MFA) for that segment of your network. If that is the case, the malicious actor will need more than just the username and password. And because they may not be able to provide additional authentication credentials, such as a fingerprint or facial profile, you may be able to thwart their attack.

What Are the Three Types of IT Security Policy?
The three types of IT security policy are:
- Organizational: This focuses on creating a company-wide blueprint that outlines policies for all of the organization's digital infrastructure.
- Issue-specific: An issue-specific policy is designed around a specific issue, such as who can make configuration changes to the organization’s firewalls.
- System-specific: A system-specific policy aims to protect a particular system, such as the backend of the company’s website, making sure only authorized people can access it.

IT Security Policy Best Practices
Here are some of the most effective IT security policy examples and best practices:
- Use the COBIT framework: The Control Objectives for Information and Related Technologies (COBIT) framework is designed to facilitate how IT systems and tools are managed, implemented, and improved.
An effective IT security policy leverages several of COBIT's principles, such as end-to-end enterprise coverage and employing integrated frameworks.
- Have a strict password management policy: Passwords are usually necessary to access important systems, so managing them needs to be a priority. Effective password management involves requiring everyone to use unique, strong passwords, as well as outlining how to change them securely when needed.
- Have an acceptable use policy: An acceptable use policy describes the proper way to use computers, the internet, social media, email servers, and sensitive data. It is best practice to never presume that people know the right ways to access and use data. By including relevant instructions in your IT security policy, you give everyone a central source of truth they can refer to.
- Institute a regular backup policy: A properly executed backup policy can help maintain the resiliency of your organization. Many companies choose to follow what is known as the “3-2-1 rule”: maintain three copies of data, place them on two different kinds of backup media, and keep one backup off-premises so it can be used for disaster recovery.

How Can Fortinet Help?
The Fortinet Security Fabric includes a variety of tools that provide visibility into your IT environments, centralized network and security management, automated incident response, and access to real-time threat intelligence from around the world. The Security Fabric also enables third-party integrations, as well as automated enforcement of your security policies.

What is an IT security policy and its importance?
An organization’s IT security policy involves procedures and rules that help people safely use and access digital resources and assets.

What are the five components of an IT security policy?
The five components of an IT security policy are confidentiality, integrity, authenticity, availability, and non-repudiation.

What are the three types of IT security policies?
The three types of IT security policies are organizational, issue-specific, and system-specific.
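The multi-factor authentication example above, where a stolen username and password are not enough on their own, can be made concrete with a sketch of a time-based one-time password (TOTP) generator, the mechanism behind many authenticator apps. This is a simplified, stdlib-only illustration in the spirit of RFC 6238, not a production implementation; the secret shown is the well-known RFC test key, not a real credential.

```python
# Simplified TOTP (RFC 6238 style): HMAC-SHA1 over a time-step
# counter, dynamically truncated to a short numeric code. Sketch
# only; real deployments handle clock skew, rate limits, etc.
import hmac, hashlib, struct, time

def totp(secret, timestep=30, digits=6, now=None):
    counter = int((time.time() if now is None else now) // timestep)
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                               # dynamic truncation
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

secret = b"12345678901234567890"   # RFC test key; never hard-code real secrets
print(totp(secret, now=59))        # → 287082 (RFC 4226/6238 test vector)
```

A policy might require such a code, in addition to a password, before granting access to sensitive network segments; because the code changes every 30 seconds, a stolen one is useless moments later.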
You’ve probably had a chip credit or debit card for several years at this point. But do you know how that card is different from the old one with just a magnetic stripe? Let’s look at the security of credit cards to understand how cards work to keep your transactions safe.

Magnetic Stripes: The Least Secure
Traditionally, credit cards have stored data in a magnetic stripe on the back of the card. This is an older technology, developed in the 1960s, that’s very similar to how cassette tapes work. There are microscopic magnetic particles on the stripe, which have their magnetism adjusted to write data to the card. Tape readers, like those inside terminals, can pick up this data when you scan the card. While these cards work well enough, they have security flaws. The data contained in the magnetic stripe is not protected by any form of encryption, and it never changes. This means that criminals can employ tactics like skimming, where they install devices inside legitimate scanners and then steal card details from unsuspecting people. By placing a skimming device inside an ATM or gasoline pump, they can read the data on the stripe, clone it onto a duplicate card, and then make fraudulent transactions. In addition to skimming the card details, these schemes usually include a way to steal your PIN to complete transactions. Skimming isn’t the only way that the old card stripes are vulnerable, though. Data breaches can occur through malware attacks, too. One of the most infamous examples of this is Target, which suffered a massive data breach in late 2013. That breach happened when crooks stole credentials to Target’s network, then installed malware that stole credit card details and other information from store terminals. Card stripes carrying all the information needed to complete transactions, in an unprotected form, is clearly a bad idea. Thankfully, a better solution exists.

Chip Cards: Much More Secure
Today, most cards include a chip inside them.
These “chip cards” are properly known as EMV cards (named for Europay, Mastercard, and Visa, the companies that created the standard). Chip cards have been rolling out since the late ’90s across the world, but in the US, they’ve only come onto the scene in the last few years. The biggest security upgrade with chip cards is that they don’t contain all your vital card data in the chip. Instead, when you make a payment with the chip, it generates a one-time code for that transaction. If an attacker were to steal this, they would have a useless number instead of your card details. The terminal can do whatever it needs with this number, including verifying your card with the provider. But it would be nearly impossible for someone who took your card to duplicate the chip. For additional security, many parts of the world use a chip-and-PIN setup. With this, you’re required to type a PIN each time you make a purchase. This hasn’t happened much in the US, though. Here, we still use chip-and-signature in most cases, which only asks you to verify your purchase by signing a slip and comparing signatures. In October 2015, card companies shifted most of the liability for fraudulent transactions to the party that hadn’t implemented chip technology. So if your bank didn’t issue you a chip card, or if a store didn’t take chip cards, they would be liable for fraud. For backward compatibility, most chip cards still include a magnetic stripe on the back, allowing them to work with older terminals.

Contactless Payments and Apple Pay
The US has also been adopting contactless payment cards, which have long been standard in other regions. These cards, and the terminals that accept them, are marked with the contactless symbol, a set of curved waves resembling a sideways Wi-Fi icon. These cards use near-field communication (NFC) to start a transaction without you having to physically insert your card.
While they’re protected in the same way that EMV chips are, contactless payments typically don’t require a PIN or signature. Thus, they’re often limited to small purchases. Finally, mobile payment platforms like Apple Pay provide yet another option, with even more security. Once you add your card to Apple Pay, the service never actually provides it to merchants when you pay. Instead, it provides single-use codes for each transaction, keeping your actual credit card number safe. The other major security advantage of Apple Pay is that it requires you to authenticate purchases using your usual device security. So when you want to buy something in a store, you have to scan your fingerprint or use Face ID to authorize it.

Credit Card Security Isn’t Perfect
Now you know more about what happens in the transactions we make every day. While technologies like EMV cards and Apple Pay have made large strides, credit cards still aren’t bulletproof. You’ll notice that we left out a major area in this discussion: online purchases. While chip cards have greatly reduced in-person fraud, these security measures don’t do anything for online purchases. To buy something with a credit card online, you only need the card information that’s printed right on the back. These are called “card-not-present” transactions, and they make up a large share of credit card fraud today. The best ways to protect yourself from online credit card fraud are to keep your card physically safe and to be careful when entering your card details online. Don’t type your card into a merchant’s website unless you’re certain that it’s genuine. This is just one of the many ways to stay safe when shopping online.
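The one-time transaction codes that chip cards (and Apple Pay) generate can be illustrated with a simplified analogue: a keyed MAC over the transaction details and an ever-increasing counter, computed with a secret that never leaves the chip. This is not the actual EMV cryptogram algorithm, just a sketch of why a stolen code can't be replayed; the key and field names below are invented for the example.

```python
# Simplified analogue of a chip's one-time transaction code. A MAC
# over transaction data plus a counter, keyed by a card-resident
# secret, yields a fresh code per purchase. NOT the real EMV scheme.
import hmac, hashlib

CARD_KEY = b"example-key-stored-only-in-the-chip"  # hypothetical secret

def transaction_code(amount_cents, merchant_id, counter):
    msg = f"{amount_cents}|{merchant_id}|{counter}".encode()
    return hmac.new(CARD_KEY, msg, hashlib.sha256).hexdigest()[:16]

c1 = transaction_code(2599, "STORE-42", counter=1)
c2 = transaction_code(2599, "STORE-42", counter=2)
print(c1 != c2)   # True: same purchase details, different one-time code
```

A skimmer that captures `c1` gains nothing: the next purchase uses a new counter, so the issuer rejects any replayed code, and without `CARD_KEY` the attacker cannot forge the next one. A static magnetic stripe, by contrast, is the same "code" every time.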
Open source communities around the world have been at the forefront of assisting medical researchers, health care professionals and government health agencies with research on the coronavirus responsible for the rapid spread of COVID-19 around the world. "Open" means the developers of a project – whether that be software, a physical device, or research papers – have agreed that the project's product can be freely distributed and redistributed without licensing fees. While open source is most often associated with the software development process the term was coined to define, the distribution approach is being applied in other intellectual property fields as diverse as hardware, research, writing and visual media. Openness has been particularly important to those dealing with the pandemic. Having research results made available under a Creative Commons license, for example, means the information can be freely copied and distributed to all researchers to whom it would be useful. Open source software allows teams of developers to design software to meet specialized needs cheaply for what are essentially small niche markets: software used specifically to administer COVID-19 cases, or software designed to help research labs do work with specialized proteins that might be useful for treating COVID-19. Much of the COVID-19 help from the software-driven open source community has come in the form of hackathons, events in which software coders and developers get together (online instead of face-to-face during the pandemic) to develop software for the common good. According to the nonprofit health data standards-development organization Health Level 7 (HL7) International, there have been at least 20 major COVID-19-focused open source hackathons, sponsored by a wide range of groups that includes MIT, Johns Hopkins University, Microsoft Research, and even the White House. The organization leading the charge on the software front has been the Debian Project.
The organization, which develops the foundational Debian Linux distribution, also develops a specialized distribution called Debian Med as part of its Debian Pure Blend line. (That line releases specialized operating systems designed to meet needs specific to certain industries or users.) Debian Med is focused on medicine and health care, and is available with collections of free software packages that are sorted by categories, called tasks, with each category addressing a different area of medicine. There's a category for medical practice and patient management, for example, as well as separate categories for molecular biology, medical imaging, psychology and so on. When Debian held a special open source COVID-19 Biohackathon in early April, much – but not all – of the work was to increase Debian Med's usefulness on the COVID-19 front, both for researchers seeking to develop treatments or vaccines, and for the health care workers on the front lines in hospitals and clinics around the world. The software packages were designed for everything from medical practice management to the sequencing of RNA. The open source COVID-19 hackathon Debian held in March was so successful that the project is currently holding another COVID-19 Biohackathon, which began on June 15 and will run through June 21. "We considered the outcome a great success in terms of the approached tasks, the new members we gained and the support of Debian infrastructure teams," Andreas Tille, the "initiator" of Debian Med, wrote in a post to the Debian email list. "COVID-19 is not over and the Debian Med team wants to do another week of hackathon to continue with this great success." The hardware open source community, often referenced as the maker movement, has also been hard at work. Makers have made multiple efforts to help supply hospitals and clinics with inexpensive and easy to make medical devices. 
Tom Soderstrom, the IT chief innovation officer at NASA's Jet Propulsion Laboratory, designed three models of washable, reusable, comfortable respirator masks that can be printed on 3D printers at a cost of about $2 each. The designs, 3D printer files, detailed test results, as well as build and test instructions are all available online, and the whole project has been released as open source. Ventilators, essential to treating the worst cases of COVID-19, have also been in short supply, and there are a number of open source projects underway to develop low cost ventilators that can be made from 3D printed parts. Included are some designs that could cost less than $100 to produce, a vital consideration for small clinics in third-world countries, such as the OpenLung Emergency Medical Ventilator that uses a bag valve mask. These ventilators, respirators and hackathons are only a small part of the involvement of various open source communities in fighting COVID-19. In March, Mozilla, the organization behind the open source Firefox web browser, announced the open source COVID-19 Solutions Fund as part of its Open Source Support Program, which grants awards of up to $50,000 each to open source projects responding to COVID-19. In addition, Mozilla is also openly supporting the Open COVID Pledge, an international coalition of scientists, technologists, and legal experts that is calling on companies, universities and other organizations to make their intellectual property temporarily available free of charge for use in ending the pandemic and minimizing its impact.
Our Research: In Our Spare Time... Recently we became interested in the ZigBee wireless protocol, which can be used to create a personal area network based on low-power digital radios. ZigBee is typically used in low data-rate applications that require long battery life and secure networking (based on 128-bit encryption). Because of its low power usage, ZigBee is becoming increasingly popular with home/building automation applications, medical data collection, industrial control applications, and wireless sensor networks. Our research topics include evaluating ZigBee network identification and enumeration, investigating the capabilities of several free open source ZigBee network stacks (ZBOSS, FreakZ, and Freakduino), and testing the security of a few of the commonly-used protocols that ride on top of ZigBee.
Nvidia researchers recently developed a new generative adversarial network methodology intended for generating realistic landscape photos from a segmentation map or rough sketches. And even though it is not yet perfect, it goes a long way toward helping individuals develop their own synthetic landscapes. Initially, the GauGAN model was presented as a tool for helping architects, game designers, and urban planners to rapidly come up with synthetic images. The model was trained on more than one million images, including 41,000 images from Flickr, with researchers saying that it serves as a "smart paintbrush" since it helps in filling in the details on rough sketches. "It's like a coloring book picture that describes where a tree is, where the sun is, where the sky is," Nvidia vice president of applied deep learning research Bryan Catanzaro said. "And then the neural network is able to fill in all of the detail and texture, and the reflections, shadows, and colors, based on what it has learned about real images." In a recent demonstration at the company's GTC conference, the researchers showed how GauGAN works, how it renders images in real time, changing the styling between several seasons, and how water interacted with and reflected the particular landscape. Even though the machine used included a recently unveiled Titan RTX, Catanzaro claimed that running a similar application on a CPU was possible, especially if the image rendering was created on demand. "This technology is not just stitching together pieces of other images, or cutting and pasting textures," Catanzaro said. "It's actually synthesizing new images, very similar to how an artist would draw something." In a research document to be presented at the CVPR conference later in June, the researchers claimed that human testing through Mechanical Turk indicated that its images were preferred over those produced by the SIMS, pix2pixHD, and CRN algorithms.
According to Catanzaro, GauGAN, in comparison to other algorithms, featured a better vocabulary and required fewer parameters. At the close of 2018, a research team that included Catanzaro presented a research paper on forecasting future video frames, particularly for synthesized city images. Nvidia has also utilized generative adversarial networks to develop artificial brain MRI images, to help overcome a shortage of brain images for training networks. Diversity is important to the success of training neural networks, even though medical imaging data is normally imbalanced. "There are so many more normal cases than abnormal cases, when abnormal cases are what we care about, to try to detect and diagnose," said Hoo Chang Shin, a senior research scientist at Nvidia. Nvidia is a Santa Clara, California-based technology company that designs graphics processing units (GPUs), specifically for the professional and gaming markets. The company also designs system-on-a-chip units (SoCs), particularly for the automotive and mobile computing markets. Aside from making GPUs, Nvidia provides scientists and researchers with parallel processing capabilities that enable them to run high-performance apps efficiently.
Just like OSPF or EIGRP, BGP establishes a neighbor adjacency with other BGP routers before they exchange any routing information. Unlike other routing protocols, however, BGP does not use broadcast or multicast to "discover" other BGP neighbors. Neighbors have to be configured manually, and BGP uses TCP port 179 for the connection. In this lesson we'll take a close look at the different "states" when two BGP routers try to become neighbors. Here they are: - Idle: This is the first state, where BGP waits for a "start event". The start event occurs when someone configures a new BGP neighbor or when we reset an established BGP peering. After the start event, BGP will initialize some resources, reset the ConnectRetry timer and initiate a TCP connection to the remote BGP neighbor. It will also start listening for a connection in case the remote BGP neighbor tries to establish a connection. When successful, BGP moves to the Connect state. When it fails, it will remain in the Idle state. - Connect: BGP is waiting for the TCP three-way handshake to complete. When it is successful, it will continue to the OpenSent state. In case it fails, we continue to the Active state. If the ConnectRetry timer expires, then we will remain in this state. The ConnectRetry timer will be reset and BGP will try a new TCP three-way handshake. If anything else happens (for example, resetting BGP), then we move back to the Idle state. - Active: BGP will try another TCP three-way handshake to establish a connection with the remote BGP neighbor. If it is successful, it will move to the OpenSent state. If the ConnectRetry timer expires, then we move back to the Connect state. BGP will also keep listening for incoming connections in case the remote BGP neighbor tries to establish a connection. Other events can cause the router to go back to the Idle state (resetting BGP, for example). - OpenSent: In this state BGP will be waiting for an Open message from the remote BGP neighbor.
The Open message will be checked for errors; if something is wrong (incorrect version numbers, wrong AS number, etc.), then BGP will respond with a Notification message and jump back to the Idle state. This is also the moment where BGP decides whether we use EBGP or IBGP (since we check the AS number). If everything is OK, then BGP starts sending keepalive messages and resets its keepalive timer. At this moment, the hold time is negotiated (the lowest value is picked) between the two BGP routers. In case the TCP session fails, BGP will jump back to the Active state. When any other errors occur (expiration of the hold timer), BGP will send a notification message with the error code and jump back to the Idle state. In case someone resets the BGP process, we also jump back to the Idle state. - OpenConfirm: BGP waits for a keepalive message from the remote BGP neighbor. When we receive the keepalive, we can move to the Established state and the neighbor adjacency will be completed. When this occurs, it will reset the hold timer. If we receive a notification message from the remote BGP neighbor, then we fall back to the Idle state. BGP will keep sending keepalive messages. - Established: The BGP neighbor adjacency is complete and the BGP routers will send update packets to exchange routing information. Every time we receive a keepalive or update message, the hold timer will be reset. In case we receive a notification message, we will jump back to the Idle state. This whole process of becoming BGP neighbors can be visualized, which might be a bit easier than just reading about it. The official name for a diagram that shows the different states and how we can move from one state to another is an FSM (Finite State Machine). For BGP, it looks like this: Now you know about the different states, so let's take a look at some Cisco BGP routers to see what it actually looks like on two routers.
I’ll use the following topology for this: Just two routers in two different autonomous systems. Before I configure BGP, let’s enable a debug:
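The neighbor states and transitions described above lend themselves to a small finite state machine sketch. The following Python snippet is an illustrative simplification rather than a full RFC 4271 implementation — the event names and the subset of transitions are my own shorthand for the cases discussed in the text:

```python
# Minimal sketch of the BGP neighbor FSM described above.
# Only the transitions discussed in the text are modeled; a manual
# reset sends any state back to Idle, and unknown events are ignored.

TRANSITIONS = {
    ("Idle", "start"): "Connect",                       # start event: new neighbor configured
    ("Connect", "tcp_established"): "OpenSent",         # three-way handshake succeeded
    ("Connect", "tcp_failed"): "Active",                # handshake failed
    ("Connect", "connect_retry_expired"): "Connect",    # retry the handshake, stay here
    ("Active", "tcp_established"): "OpenSent",          # second handshake attempt succeeded
    ("Active", "connect_retry_expired"): "Connect",     # fall back to Connect
    ("OpenSent", "open_received"): "OpenConfirm",       # valid Open message received
    ("OpenSent", "tcp_failed"): "Active",               # TCP session failed
    ("OpenSent", "open_error"): "Idle",                 # bad Open -> Notification, back to Idle
    ("OpenConfirm", "keepalive_received"): "Established",
    ("OpenConfirm", "notification_received"): "Idle",
    ("Established", "notification_received"): "Idle",
}

def step(state, event):
    """Return the next state for an event; resets always lead to Idle."""
    if event == "reset":
        return "Idle"
    return TRANSITIONS.get((state, event), state)

# Walk a successful peering from scratch:
state = "Idle"
for event in ("start", "tcp_established", "open_received", "keepalive_received"):
    state = step(state, event)
print(state)  # Established
```

Tracing a failure case works the same way: feeding `tcp_failed` in the Connect state lands in Active, mirroring the second handshake attempt the text describes.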
Secure Sockets Layer (SSL) certificates are what cause your browser to display a padlock icon, indicating that your connection to a website is secure. Although the padlock may soon be hidden from view, certificates aren't going anywhere. Let's start with some definitions and explain some of the terminology. On a strictly technical level, SSL was actually superseded by Transport Layer Security (TLS) many years ago, but the name has stuck around. So, in this article we'll use SSL to refer to the entire SSL/TLS family of protocols. SSL is a security technology for establishing an encrypted link between a server and a client, such as a website and a browser, or a pair of email servers. An SSL certificate is a digital certificate that authenticates a website's identity and enables an encrypted connection. What is the purpose of SSL certificates? SSL certificates serve two important purposes: - Authentication. It authenticates the identity of the computer you are talking to. - Privacy. It ensures that a connection between two computers is encrypted. On the web, SSL makes a connection to a website more trustworthy: You are talking to the website identified in the certificate, and nobody is listening in or tampering with the communication between you. This is particularly important when you are exchanging private information like credit card details or passwords. It does not make the website itself more trustworthy though, only the communication between it and you. Not every website that has an SSL certificate can be trusted. Evil websites, like phishing sites, can have SSL certificates, and you can establish safe, trustworthy connections to evil sites using SSL! Despite lots of (now outdated) advice, SSL certificates and padlocks should not be used as an indicator that a website is "safe". Equally, if a website does not have a certificate, that does not mean it cannot be trusted. How do SSL certificates work?
SSL encryption is possible because of the public-private key pairing that SSL certificates facilitate. A website visitor's browser gets the public key necessary to open an encrypted connection from a server's SSL certificate. The public key is not secret and anyone can see it, so it doesn't matter if it's intercepted. Anyone with the public key can use it to encrypt a message, but only the corresponding private key on the server can decrypt it. Depending on the type of certificate, it also provides a visitor with information about the holder of the certificate: - The domain name the certificate is valid for - Information about the holder of the certificate - Which certificate authority issued the certificate - Issue and expiration date of the certificate - The public key needed for the encryption SSL certificates are generally divided into three types: - Domain Validated (DV) Certificates. DV certificates assert a link between a certificate and a domain. Projects like Let's Encrypt, which provides free certificates and automates the process of creating and installing them, rely on domain validation. - Organization Validated (OV) Certificates. OV certificates assert a link between a certificate and an organization. The body issuing the certificate must validate the legal and physical existence of the organization. - Extended Validation (EV) Certificates. EV certificates assert a link between a certificate and an organization using a more thorough vetting process than OV certificates. Where do you get SSL certificates? SSL certificates are issued by a Certificate Authority (CA). Most browsers will accept certificates issued by hundreds of different CAs. If you are looking for a certificate for your website, one option is to contact your hosting provider. They will usually be able to point you in the right direction, and will probably be able to provide one. Mention what type of certificate you are looking for, since that is important information to start on your quest.
Alternatively, you can automate the process of certificate creation and installation using services like Let's Encrypt. Is an SSL certificate necessary for a website? The majority of the web is now encrypted, making sites without SSL the exception. SSL protects private data in transit, such as credit card details. Even when it isn't protecting sensitive data, it stops attacks that might send you to fake websites, and prevents criminals from injecting ads or malware into your traffic. If that isn't enough for you, there are other reasons to use SSL too. Aside from securing your traffic, having an SSL certificate also helps your website's search engine rankings. The current Google algorithm rewards sites with SSL by giving them higher rankings (or, better put, it punishes sites that do not use SSL). SSL also makes a site look more professional and secure. Depending on the visitor's browser, sites without an SSL certificate may trigger a warning that the site is not secure. An increasing number of browser features require SSL to work. Features like getting a user's location, accessing their microphone, or storing data locally on their device all require that your website supports HTTPS, which relies on SSL. Which makes sense, because you are providing sensitive information to such sites. It would pose a security risk if those features could be tampered with by a person-in-the-middle, or by other network interference or impersonation.
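The public/private key pairing described in the "How do SSL certificates work?" section can be illustrated with a toy RSA example. The primes below are deliberately tiny so the arithmetic is visible; real certificates use 2048-bit or larger keys (or elliptic curves), and this sketch has no security value whatsoever:

```python
# Toy RSA key pair: anyone holding the public key (e, n) can encrypt,
# but only the holder of the private exponent d can decrypt.
p, q = 61, 53                  # two small secret primes
n = p * q                      # public modulus (3233)
phi = (p - 1) * (q - 1)
e = 17                         # public exponent
d = pow(e, -1, phi)            # private exponent: modular inverse of e (Python 3.8+)

message = 42                   # must be smaller than n
ciphertext = pow(message, e, n)    # "encrypt" with the public key
plaintext = pow(ciphertext, d, n)  # "decrypt" with the private key
print(plaintext)  # 42
```

In real TLS, public-key operations like this are used only to authenticate the server and establish a shared secret; the bulk of the traffic is then encrypted with a much faster symmetric cipher.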
Many components are used to enlarge the signals in today's fiber optic transmission systems, like the EDFA (Erbium-Doped Fiber Amplifier). However, in some cases the power level of an optical signal should be reduced. For example, in DWDM (dense wavelength division multiplexing) systems, multiple wavelength channels arriving at a node may pass through different paths and experience different losses; their powers need to be equalized before entering the optical amplifier to get flat gain, since the gain of each channel depends on the power levels of the other channels. In this case, a point reduction in optical signal strength may be required, and a component known as a fiber optic attenuator is usually used. This article gives a basic introduction to the fiber optic attenuator in detail. A fiber optic attenuator, also known as an optical attenuator, is a passive component that is used to reduce the power level of an optical signal by a predetermined factor in a fiber optic transmission system. The intensity of the signal is described in decibels (dB) over a specific distance the signal travels. Fiber optic attenuators are generally used in single-mode long-haul applications. As technologies advanced, many principles came to be used in the operation of fiber optic attenuators to accomplish the desired power reduction. Several operating principles of fiber optic attenuators are introduced here. Gap-loss Principle: in an attenuator using the gap-loss principle, the reduction of the optical power level is accomplished by two fibers that are separated by air to yield the correct loss. The optical signal is attenuated when it passes a longitudinal gap between two optical fibers. This kind of attenuator is also called an air gap attenuator; such attenuators are susceptible to dust contamination and can be sensitive to moisture and temperature variations. In addition, this attenuator is very sensitive to the modal distribution ahead of the transmitter.
Thus, it is recommended to be used very close to the optical transmitter. The farther the air gap attenuator is placed from the transmitter, the less effective the attenuator is, and the desired loss will not be obtained. To attenuate a signal far down the fiber path, an optical attenuator using absorptive or reflective techniques should be used. The gap-loss principle is shown in the following picture. Absorptive Principle: optical fiber has the imperfection of absorbing optical energy and converting it to heat, and this absorptive principle is used in the design of fiber optic attenuators, using material in the optical path to absorb optical energy. This principle is very simple; however, it can be an effective way to reduce the optical signal power. The following picture shows the absorptive principle. Reflective Principle: another imperfection of optical fiber is also used to reduce the signal power, which is reflection. The major power loss in optical fiber is caused by reflection or scattering. The scattered light causes interference in the fiber, thereby reducing the signal power. Using the reflective principle (shown in the picture below), a fiber optic attenuator can be manufactured to reflect a known quantity of the signal, thus allowing only the desired portion of the signal to be propagated. Various principles are applied to reduce the signal power, and various types of attenuators are manufactured to meet different applications. The following part is about the main types of fiber optic attenuators. Fixed and variable attenuators are the main types provided in today's market. Their characteristics are introduced below. A Fixed Attenuator, as the name implies, has a fixed attenuation level. A fixed attenuator can theoretically be designed to provide any amount of attenuation that is desired and be set to deliver a precise power output. Fixed attenuators are typically used for single-mode applications.
They mate to regular connectors of the identical type, for example FC, ST, SC and LC. Variable attenuators allow a range of adjustability, delivering a precise power output at multiple decibel loss levels. Variable attenuators can be divided into two types. One is the stepwise variable attenuator, which can change the attenuation of the signal in known steps such as 0.1 dB, 0.5 dB, or 1 dB. The other is the continuously variable attenuator. This kind of fiber optic attenuator produces a precise level of attenuation, with flexible adjustments. It allows operators to adjust the attenuator to accommodate the changes required quickly and precisely, without any interruption to the circuit. They are also available with various fiber optic connectors. Fiber optic attenuators, important devices for precisely controlling the power level of optical signals, are designed around different operating principles and come in different types. A basic knowledge of their working principles and types helps in selecting the right fiber optic attenuator for the required application.
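The decibel values quoted for fixed and stepwise attenuators map onto linear power through the standard relation P_out = P_in · 10^(−A/10). A quick sketch of that arithmetic (the function names are my own, not from any attenuator datasheet):

```python
import math

def attenuate(p_in_mw, loss_db):
    """Output power after a fixed attenuator: P_out = P_in * 10**(-loss/10)."""
    return p_in_mw * 10 ** (-loss_db / 10)

def loss_db(p_in_mw, p_out_mw):
    """Attenuation in dB between an input and an output power."""
    return 10 * math.log10(p_in_mw / p_out_mw)

# A 3 dB attenuator cuts power roughly in half:
print(round(attenuate(1.0, 3.0), 3))  # 0.501 (mW)

# Cascaded fixed attenuators add in dB: a 0.5 dB step followed by a
# 1 dB step gives 1.5 dB total loss.
cascaded = attenuate(attenuate(1.0, 0.5), 1.0)
print(round(loss_db(1.0, cascaded), 3))  # 1.5
```

The same relation explains why stepwise attenuators are specified in dB rather than in linear power: steps expressed in dB simply add along the link.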
Optical amplifiers are an important technology for optical communication networks. Optical amplifiers are now used instead of repeaters, without the need to first convert the signal to an electrical one. As we know, there are several types of optical amplifiers. Among them, the main amplifier technologies are the doped fiber amplifier (e.g. EDFA), the semiconductor optical amplifier (SOA) and the fiber Raman amplifier. Today, we are going to study and compare the different types of optical amplifiers in this paper. Before the comparison of the different types of optical amplifiers, let's take a closer look at the fiber optic amplifier. In general, a repeater includes a receiver and transmitter combined in one package. The receiver converts the incoming optical energy into electrical energy. The electrical output of the receiver drives the electrical input of the transmitter. The optical output of the transmitter represents an amplified version of the optical input signal plus noise. Repeaters do not work well for fiber-optic networks, where many transmitters send signals to many receivers at different bit rates and in different formats. Unlike a repeater, however, an optical amplifier amplifies the optical signal directly, without optical-to-electrical and electrical-to-optical conversion. In addition, an ideal optical amplifier could support multi-channel operation over as wide as possible a wavelength band, provide flat gain over a large dynamic gain range, have a high saturated output power, low noise, and effective transient suppression. Several benefits of optical amplifiers are the following: - Support any bit rate and signal format - Support the entire region of wavelengths - Increase the capacity of fiber-optic links by using WDM - Provide the capability of all-optical networks, not just point-to-point links OK, after this brief introduction to optical amplifiers, we formally begin today's main topic. As we said above, there are three main types in today's amplifier technology.
Each of them has its own working principle, features and applications. We will describe them one by one in the following paragraphs. Doped fiber amplifier (typical representative: EDFA) The erbium-doped fiber amplifier (EDFA) is the most widely used fiber-optic amplifier, mainly made of erbium-doped fiber (EDF), a pump light source, optical couplers, optical isolators, optical filters and other components. A trace impurity in the form of a trivalent erbium ion is inserted into the optical fiber's silica core to alter its optical properties and permit signal amplification. The working principle of the EDFA is to use the pump light source, which most often has a wavelength around 980 nm and sometimes around 1450 nm, to excite the erbium ions (Er3+) into the 4I13/2 state (in the case of 980-nm pumping, via 4I11/2), from where they can amplify light in the 1.5-μm wavelength region via stimulated emission back to the ground-state manifold 4I15/2. Advantages & Disadvantages of EDFA - EDFA has high pump power utilization (>50%) - Directly and simultaneously amplifies a wide wavelength band (>80nm) in the 1550nm region, with a relatively flat gain - Flatness can be improved by gain-flattening optical filters - Gain in excess of 50 dB - Low noise figure suitable for long haul applications - The EDFA is not small in size - It cannot be integrated with other semiconductor devices Semiconductor optical amplifier (SOA) The semiconductor optical amplifier is a type of optical amplifier which uses a semiconductor to provide the gain medium. SOAs have a similar structure to Fabry–Perot laser diodes, but with anti-reflection design elements at the end faces. Unlike other optical amplifiers, SOAs are pumped electronically (i.e. directly via an applied current), and a separate pump laser is not required. 1. Stimulated emission to amplify an optical signal. 2. Active region of the semiconductor. 3. Injection current to pump electrons into the conduction band.
4. The input signal stimulates the transition of electrons down to the valence band to achieve amplification. Advantages & Disadvantages of SOA - The semiconductor optical amplifier is small in size and electrically pumped. - It can be potentially less expensive than the EDFA and can be integrated with semiconductor lasers, modulators, etc. - All four types of nonlinear operations (cross gain modulation, cross phase modulation, wavelength conversion and four wave mixing) can be conducted. - The SOA can be run with a low power laser. This originates from the short (nanosecond or less) upper-state lifetime, so that the gain reacts rapidly to changes of pump or signal power, and the changes of gain also cause phase changes which can distort the signals. The performance of the SOA is still not comparable with the EDFA: the SOA has higher noise, lower gain, moderate polarization dependence and high nonlinearity with a fast transient time. Fiber Raman amplifier (FRA) The fiber Raman amplifier (FRA) is also a relatively mature optical amplifier. In an FRA, the optical signal is amplified due to stimulated Raman scattering (SRS). In general, the FRA can be divided into the lumped type, called LRA, and the distributed type, called DRA. The fiber gain medium of the former is generally within 10 km. In addition, it requires higher pump power, generally a few to a dozen watts, which can produce gains of 40 dB or even more. It is mainly used to amplify optical signal bands that the EDFA cannot cover. The fiber gain medium of the DRA is usually longer than that of the LRA, generally dozens of kilometers, while the pump source power is down to hundreds of milliwatts. It is mainly used in DWDM communication systems, complementing the EDFA to improve the performance of the system, suppressing nonlinear effects, reducing the required incident signal power, improving the signal-to-noise ratio and providing in-line amplification. The principle of the FRA is based on the stimulated Raman scattering (SRS) effect. The gain medium is undoped optical fiber.
Power is transferred to the optical signal by a nonlinear optical process known as the Raman effect. An incident photon excites an electron to the virtual state, and stimulated emission occurs when the electron de-excites down to the vibrational state of the glass molecule. The Stokes shift, corresponding to the eigen-energy of a phonon, is approximately 13.2 THz for all optical fibers. Advantages & Disadvantages of FRA - Variable wavelength amplification possible - Compatible with installed SM fiber - Can be used to extend EDFAs - Can result in a lower average power over a span, good for lower crosstalk - Very broadband operation may be possible - High pump power requirements; high pump power lasers have only recently arrived - Sophisticated gain control needed - Noise is also an issue Having covered these three types of optical amplifiers, we can compare them as in the following table. Related Article: Differences Between Pre-Amplifier, Booster Amplifier and In-line Amplifier Related Article: Optical Amplifier – EDFA (Erbium-doped Fiber Amplifier) for WDM System
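One practical consequence of the noise figures mentioned above: when amplifier stages are cascaded (say, a low-noise EDFA preamplifier followed by a noisier booster), the overall noise figure is dominated by the first stage. The standard Friis formula, which is not given in the article but is the usual way to compute this, can be sketched as follows; the example noise figures and gains are invented for illustration:

```python
import math

def db_to_lin(x_db):
    return 10 ** (x_db / 10)

def lin_to_db(x):
    return 10 * math.log10(x)

def cascaded_noise_figure(stages):
    """Friis formula: F_total = F1 + (F2-1)/G1 + (F3-1)/(G1*G2) + ...
    `stages` is a list of (noise_figure_db, gain_db) tuples, in order.
    Returns the total noise figure in dB."""
    total_f = 0.0
    gain_product = 1.0  # linear gain of all preceding stages
    for i, (nf_db, gain_db) in enumerate(stages):
        f = db_to_lin(nf_db)
        if i == 0:
            total_f = f
        else:
            total_f += (f - 1) / gain_product
        gain_product *= db_to_lin(gain_db)
    return lin_to_db(total_f)

# A low-noise preamplifier (NF 4 dB, gain 25 dB) followed by a noisier
# booster (NF 7 dB, gain 15 dB): the total stays close to 4 dB because
# the first stage's gain suppresses the second stage's contribution.
print(round(cascaded_noise_figure([(4.0, 25.0), (7.0, 15.0)]), 2))  # 4.02
```

This is why a low-noise amplifier such as an EDFA or a distributed Raman stage is placed first in a chain: its gain masks the noise of everything downstream.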
The easily accessible, highly valuable nature of healthcare records is seeing people's most personal data becoming increasingly accessible to cybercriminals. No other single record bank contains as much Personally Identifiable Information (PII) as that held by healthcare organisations, which makes this data invaluable to hackers. Nowhere else are hackers able to get their hands on information that allows them to form such a thorough profile of their potential victims. Healthcare records not only offer up a patient's name, address and social security details, but also often include their financial and insurance information – which ultimately can enable attackers to commit identity fraud and financial exploitation. Further exacerbating this problem is the incredibly complex network of IT systems now deployed by healthcare organisations, to help patients communicate with healthcare professionals and to provide access to electronic health records and medical devices. This leaves businesses within the healthcare industry even more vulnerable to cybercriminals' increasingly sophisticated tactics and ever-evolving techniques. Safe and secure communications Healthcare organisations are poorly prepared for protecting their data and that of their patients from mobile security threats. There are fundamental concerns with how these businesses approach cybersecurity, due to a complete lack of know-how, budget and resources when it comes to preventing potential cyberattacks. Indeed, healthcare organisations have been advised that they should be spending at least 10 per cent of their IT budget on cybersecurity, yet the industry average is just 3 per cent, according to the 2015 Healthcare Information and Management Systems Society (HIMSS) Leadership Survey.
This lack of investment is compounded by healthcare organisations underestimating the importance of mobile security, failing to invest in it, failing to implement basic prevention measures, and ignoring key security tools such as encryption. The end result is offering cybercriminals an open goal for infiltrating their systems. Embracing end-to-end encryption In this digital age, healthcare professionals must be able to communicate with colleagues and patients as securely as if they were speaking to them face-to-face, without fear of their communications being intercepted. Security tests have repeatedly proven that end-to-end security is the only way to prevent cybercriminals, intruders, corporate espionage, hackers, rogue nation states and more from violating mobile communications. With that in mind, healthcare organisations must provide their employees with encrypted mobile communication services. We are not talking about consumer messaging platforms that have recently begun tagging encryption onto their services as an after-thought, but communications services that have been built with security in mind from the get-go. The rapid rise in sophistication of techniques deployed by cybercriminals means that encryption has to keep on evolving too. We're now seeing security systems that deploy RSA 4096-bit encryption, which researchers have estimated would take over 1,000 years to crack. Furthermore, they use encryption keys that are kept encrypted in a secure cloud and can only be accessed when a user validates they are who they say they are – meaning even if an organisation like the NSA wanted to get to them, it couldn't. Through technology like this, healthcare professionals would be able to communicate with one another and their patients safe in the knowledge that their messages will only be seen by their intended recipient.
Furthermore, they will also be notified of any attempted attack on their privacy, giving them confidence their communications are as secure as possible. Time to act The vast quantity of PII available in the healthcare industry guarantees it will remain an attractive target for attackers and a weak point for employees, unless organisations make serious changes to their communications policies. Healthcare executives must place more focus on the danger that cyberattacks pose to their organisations, and put more emphasis on protecting their data and that of their patients by deploying industry-leading security tools. Improved, ongoing security training for employees will also ensure they are on board with this culture shift. It's all well and good having security policies in place, but if employees don't have a thorough understanding of what the cyber threats are, how dangerous they are and how to be resilient against them, then those policies are rendered useless. Now is the time for healthcare organisations to embrace end-to-end encryption and boost their chances of countering breaches and avoiding the high costs of remediation. About the author: Jonathan Parker-Bray.
Having duplicates in databases is the most prominent data quality issue around, and not least duplicates in party master data are often pain number one when assessing the impact of data quality flaws. A duplicate in the data quality sense is two or more records that don't have exactly the same characters but are referring to the same real-world entity. I have worked with these three different approaches to when to fix the duplicate problem:
- Downstream data matching
- Real time duplicate check
- Search and mash-up of internal and external data

Downstream Data Matching

The good old way of dealing with duplicates in databases is having data matching engines periodically scan through databases, highlighting the possible duplicates in order to facilitate merge/purge processes. Finding the duplicates after they have lived their own lives in databases and already have different kinds of transactions attached is indeed not optimal, but sometimes it's the only option, as explained in the post Top 5 Reasons for Downstream Cleansing.

Real Time Duplicate Check

The better way is to make the match at data entry where possible. This approach is often orchestrated as a data entry process where a single element or range of elements is checked when entered. For example, the address may be checked against reference data and a phone number may be checked for an adequate format for the country in question. And then finally, when a proper standardized record is submitted, it is checked whether a possible duplicate exists in the database.

Search and Mash-Up of Internal and External Data

The best way is, in my eyes, a process that avoids entering most of the data that is already in the internal databases and takes advantage of data that already exists on the internet as external reference data sources.
The instant Data Quality concept I currently work with requires the user to enter as little data as possible, for example through rapid addressing entry, a Google-like search for a name, simply typing a national identification number or, in the worst case, combining some known facts. After that the system makes a series of fuzzy searches in internal and external databases and presents the results as a compact mash-up. The advantages are:
- If the real world entity already exists, you avoid the duplicate and avoid entering data again. You may at the same time evaluate accuracy against external reference data.
- If the real world entity doesn't exist in internal data, you may pick most of the data from external sources, that way avoiding typing too much and at the same time ensuring accuracy.
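The fuzzy-search step described above can be sketched with Python's standard library. The records, field names and threshold below are purely illustrative, not taken from any specific product:

```python
from difflib import SequenceMatcher

def similarity(a, b):
    """Similarity ratio in [0, 1] between two normalised strings."""
    return SequenceMatcher(None, a.lower().strip(), b.lower().strip()).ratio()

def fuzzy_search(query, records, threshold=0.7):
    """Return candidate duplicates whose name is similar enough to the query."""
    scored = [(similarity(query, r["name"]), r) for r in records]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [r for score, r in scored if score >= threshold]

# Illustrative internal master data
records = [
    {"id": 1, "name": "Acme Corporation"},
    {"id": 2, "name": "ACME Corp."},
    {"id": 3, "name": "Globex Industries"},
]

matches = fuzzy_search("Acme Corp", records)  # finds both Acme records
```

Real master data management tools use far more sophisticated matching (phonetic codes, address standardisation, survivorship rules), but the principle is the same: score candidates, rank them, and present the best hits before a new record is created.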
Change column values

In Data Prep, you can change data values using the Change into operation in the column menu. This example shows a change operation on the "Medical Specialty" column that changes text to uppercase. Use the Change into operation to select one or more columns and then change the data in those columns to:
- Capital case
- Numeric values
- Unescaped HTML
- Custom values
- Trim leading and trailing spaces from cells in the column
- Collapse consecutive, multiple spaces into a single space

Change values in a single column

To find and replace text in a single column:
1. Locate the column where you want to change values.
2. Hover over the column menu icon, then hover over Change into and select the change you want to make. Data Prep generates a copy of the original column that reflects the changes.
3. Click Save at the top to accept the changes.

Change values in multiple columns

If you need to perform a change of values across your entire dataset or a specific set of columns, you use the advanced Change into pane. Here are some examples where the advanced function is useful:
- The dataset has both "incorporated" and "Inc" everywhere. You want to standardize the entire dataset to have only the "Inc" value.
- The dataset has "incorporated" everywhere, and for the most part that's accurate. But you need to change the value to "Inc" for some specific columns in the dataset.
- You've pulled two datasets into your Project—one has "NA" and the other uses blanks to represent non-applicable values. You want to change all of the "NA" values into blanks.

To find and replace across multiple columns:
1. Hover over any column's menu icon and click find + replace.
2. Click the column name that appears in the Find + Replace pane.
3. In the advanced Find + Replace pane that displays, click the check box next to each column that you want to include in the find and replace operation.
The rest of the steps for find and replace across multiple columns are the same as the steps for find and replace for a single column. See Find and replace.

Changing values by Name or Criteria

In the advanced Find + Replace pane, you can select multiple columns by either Name or Criteria.

Change values by Name

Finding and replacing by Name applies the replace operation only to the specific columns you select. To select columns by Name:
- Click the check box adjacent to the column(s) that you want to select.
- Click the top-most check box to select all columns.
- Use the Columns and Types filters at the top of the panel to quickly filter down to the columns you want to select for the operation.
- Use the search function to locate a column by name.

Change values by Criteria

Finding and replacing by Criteria applies the replace operation to any column that meets the criteria you specify. For example, if you have String type columns in your dataset and you specify the replace operation for String type columns, then all existing columns of this type in your dataset—and any new String type columns that are introduced to the dataset prior to this Step—will be dynamically replaced.

To select columns based on criteria:
- Optionally specify the data type of the column—Boolean, DateTime, Number or String.
- Optionally specify the pattern for the column name—contains, starts with, equals or ends with.

Notice the header message updates to indicate the number of columns you have selected based on that criteria. You may later notice the number of selected columns increases or decreases if new data is brought into an earlier Step that introduces or removes columns that meet your criteria. If you switch between the Name and Criteria options before saving the replace operation, Data Prep retains your selections and provides a Restore last selection link that returns you to your initial selection method.
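The difference between Name and Criteria selection can be illustrated with a small, generic sketch. This is plain Python with made-up column names and types, not Data Prep's actual code:

```python
def select_by_name(columns, wanted):
    """Name selection: only the explicitly chosen columns are affected."""
    return [c for c in columns if c["name"] in wanted]

def select_by_criteria(columns, col_type=None, name_starts_with=None):
    """Criteria selection: any column matching the rules is affected,
    including columns introduced later that also match."""
    out = []
    for c in columns:
        if col_type and c["type"] != col_type:
            continue
        if name_starts_with and not c["name"].startswith(name_starts_with):
            continue
        out.append(c)
    return out

columns = [
    {"name": "company", "type": "String"},
    {"name": "company_id", "type": "Number"},
    {"name": "notes", "type": "String"},
]

by_name = select_by_name(columns, {"notes"})
by_criteria = select_by_criteria(columns, col_type="String")

# A new String column introduced by an earlier step is picked up automatically
columns.append({"name": "comment", "type": "String"})
by_criteria_later = select_by_criteria(columns, col_type="String")
```

The key behavioural difference is in the last two calls: the Name selection is a fixed list, while the Criteria selection is re-evaluated and grows to include the new "comment" column.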
Example: Change into Numeric

This column operation converts all numbers stored as text strings into numeric values. By doing this, mathematical operations can be performed on values in this column when, as numbers stored as text, these actions would otherwise be considered invalid. Numbers stored as strings appear left-aligned within a cell and in black text; numbers stored as numeric values are right-aligned and appear in green. When this operation is applied to cells that cannot be converted to numeric values, it has no effect. In a column with both text and numbers on different rows, only those rows that can be converted will be changed.

If a value that appears suitable for conversion is not successfully converted, it is likely that there are non-number characters somewhere in the cell. The following are examples of characters that can inhibit the transformation:
- Leading or trailing spaces. These can be removed by using the "White Space trim leading and trailing" column operation before you apply the "Change into Numeric" operation. The "White Space trim leading and trailing" operation examines all rows for spaces at both the beginning and end of the text string. Where it finds them, they are removed—leaving only the value in the cell.
- Intermediate characters (such as commas or spaces). Operations such as Column split or a Compute columns operation that uses REGEX may be required first in order to successfully create a column of numeric values.
- A single period (".") in a cell of numbers will be interpreted as a decimal point. These strings can be converted into numeric values without requiring any other operations.
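The behaviour described above (surrounding spaces and intermediate characters inhibit conversion, a lone period acts as a decimal point, unconvertible cells are left untouched) can be imitated in plain Python. This sketch only illustrates the documented rules; it is not Data Prep's implementation:

```python
def trim_whitespace(cells):
    """Stand-in for the "White Space trim leading and trailing" operation."""
    return [c.strip() for c in cells]

def change_into_numeric(cells):
    """Convert cells that are purely numeric; leave everything else unchanged."""
    out = []
    for cell in cells:
        # Mirror the documented behaviour: surrounding spaces inhibit conversion
        if cell != cell.strip():
            out.append(cell)
            continue
        try:
            out.append(int(cell) if "." not in cell else float(cell))
        except ValueError:
            out.append(cell)  # commas, letters, etc. also block the conversion
    return out

column = [" 42 ", "3.14", "1,200", "n/a", "7"]
without_trim = change_into_numeric(column)                   # " 42 " stays text
with_trim = change_into_numeric(trim_whitespace(column))     # " 42 " becomes 42
```

Note how the "1,200" and "n/a" cells survive both passes unchanged, exactly as the documentation describes for cells with intermediate characters.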
Wirth Research, an engineering company that specialises in computational fluid dynamics, has become increasingly concerned with environmental sustainability. It initially focused on the design of racing cars, allowing clients to replace expensive wind tunnel work with computerised modelling, but in recent years it has designed equipment that reduces the aerodynamic drag of lorries, and a device which reduces cold air escaping from open-fronted supermarket fridges, cutting energy use by a quarter.

The Bicester-based company also wanted to reduce the energy used by its detailed computerised modelling, which for car aerodynamics simulates around half a billion tiny cells of air. It had already adjusted the resolution of cells within each model, with a finer sub-millimetre mesh used near sharp edges. Then, during the pandemic, when it realised staff could work effectively from home, Wirth moved its computing from its own site to a renewable energy-powered datacentre in Iceland run by Verne Global. The new hardware has cut energy requirements by three-quarters and the power used is carbon neutral. Engineering manager Rob Rowsell says that the total cost of the new equipment spread over several years, plus use of the Icelandic facility and connectivity, amounts to less than the old UK power bill. On top of that, as it plans to continue with hybrid working, the company has moved to smaller offices in an eco-friendly building.

Wirth wants to make its computing processes still more efficient. It can already halt iterations of virtual models when they stabilise rather than running them a fixed number of times, but it is looking at how it can use artificial intelligence (AI) trained on previous work to use a handful of iterations to predict a stable version of a model that would normally take much longer to reach.
The prediction would not need to be entirely accurate as the company would then carry out a few more iterations to check the model was stable. “You would end up being able to do 15 or 20 iterations rather than 100,” says Rowsell. There is much potential to use AI to tackle climate change, says Peter van der Putten, director of decisioning and AI solutions at Massachusetts-based software provider Pegasystems and an assistant professor at Leiden University in the Netherlands. But in recent years, AI has increasingly meant using deep learning models that require large amounts of computing and electricity to run, such as OpenAI’s GPT3 language model, trained on almost 500 billion words and using 175 billion parameters. “Until recently, it was fashionable to come up with yet another model which was bigger,” says van der Putten. But environmental considerations are highlighting the benefits of making AI more efficient, with rising electricity costs increasing economic justifications. “Both from a financial point of view as well as from a climate point of view, small is beautiful.” Another reason is that simpler, more efficient models can produce better results. In 2000, van der Putten co-ran a challenge where participants tried to predict which customers of an insurance company would be interested in buying cover for caravans, based on dozens of variables on thousands of people. This featured real-life noisy data, which can lead complex models astray. “You might start to see patterns where there are none. You start to overfit data,” says van der Putten. This problem occurs when training data is not quite the same as the data for which predictions are required – such as when they cover two different sets of people. Simpler models also work well when there are clear relationships or when there are only a few data points. It can also be difficult and expensive to revise big models trained on vast amounts of data. 
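The overfitting risk van der Putten describes can be made concrete with a small, self-contained experiment. The data here is synthetic, not the caravan-insurance challenge itself: a one-nearest-neighbour model memorises label noise and fits the training set perfectly, while a one-parameter rule generalises better.

```python
import random

random.seed(0)

def make_data(n):
    """True rule: label is 1 when x > 0.5, but 20% of labels are flipped (noise)."""
    data = []
    for _ in range(n):
        x = random.random()
        y = 1 if x > 0.5 else 0
        if random.random() < 0.2:
            y = 1 - y
        data.append((x, y))
    return data

train_set, test_set = make_data(500), make_data(500)

def one_nn(x):
    """'Complex' model: memorises every training point, noise included."""
    return min(train_set, key=lambda point: abs(point[0] - x))[1]

def simple_rule(x):
    """Simple model: a single threshold, which cannot memorise the noise."""
    return 1 if x > 0.5 else 0

def accuracy(model, data):
    return sum(model(x) == y for x, y in data) / len(data)

train_acc_nn = accuracy(one_nn, train_set)        # fits the noisy data perfectly
test_acc_nn = accuracy(one_nn, test_set)          # but generalises poorly
test_acc_simple = accuracy(simple_rule, test_set)
```

On fresh test data the memorising model pays for "seeing patterns where there are none", while the one-parameter rule comes close to the best achievable accuracy given the 20% label noise.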
For evolving situations, such as allocating work to a group of employees with lots of joiners and leavers, lighter "online learning" models designed to adapt quickly based on new information can be the best option. Van der Putten says that as well as being cheaper and having less environmental impact, these models are also easier to interpret. There is also the option of using classic machine learning algorithms, such as support vector machines, used to classify items, which tend to be lighter as they were developed in times of much more limited computing power. Van der Putten says that AI specialists divided into tribes favouring specific techniques from the late 1980s and early 1990s, but practitioners then learnt to use different approaches in different situations or combine them. "Getting back to more of a multi-disciplinary approach would be healthy," he says of things now, given that the alternatives to big data-driven deep learning generally use much less computing power.

Got to start somewhere

One option is to give AI models a starting point or structure, according to Jon Crowcroft, professor of communication systems at Cambridge University and founder of Cambridge-based data discovery firm iKVA. Language models used to include structural rules rather than being based on analysing billions of words, and similarly science-focused models can benefit from having relevant principles programmed in. This particularly applies when analysing language, videos or images, where volumes of data tend to be very high. For example, an AI system could learn to identify coronavirus spike proteins more efficiently if it was given a sample spike shape. "Rather than just having zillions of images and someone labelling them, you have a ground truth model," says Crowcroft. He adds that this approach is appropriate when each result can have significant consequences, such as with medical images.
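Returning to the lighter "online learning" models mentioned above: the core idea, updating an estimate one observation at a time instead of retraining from scratch, can be sketched in a few lines (the workload figures are illustrative only):

```python
class OnlineMean:
    """Tiny online learner: refine an estimate one observation at a time,
    so it adapts to drift without being retrained from scratch."""

    def __init__(self, rate=0.2):
        self.rate = rate          # how quickly new observations override old ones
        self.estimate = 0.0

    def update(self, value):
        self.estimate += self.rate * (value - self.estimate)
        return self.estimate

# Illustrative workload figures for a team with joiners and leavers
model = OnlineMean()
for load in [10, 10, 10, 10, 10]:   # stable period
    model.update(load)
stable = model.estimate             # drifting towards 10
for load in [30, 30, 30, 30, 30]:   # regime change: workload jumps
    model.update(load)
shifted = model.estimate            # already well on its way towards 30
```

Each update costs a handful of arithmetic operations, which is what makes this family of models so much cheaper, and so much easier to interpret, than retraining a large batch model whenever the situation changes.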
It can require specialists to provide initial material, although this may not be a significant drawback if those setting up the model are experts anyway, as is likely to be the case for academic use. Such initial human input can cut the computing power required to develop an AI model significantly and makes it easier to explain the model. It could also help to shift where AI works, as well as how. A federated machine learning model could involve genuinely smart meters analysing a customer's electricity use and sending an occasional update of a resulting model to the supplier, as opposed to present-day meters that send usage data every few minutes. "The electricity company cares about having a model of everyone's use over time", not what every customer is doing in near real-time, Crowcroft says. Carrying out AI locally would mean much less data being sent across networks, saving power and money, and would offer greater privacy as detailed usage data would not leave the property. "You can flip the thing around," adds Crowcroft. Such "edge learning" could work well for personal healthcare monitors, where privacy is particularly important.

Reducing energy required for AI

If a centralised deep learning model is required, there are ways to make it more efficient. London-based code optimisation specialist TurinTech reckons it can typically reduce the energy required to run an AI model by 40%. If a slightly less accurate fit is acceptable, then much greater savings are possible, according to chief executive Leslie Kanthan. Just as a model can be overfitted to the particular group of people who make up its training data, a model fitted too closely to past financial trading data cannot predict future behaviour. A simpler model can provide good predictions, be much cheaper to develop and much faster to set up and change – a significant advantage in trading.
TurinTech's optimiser uses a hybrid of deep learning and genetic, or evolutionary, algorithms to adapt a model based on new information, rather than needing to regenerate it from scratch. "It will try to bend the deep learning model to fit," Kanthan says. Harvey Lewis, an associate partner at Ernst and Young UK and chief data scientist of its tax practice, says that evolutionary algorithms and Bayesian statistical methods are useful in making deep learning more efficient. However, it is common to take a brute force approach to tuning parameters in models, running through vast numbers of combinations to see what works, which for billions of parameters "is going to be pretty computationally expensive".

The costs of such work can be reduced by using specialist hardware, Lewis says. Graphics processing units, which are designed to perform calculations rapidly to generate images, are better than general-purpose personal computers. Field programmable gate arrays, which can be configured by users, and tensor processing units, designed by Google specifically for AI, are yet more efficient, and quantum computing is set to go even further. But Lewis says that it makes sense first to ask whether complex AI is actually required. Deep learning models are good at analysing large volumes of consistent data. "They are excellent at performing the narrow task for which they have been trained," he says. But in a lot of cases there are simpler, cheaper options which have less environmental impact.

Lewis likes to find a baseline, the simplest AI model that can generate a reasonable answer. "Once you've got that, do you need to take it further, or does it provide what you need?" he says. As well as saving money, electricity and emissions, simpler models such as decision trees are easier to understand and explain, a useful feature in areas such as taxation that need to be open to checking and auditing. He adds that it is often beneficial to combine human intelligence with the artificial kind.
This can include manual checks for basic data quality problems, such as whether fields marked as dates are recognisable as such, before automated work starts. It is then often more efficient to divide processes between machines and humans, with software doing high-volume sorting such as spotting pictures with dogs, and people making the more challenging judgements such as classifying the breed. "Bringing a human into the loop is a way to help performance and make it much more sustainable," Lewis says.
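The division of labour Lewis describes is often implemented as a confidence threshold: predictions the model is sure about pass through automatically, and the rest are queued for a person. The threshold and data below are illustrative only:

```python
def route(predictions, threshold=0.9):
    """Send confident predictions straight through; queue the rest for review."""
    auto, review = [], []
    for item, label, confidence in predictions:
        (auto if confidence >= threshold else review).append((item, label))
    return auto, review

batch = [
    ("img_001", "dog", 0.98),   # clear-cut: handled by the model
    ("img_002", "dog", 0.61),   # ambiguous breed: sent to a human
    ("img_003", "cat", 0.95),
]
auto, review = route(batch)
```

The threshold becomes a tuning knob for sustainability: raising it sends more work to humans but reduces the accuracy (and model size) the automated stage needs, which is exactly the trade-off the article describes.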
With data center energy consumption rapidly heading for 4% of global electricity usage, according to the Uptime Institute, it's more pressing than ever to understand the carbon impact of online services. The Climate Question is a very interesting global programme reflecting a variety of views on climate change and how best to understand it. For example, we see big tech firms reporting record profits during lockdown as the new industrial titans of the digital age. Previous industrial revolutions have placed a huge burden on our planet. Will this digital revolution have the same effects? Will the big tech companies address this with better transparency on how using their services affects the planet?

As part of this BBC World Service radio program, Mats Lewan visited Facebook and Hydro66 in Northern Sweden to report on how the cleanest data centers in the world are setting the pace to address the problems caused by an ever-growing Internet. This episode, where Hydro66 hosted Mats Lewan, was seeking answers to what effect the vast amount of data created has on the planet and the climate. Cold air and clean energy at low cost make this a data paradise with close to #netzero effect on the environment. The Climate Question was looking for answers; have a listen to find out what they have discovered. (27 minutes)

Here are some of the questions Mats posed for us inside the data center at Hydro66.

What exactly is a data center?

Just as you make office buildings specialized for people, a data center is a specialized building for computer equipment, particularly with regard to cooling, as the computers get very hot. Electrical power needs to be always on even if there are utility grid problems, and the Internet has to always work. Basically, for a data center, building services failure is not an option.

What can we see in the data center and what is it we can hear?
The server rooms are quite large – this one is 500 square meters, compared to the average European home at 100 square meters, so the size of 5 homes. And we have another 5 rooms just like this one! As we said, we need to keep the computer equipment cool, so the background noise you can hear is industrial scale cooling equipment moving fresh air into the building. If we turned the building fans off it would be almost silent! And of course it could be completely dark in here, but we have motion activated lighting so that staff can maintain the building and computers from time to time.

Who uses these servers?

This particular data center was built to be used by many different customers. It's a similar model to a shopping mall, with many retail shops sharing the same building. Our customers range from local government to international private companies. The common factor is that we can provide a highly technical building at a better price than if they had to build it themselves.

Why does a data center use so much power?

Servers use a lot of power to compute, and over time we have figured out the most efficient way to stack many servers in a relatively small space. So the computers use a lot of energy, and then, depending on how efficient your building design is, we also need some power to keep them cool. So going back to the average home size comparison, we are using perhaps 250 homes' worth of power in this one room.

Why is cooling needed?

Basically, the computers generate heat as a by-product of the work they do, and that heat needs to be removed, otherwise the room gets too hot and the computers switch off. This heat removal process is extremely important to be efficient and it should be done in the most environmentally friendly way possible.

Why do you want to place a data center in northern Sweden?

It's really perfect conditions for computer equipment here.
The outside air is cool and clean, there is a massive amount of 100% green renewable energy on the doorstep and there is a huge number of Internet connections. Add in the skilled local workforce and the respect for the environment and you can't ask for a better location. We like to call it data paradise.

What sets you apart from other server halls?

We set out with a mission to create the greenest data center in the world. Full stop. We believe we have achieved that and won some great awards along the way. Key things have been our cutting edge cooling design, our elimination of fossil fuels in our power supply and the use of local materials and workforce. This data center is fully embedded in the local community, and recently a new term has arisen for what has always been in our DNA. What we have achieved is now known as #netzero and we are very proud of that.

Can you not use the heat, for heating for example? Or for an adventure pool?

We do use the heat as part of our cooling design. Sometimes the air outside is simply too cold to bring in, so we use some heat to normalize that. That helps us with overall efficiency. In this particular part of the world there is a very well established district heating system, so there is actually no technical or economic case for our heat to be used by the community. Again, everything we do is driven by efficiency and the environment.

Is there sufficient fossil-free energy, or will there be a tug-of-war with other industrial users?

Currently the river system we are directly connected to supplies about 10% of Sweden's total electricity. This one river is capable of much more production – and in fact about 50% of the current production is being exported to the south and even internationally.
So although there will be new local sources of demand, such as fossil-free steel and carbon fibre production, lithium battery factories and so on, there is also new supply in the form of onshore wind – the largest farm in Europe is currently being built just down the road. Data centers are long-time-scale assets and we see no problem with getting enough energy locally for the foreseeable future.

What will a data center look like in ten years?

If I was being clever, I would say it will look like the one we are standing in now! All joking apart, data centers are a bit behind in general – most of them are built in city centres and use dirty power. It's not sustainable at all. We are the future, where massive scale data "factories" can be built on clean energy infrastructure beside the power source, not relying on long high voltage transmission lines. It means city centre power can be better used for other purposes such as electrification of mass transit for people. We are just at the start of this journey – watch this space.
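The cooling efficiency discussed throughout this interview is commonly quantified as Power Usage Effectiveness (PUE): total facility power divided by the power delivered to IT equipment, with 1.0 as the theoretical ideal. The figures below are illustrative only, not Hydro66's reported numbers:

```python
def pue(it_power_kw, cooling_kw, overhead_kw):
    """Power Usage Effectiveness: total facility power / IT power (1.0 is ideal)."""
    return (it_power_kw + cooling_kw + overhead_kw) / it_power_kw

# Illustrative comparison: free-air cooling vs a conventional chiller plant
efficient = pue(1000, 50, 20)       # cold outside air, little other overhead
conventional = pue(1000, 600, 200)  # mechanical chillers and heavy overhead
```

On these made-up numbers the free-air facility spends 7% extra on top of its IT load, while the conventional one spends 80%, which is why cold-climate sites can make such a difference at scale.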
As marine renewable energy (MRE) developers prepare to deploy these technologies, efforts are underway to guard against cybersecurity threats to the function of a device and connected systems. Pacific Northwest National Laboratory (PNNL) created the first-ever cybersecurity guidance report for MRE devices on behalf of the U.S. Department of Energy's Water Power Technologies Office. The guidance is designed to help MRE developers consider risks in their design and operations, which will be crucial as blue economy technologies harness power from waves, tides, and currents in an effort to reduce the overall carbon footprint. These cybersecurity measures will also help improve MRE's resilience as a predictable, affordable, and reliable source of renewable energy. The technical report is designed to protect the devices, as well as industrial control systems, energy delivery systems and the maritime industry.

"In this nascent stage, developers can start thinking about how their systems will be used and deployed so they can incorporate cybersecurity controls or methods into their designs," said Fleurdeliza de Peralta, a PNNL risk and environmental assessment advisor and one of the authors of the report.

Identifying and analyzing cybersecurity risks and threats

The PNNL team started with data gathering through a formal request for information document sent to developers, one-on-one discussions, and a presentation to stakeholder members of the DOE Marine Energy Council. The researchers reviewed cyber threats and vulnerabilities of information technology (IT) and operational technology (OT) devices used in wave-point absorbers, oscillating water columns, oscillating surge flaps, and current turbines, and examined the supply chain risks for potential security issues associated with firmware, hardware, and software that will be used in IT/OT devices.
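Threat analyses of this kind are often summarised with a simple likelihood times impact score per threat so that mitigations can be prioritised. The sketch below is a generic illustration with made-up threats and ratings, not content from the PNNL report:

```python
def risk_score(likelihood, impact):
    """Classic qualitative risk model: both factors rated 1 (low) to 5 (high)."""
    return likelihood * impact

# Hypothetical MRE threats and ratings, for illustration only
threats = {
    "phishing email to operations staff": (4, 3),
    "malware in vendor-supplied firmware": (2, 5),
    "unauthorised access via satellite link": (1, 4),
}

ranked = sorted(
    ((name, risk_score(likelihood, impact))
     for name, (likelihood, impact) in threats.items()),
    key=lambda item: item[1],
    reverse=True,
)
```

Ranking threats this way lets a small team spend its limited security budget on the highest-scoring items first, which is the essence of any risk-based approach.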
Through this fact gathering, the team created customized guidance for developers who will be working to deploy the devices and for the end users of the technology. The guidance accounts for the variety of methods by which threat actors could maliciously gain unauthorized access to an MRE device – through a satellite, Wi-Fi, or cloud computing – as well as threats to the physical device itself. Threats can include malware or phishing emails, a virus in vendor-controlled devices, or an attack that could cripple an organization's network. After the initial data gathering, the PNNL team identified different network architectures and configurations for an MRE device to determine different types of threats. The researchers then used two approaches for analyzing the threats: a system-based approach focusing on protecting information or digital assets that need to be protected; and a threat-based approach focused on protecting control systems and network configurations.

The cybersecurity best practices guide implements the core functions of the National Institute of Standards and Technology Cybersecurity Framework, which are to identify, protect, detect, respond, and recover. The guidance is risk-based and describes security practices that protect the MRE system and its end user from cyber threat actors with malicious intent. As the push toward a blue economy gains traction, the new guidance serves as a baseline for best practices in securing the MRE industry from cyber threats. The report will be updated as new threats are discovered and new technology on devices is deployed.

– Edited by Chris Vavra, web content manager, CFE Media and Technology, email@example.com.
Myth: "IPv6 security enhancements (such as IPsec) make it safer than IPv4"

Truth: IPsec is an end-to-end security mechanism, providing authentication and encryption at the network layer. Although developed in conjunction with IPv6, deployment problems resulted in IPsec not being widely adopted in the new IP stack. Similarly, IPv4 has an adapted version of IPsec that can be implemented for extra security. It is unlikely that the adoption of IPv6 across the globe will stimulate widespread use of IPsec. Saying IPv6 is safer than IPv4 is in itself a challenging claim. With the failure to make IPsec required in the implementation of IPv6, v6 and v4 have nearly identical encryption and authentication controls.

Recently, a remote code execution flaw in the Windows TCP/IP service was announced, known as CVE-2022-34718. The vulnerability could allow an unauthenticated, remote attacker to execute code with elevated privileges on affected systems without user interaction. However, only systems with IPv6 enabled and IPsec configured are vulnerable. Systems are not affected if IPv6 is disabled on the target machine.

"If a system doesn't need the IPsec service, disable it as soon as possible," said Mike Walters, cybersecurity executive and co-founder of Action1. "This vulnerability can be exploited in supply chain attacks where contractor and customer networks are connected by an IPsec tunnel. If you have IPsec tunnels in your Windows infrastructure, this update is a must-have."

How could an attacker exploit this vulnerability? An unauthenticated attacker could send a specially crafted IPv6 packet to a Windows node where IPsec is enabled, which could enable remote code execution on that machine.

Security device bypass via unfiltered IPv6 and tunneled traffic

Only a lack of knowledge is considered a bigger risk than the security products themselves.
Conceptually it’s simple: security products need to do two things, recognize suspicious IPv6 packets and apply controls when they do. In practice, however, this is hardly possible in v4, let alone in an environment that may carry rogue or unknown tunnel traffic. “There are 16 different tunnels and transition methods – not to mention upper layer tunnels like: SSH, IPv4-IPSec, SSL/TLS and even DNS,” says Joe Klein, Cyber Security Subject Matter Expert for the IPv6 Forum and Expert Cyber Architect at SRA International. “The first step is knowing what you’re looking for.” The current crop of security products in use today, especially those converted from v4 to v6, hasn’t necessarily matured enough to match the threat they protect against.

GYTPOL’s got you covered! You can easily remediate and disable the IPv6 protocol using GYTPOL with zero impact. If you still want to do it manually, a video walkthrough is available.
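Before applying controls, a defender first needs to know whether a host's IPv6 stack is active at all. Below is a minimal, illustrative Python check; it is not a substitute for a full audit, which would also look for the tunnel and transition traffic described above.

```python
import socket

def ipv6_enabled() -> bool:
    """Return True if this host can create an IPv6 socket.

    A quick audit check: if this returns True but the network applies no
    IPv6 filtering, unfiltered v6 or tunneled traffic may be bypassing
    security controls that only inspect IPv4.
    """
    if not socket.has_ipv6:
        return False
    try:
        # Creating an AF_INET6 socket fails if the kernel's v6 stack is disabled.
        with socket.socket(socket.AF_INET6, socket.SOCK_DGRAM):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    print(f"IPv6 stack available: {ipv6_enabled()}")
```

On Windows, actually disabling the protocol (as the CVE-2022-34718 mitigation suggests) is done through the registry or network adapter settings, not from Python.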
What do you do when you need to deliver time-sensitive information to hundreds, or even thousands, of people? Do you have a mass notification system in place to help get the word out quickly and efficiently?

There are times when alerts sent to the general population of a hospital, school or business are not only helpful, but necessary. In a hospital setting, implementing a mass notification system extends beyond emergencies – doctors, nurses and administrative staff can also be alerted of shift changes or increased availability. This can be accomplished with little to no effort when a mass notification system is in place. With the systems available today, the same message can be broadcast to literally thousands of people. For messages that need to be dispersed quickly, mass communication sent to personal devices is becoming very popular. Within minutes, an important alert can be sent in a timely fashion to everyone who is affected by the content of the message.

Mass Communication Alerts

Historically, the reason for a mass communication has been emergency-related. Mass communication alerts have typically been sent for extreme weather bulletins or when there is a dangerous situation taking place at a specific location, such as a particular building on a campus or business site. While mass notification systems incorporate a variety of response mechanisms to allow educational institutions to improve their communications in the event of an emergency, the systems can also be used to improve the business processes of hospitals as well as major corporations. Now that alerts have become broader in scope, it is becoming more common to see alerts about:

- Event notification: to alert of upcoming, canceled, or even impromptu events
- Attendance alerts: in an educational setting, to alert parents and guardians when a student is tardy or absent
- News updates: about a particular item that affects the group
- Building closure: for maintenance reasons or a power outage
- Ad-hoc meetings: when it is necessary to gather a group of individuals

There are multiple methods of communicating messages, from text messages and e-mail alerts to delivering a voice message. Read on to determine the differences and assess for yourself which would be most successful within your organization.

How are messages delivered?

Most systems offer features which allow you to call recipients and leave a pre-recorded message. This seems simple, but a call from an unknown phone number is often ignored or sent to voice mail, defeating the whole purpose of the mass notification: the event may already have taken place by the time the recipient retrieves the voice mail. In response, some systems have taken steps to alleviate this problem by skipping calls altogether and sending a text message instead. This approach can prove more successful, since research shows that a text is often checked sooner than a voice mail, even one from a known caller. A text is also quicker to check when the phone signal is weak, and the receiver still gets the full message. The New York Times references data from uReach Technologies, which operates the voice messaging systems of a leading wireless provider; the data shows that over 30 percent of voice messages go unheard for a minimum of three days.

But what about wireless subscribers who do not use, or do not even have the ability to send or receive, text messages? This is where multi-modal systems come in and prove most effective. Leading manufacturers now offer multi-modal systems, which allow you to use multiple delivery methods to communicate your message. Whether you want to send a voice call, e-mail or text, these multi-modal systems can handle it all.
Some systems have the ability to detect when a voicemail system answers: they can leave a message while continuing to contact other devices simultaneously, avoiding any downtime in getting the word out.

Are you the information officer for a large school district, wondering how you will know whether students, faculty and parents received your message? Or maybe you work in a hospital setting and you’re experiencing a staff shortage, so you need to alert team members who are not currently in the hospital. How will you know if they’ve received word that they’re needed? Problem solved: select a mass notification system that offers full reporting capabilities, so you can always keep track of who received your message.

Deploying: Premise-based vs. hosted

If cost is your major concern, a premise-based system will generally prove more cost-effective than a hosted one for delivering mass notification alerts. Emergency notification lends itself to utilizing larger numbers of lines and shared equipment to get critical information out to big groups as rapidly as possible, and in those cases hosted solutions may be ideal. For ongoing, less time-sensitive communications, however, a small investment in technology can lead to large returns in stakeholder experience and loyalty. By using trunks and lines that are already paid for, in most cases there will be no additional operational expenditure to send these messages. Regardless of your preferred method, it is necessary to take the steps to implement a mass communications system, as early warning is critical and most people in these environments cannot be mobilized easily.
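The multi-modal delivery and reporting ideas above can be sketched in a few lines. This is an illustrative model only: the channel names, the `Recipient` structure and the always-successful `_deliver` stub are invented stand-ins for real SMS, voice and e-mail gateways.

```python
from dataclasses import dataclass, field

@dataclass
class Recipient:
    name: str
    channels: dict  # channel name -> address, e.g. {"sms": "+1555...", "email": "a@b"}

@dataclass
class MultiModalNotifier:
    """Try each channel in priority order per recipient and record who was
    reached, so a full delivery report is available afterwards."""
    priority: tuple = ("sms", "voice", "email")
    report: dict = field(default_factory=dict)

    def send(self, recipients, message):
        for r in recipients:
            delivered_via = None
            for channel in self.priority:
                address = r.channels.get(channel)
                if address and self._deliver(channel, address, message):
                    delivered_via = channel
                    break
            self.report[r.name] = delivered_via  # None means unreachable
        return self.report

    def _deliver(self, channel, address, message) -> bool:
        # Placeholder: a real system would call the gateway for `channel`
        # here and return whether the gateway accepted the message.
        return True

notifier = MultiModalNotifier()
staff = [Recipient("Dr. Lee", {"sms": "+15550100"}),
         Recipient("J. Ortiz", {"email": "jo@example.org"})]
report = notifier.send(staff, "Shift change: please report by 6 pm.")
```

The report dictionary is what makes the "who received my message?" question above answerable: anyone mapped to `None` needs follow-up.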
Authenticating users to a company’s network domain is synonymous with enterprise authentication. The network domain is IT’s technical term for the company network, and authenticating users to the network, and to the various business resources connected to it, is really what enterprise authentication is all about. The domain controller is the authentication service that sits on the domain network and ensures that all resources operating on it are authorized, and that all users asking to connect to the network and its resources are authenticated and entitled to access the network and the resources joined to it.

How do you authenticate users to the company domain?

At the heart of the company domain is the domain controller, which is tasked with ensuring that all systems and applications connected to the network are authorized, and all users asking to connect to the network are authenticated and entitled to access its resources. Once a computer is configured to work on a specific network domain – otherwise known as joined to the domain – login is required by the user. Through this login process the domain controller authenticates the user to the network domain and also to the computer workstation itself, all in one go.

For example, when a user logs into a computer that is connected to a company domain, the domain controller checks the submitted password and determines what the user is entitled to do on the network based on the role associated with that user. If the workstation is offline, a local authentication mechanism is typically used as a fallback to allow the user to access the workstation, but not the network. When the workstation is reconnected to the network, the domain controller will require the user to authenticate to the network.

Domain controllers most commonly use usernames and passwords to authenticate users. To protect high-security networks, the domain controller can be configured to require multi-factor authentication.
This typically means using a smart card or one-time password token device in addition to the password.

Managing network domains using domain controllers has been common practice for many years. Once put in place, replacing, upgrading or even changing configurations on a domain controller risks locking out users or denying access to critical business systems. As a result, domain controllers are administered with extreme caution to prevent costly disruptions to business operations. Couple this conservative administration with the fact that most domain controllers have been around for many years, and you end up with antiquated authentication practices that are hard to shake.

Active Directory – the almost ubiquitous domain controller

Microsoft Active Directory refers to a suite of capabilities initially developed to manage a company’s Windows network domain. At its heart is its domain controller, called Active Directory Domain Services (AD DS). Its role is to authenticate and authorize computers and users to connect and operate on the network.

Passwordless authentication for Active Directory users

Active Directory today is bifurcated into legacy AD and Azure AD. Legacy AD is the on-premises version of Active Directory, which many businesses have used for years to manage their Windows network domains. Azure AD is the cloud version of AD, built and marketed to support the needs of modern businesses operating in the cloud. Azure AD supports passwordless authentication. To enable passwordless authentication on legacy AD, it needs to be configured to work in conjunction with Azure AD or a third-party solution.
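The login flow described above (check the submitted password, then derive the user's entitlements from the role stored in the directory) can be modeled in miniature. This is a toy sketch, not how Active Directory is implemented; all usernames, roles and rights below are invented.

```python
import hashlib
import hmac
import os

def _hash(password: str, salt: bytes) -> bytes:
    # Salted, slow hash so stored credentials are not plain text.
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

class MockDomainController:
    """Toy domain controller: authenticate a user, then return the network
    entitlements associated with that user's role."""

    def __init__(self):
        self._salt = os.urandom(16)
        self._directory = {}  # username -> (password_hash, role)
        self._role_rights = {
            "staff": {"file_share"},
            "admin": {"file_share", "server_console"},
        }

    def enroll(self, username: str, password: str, role: str):
        self._directory[username] = (_hash(password, self._salt), role)

    def authenticate(self, username: str, password: str):
        entry = self._directory.get(username)
        if entry is None:
            return None  # unknown user
        stored_hash, role = entry
        # Constant-time comparison to avoid timing side channels.
        if not hmac.compare_digest(stored_hash, _hash(password, self._salt)):
            return None  # wrong password
        return self._role_rights[role]  # what the user may do on the network

dc = MockDomainController()
dc.enroll("alice", "correct horse", "admin")
```

A real domain controller would use Kerberos or NTLM rather than direct password checks, but the shape of the exchange (credential check first, role-based entitlements second) is the same.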
A significant part of hacking consists of diverting the function of existing systems and software, and hackers often use legitimate security tools to perform cyberattacks. The pentesting tool Cobalt Strike has been one such target, but what happened recently with a Red Hat Linux version of the Cobalt Strike Beacon is worthy of note. According to cybersecurity researchers, it could be the work of an advanced threat actor.

How is Cobalt Strike Beacon Used in Cyberattacks?

Cobalt Strike is an exploitation platform. The idea is to emulate attacks from advanced adversaries, along with potential post-exploitation actions. You can see it as a framework used both by security teams for test purposes and by threat groups. The software creates connections (using Cobalt Strike servers) to attacked networks, and it contains tons of components that are convenient and customizable. The beacon is the client, so attackers have to install it on the targeted machine, which usually happens after exploiting a vulnerability. If the attack succeeds, hackers can maintain a persistent connection between the beacon and rogue Cobalt Strike servers, sending data periodically.

A New Variant of Cobalt Strike

Cobalt Strike Beacon Linux enables emulation of advanced attacks on a network over HTTP, HTTPS, or DNS. It provides a console where you can open a beacon session and enter specific commands; the console returns command output and other information, and users get a status bar and various menus that extract information and interact with the target’s system. Beacon’s shell commands are handy for performing various injections, remote command executions, and unauthorized uploads and downloads. The skilled hackers who implemented this Linux variant achieved tremendous success: their version has a scary ability to remain undetected, and it can get disk partitions; list, write and upload files; and execute commands. The malware has been dubbed Vermilion Strike.
The name vermilion came from the Old French word vermeillon, which was derived from vermeil, from the Latin vermiculus, the diminutive of the Latin word vermis, or worm.

How Does a Beacon Attack Work?

Cobalt Strike’s Command and Control (C2) protocol was apparently the heart of the attack. It is DNS-based communication, which helps circumvent classic defense mechanisms that focus on HTTP traffic. Instead of translating a DNS request into an IP address, which is the normal behavior with hostnames, the malware can base64-encode hidden tasks in an AES-encrypted struct and send everything in a DNS TXT query to hardcoded subdomains. Once the beacon gets the signal, it decrypts the struct to perform the unauthorized tasks. The malware can configure the beacon automatically, and it executes tasks in separate threads asynchronously by scheduling jobs, which prevents crashes.

Vermilion Strike Pushes the Boundaries

Fox-IT researchers found a bug in Cobalt Strike in 2019 that defenders could exploit to identify attacker servers. Many blue teams (defenders) have created specific alerts to fight against red teams (attackers who work for the same company), criminal organizations, and state-sponsored groups that use Cobalt Strike servers, and some say this could be due to that bug. You should note that a patch is now available to license holders, but, of course, not to hackers pirating the software. In addition, Cobalt Strike is supposed to be Windows-only malware. Unfortunately for defenders, Vermilion Strike seems to have removed all limitations. Vermilion Strike can communicate with all Cobalt Strike servers because it uses the same configuration format as the official Windows beacon, so it can now apply to an extensive range of servers and networks.
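To make the DNS channel concrete for defenders, here is a benign sketch of the packing step only: base64-encoding a task and splitting it into the 255-byte character-strings that a TXT record allows. The real beacon AES-encrypts the struct before encoding; encryption is omitted here, no network traffic is generated, and the helper names are invented.

```python
import base64

MAX_TXT = 255  # each character-string in a DNS TXT record holds at most 255 bytes

def pack_txt(task: bytes) -> list:
    """Base64-encode a task and split it into TXT-sized strings,
    as a DNS TXT-based channel must."""
    b64 = base64.b64encode(task).decode()
    return [b64[i:i + MAX_TXT] for i in range(0, len(b64), MAX_TXT)]

def unpack_txt(strings: list) -> bytes:
    """Reassemble and decode what pack_txt produced."""
    return base64.b64decode("".join(strings))
```

The point for defenders: because the payload rides inside ordinary-looking DNS transactions, monitoring that only inspects HTTP traffic never sees it; unusually long or frequent TXT lookups are the observable artifact.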
VirusTotal Failed to Detect the Malware

Based on telemetry, the researchers discovered that the attack has targeted various sectors worldwide since last month, including telecom companies, government agencies, financial institutions, and advisory companies. What makes this attack impressive is not only the port to Linux, even if that is undeniably rare and noticeable, but the use of the malware in actual attacks on multiple targets, including security-aware organizations.

A New Weapon for APTs

Advanced Persistent Threats (APTs) are particularly sophisticated actors who can maintain undetected, unauthorized access for months and even years. A Linux variant of a dangerous malware with a very low detection rate can be considered a persistent threat, and this new malware likely benefits advanced threat actors. Besides, the limited scope of the attack and the fact that Vermilion Strike has not been found in any other attacks, at least for now, also suggest advanced actors, such as criminal organizations or state-sponsored hackers. In any case, Intezer experts predict this won’t be the last Linux variant, as Linux servers are prevalent in cloud computing environments.

Further reading: Top Vulnerability Management Tools
An EHR (Electronic Health Record) is an individual’s official health document that is shared among multiple facilities and agencies. An EHR is a digital version of a patient’s paper chart. EHRs are real-time, patient-centered records that make information available instantly and securely to authorized users. While an EHR does contain the medical and treatment histories of patients, an EHR system is built to go beyond standard clinical data collected in a provider’s office and can be inclusive of a broader view of a patient’s care. EHRs are a vital part of health IT and can:

- Contain a patient’s medical history, diagnoses, medications, treatment plans, immunization dates, allergies, radiology images, and laboratory and test results
- Allow access to evidence-based tools that providers can use to make decisions about a patient’s care
- Automate and streamline provider workflow
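The record contents listed above can be sketched as a simple data structure. This is illustrative only: real EHR systems follow interoperability standards such as HL7 FHIR, and every field name here is invented.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class LabResult:
    test: str
    value: float
    unit: str

@dataclass
class EHR:
    """Toy model of the record contents listed above; not a real EHR schema."""
    patient_id: str
    diagnoses: List[str] = field(default_factory=list)
    medications: List[str] = field(default_factory=list)
    allergies: List[str] = field(default_factory=list)
    immunization_dates: dict = field(default_factory=dict)  # vaccine -> ISO date
    lab_results: List[LabResult] = field(default_factory=list)

# Building up one patient's record over time, as multiple facilities would.
record = EHR("p-001")
record.diagnoses.append("hypertension")
record.immunization_dates["influenza"] = "2022-10-01"
record.lab_results.append(LabResult("HbA1c", 5.6, "%"))
```

The "shared among multiple facilities" property is exactly why standardized field definitions matter: every system touching the record must agree on its structure.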
Since its first introduction 15 years ago, the Internet of Things (IoT) has become one of the hottest topics of recent years, with thousands of new IoT products launched into the market each year. Although the first IoT product was only a modified Coca-Cola machine, IoT is now a part of our everyday lives. During Cloudbric’s visits to the Cloud Expo in London, quite a number of people showed interest in discussing IoT security. Information security is often a neglected topic, but with IoT, it has begun to turn heads. Stories have already been published showing that security measures are needed for IoT products, as IoT hacks are on the rise, with smart car hacks, baby monitor hacks, children’s toy hacks and more running rampant.

Technically speaking, however, not all IoT products need security. Children’s bracelets that only sense a child’s mood through body temperature do not need as many security measures as bracelets that track a child’s location. Security becomes a concern only when the data would be valuable if compromised.

How Is Security Different for IoT Businesses?

Currently, there are three major types of security that businesses regularly use: physical, information and convergence. Physical security is the protection of personnel, hardware, programs, networks, and data from physical circumstances and events that could cause serious losses or damage. Examples of physical security include CCTV surveillance, security guards, access control protocols, etc. The second type of security is information security. Information security is a set of business processes that protects information regardless of how the information is formatted or whether it is being processed, in transit or stored. The most common methods of information security are encryption, malware detection, and digital signatures. The third and newest type of security is convergence security.
Although the quickest growing, convergence security is a new security concept, and its meaning is just what its name suggests: a convergence, or combination, of physical and information security. With convergence security, the security systems of a company are joined with the company’s IT solutions, allowing the company’s physical security to play an integral role in IT and become the ultimate solution to IoT security. Many people mistakenly believe that convergence security is difficult to develop; however, it is just the act of incorporating information security technology into existing industry systems. Convergence security is the act of customizing physical and information security to an industry’s protocol; it does not require a whole new concept of security algorithms.

For example, if a manufacturing factory is transitioning into a smart factory, where all the equipment is automated, security is needed to ensure that hackers do not interfere with manufacturing schedules and output. The factory can then work with an information security firm to make sure previous physical security measures are updated with the newly implemented information security scheme, thereby maintaining its existing security measures while updating protocols to meet industry convergence security standards.

Top Five Industries of IoT Security Development

The development of IoT can be categorized into five different industries. Just as industries vary in the type of data or functions they process, their security regulations also vary. For example, the automotive industry’s security regulations are much stricter than those of consumer electronics (i.e., baby monitors, refrigerators, etc.). Also, how far security solutions have actually developed depends on how far the industry itself has developed, because as the demand for a product or service skyrockets, so does the demand for its security.
For these reasons, five industries have been identified as foci in the demand for IoT security development.

1. Smart Cars

Probably the most pressing IoT security issue is smart car technology. Duk Soo Kim, the CTO of Penta Security Systems, said that “Security technology has been used to protect the assets of businesses and people, while smart car security protects those people’s lives.” It is clear that hacking smart cars or transportation/traffic information systems can directly lead to serious physical damage and/or casualties. The US Department of Transportation has already taken key steps toward requiring security technology to be installed in every smart car in the U.S. by proposing regulations for standard Vehicle-to-Vehicle (V2V) technology. Smart car security solutions such as AutoCrypt, CycurLIB, ArgusIDPS and Aerolink are already available in the market.

2. Consumer Electronics

Consumer electronics are the most common IoT products. From major tech conferences, such as CES and the Internet of Things World Forum, to television commercials, you can see that IoT is quickly becoming a part of our daily lives. Although we have seen a surge in consumer electronics hacks in the past couple of years, the focus of smart consumer electronics remains “connectivity,” with little focus on security development. For example, home appliance manufacturers call their new refrigerators a “family hub” since items are more connected, but they often don’t highlight how the data being collected is protected. Much to our surprise, reports of refrigerators sending spam began circulating in 2014, awakening the public to the dangers of what is called a thingbot.

3. Smart Office

Smart offices, also known as smart buildings or smart businesses, are a rising trend among companies.
With the rising concern that smart offices are an easy target for hackers, it is imperative to develop smart office security, as hackers can affect a business’ productivity when they access a building’s communications system. Security for standard buildings has been incorporated in the past; however, smart offices now involve managing and restricting access at the physical, remote, network, and device levels.

4. Smart Factory

A smart factory is a factory with a fully integrated automation solution in its facility. In smart factories, industrial control systems (ICS), which are computer-based systems, are installed to monitor and control industrial processes such as power, oil and gas pipelines, water distribution and wastewater collection systems. The most used type of ICS is Supervisory Control And Data Acquisition (SCADA), which allows factory workers to simplify their operational duties by relying on electronic communications instead of local documents. Despite its convenience, SCADA is not completely secure, as the Stuxnet attack proved: the Stuxnet malware was uploaded to SCADA control systems and sabotaged major confidential projects as well as industrial control system software.

5. Smart Grid

A smart grid incorporates Information and Communications Technology (ICT) into existing electric grids so that information about producing and consuming electricity is exchanged in real time. According to the U.S. Congressional Research Service, attacks on the U.S. power grid continue to increase. As countries’ economies, governments and security rely on electricity, there is a need to build strong convergence security around smart grids’ industrial control systems.
These five categories vary in terms of the services and information they process, but any company that deals with people’s safety (both physical and digital) must invest in security. For products integrated with IoT, physical or information security alone is no longer enough. As the demand for IoT products and services increases, these companies need to commit to creating convergence security systems that completely secure customers’ products and private information.
The NIST framework is a voluntary set of standards, guidelines and practices which helps organizations better manage their cybersecurity risks. The NIST Cybersecurity Framework fosters a risk management strategy among both internal stakeholders and external parties involved in an organization’s network or digital assets, while also helping larger companies integrate these efforts into one plan for greater effectiveness.

The Framework is organized by five key Functions: Identify, Protect, Detect, Respond, and Recover. These five terms, when considered together, provide a comprehensive view of the lifecycle for managing cybersecurity.

The Five Key Functions of the NIST Cybersecurity Framework

The first step in the NIST Cybersecurity Framework is to identify your critical processes and assets. It is important to ensure that you’re able to continue with business as usual and not lose anything vital if a situation were to occur. Risk management is the process of identifying, assessing and documenting risks to assets. Risks can come from internal elements, such as an employee opening a phishing email and downloading a virus, or external ones, such as threat actors who aim to harm your business. Risk assessments can be performed by your IT professionals and will help you identify the best solutions for the risks your business faces. Contact CTG Tech today to schedule your FREE Risk Assessment.

When employees need access to information, computers and/or applications, it is important that they are granted only the access they need. Keeping track of what each employee has access to can prevent improper usage, sharing of confidential information, or destruction of files. Should your employees require access to be granted or removed, it will be a simple fix from your MSP (Managed Service Provider).

Your company should have a way to detect unauthorized or suspicious activity on your network as well as in physical environments.
CTG Tech takes time to understand how each client’s business operates, in order to know how the data is supposed to flow. Staying aware of and understanding how critical business applications work allows us to be better equipped if something goes wrong or issues arise. Ensuring each person understands what is expected of them allows a plan to be executed successfully. The more prepared your business is, the faster (and cheaper) recovery will be. Maintaining an updated template for your response plan will ensure your company is prepared should any incident arise. A crucial part of recovery is communicating all necessary information through the appropriate channels. When an incident occurs and is not handled properly, your business’s reputation can suffer.
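A risk assessment of the kind described above often boils down to scoring likelihood against impact and documenting the result. Here is a minimal sketch; the five-point scales and the example risks are invented for illustration and are not taken from NIST.

```python
from dataclasses import dataclass

@dataclass
class Risk:
    asset: str
    threat: str
    likelihood: int  # 1 (rare) .. 5 (almost certain), assumed scale
    impact: int      # 1 (negligible) .. 5 (severe), assumed scale

    @property
    def score(self) -> int:
        # Classic likelihood-x-impact scoring.
        return self.likelihood * self.impact

def prioritize(risks):
    """Highest-scoring risks first, so remediation effort goes where it matters."""
    return sorted(risks, key=lambda r: r.score, reverse=True)

# A tiny documented risk register, mirroring the examples in the text.
register = [
    Risk("email", "phishing-delivered malware", likelihood=4, impact=4),
    Risk("file server", "ransomware", likelihood=3, impact=5),
    Risk("website", "defacement", likelihood=2, impact=2),
]
top = prioritize(register)[0]
```

Keeping the register as data (rather than in someone's head) is what turns "identify" into something the other four Functions can act on.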
Amid efforts to expose cybersecurity vulnerabilities in a network before an attacker does, penetration testing, also referred to as a pen test or a white-hat attack, continues to gain momentum as a viable means to detect weaknesses in an organization’s network infrastructure. If the term penetration testing is foreign to you, it is not as intrusive as it sounds. The objective of a penetration test is to provide information technology (IT) and system managers with critically needed intelligence regarding their organization’s security vulnerabilities. Whether the testing is performed manually or via sophisticated automation tools, it is best conducted by a third party that can use the same tools many hackers rely on. Many of these tools are widely available, arming testers with a better understanding of how they can be used to attack an organization.
NCC Group uncovers Bluetooth Low Energy (BLE) vulnerability that puts millions of cars, mobile devices and locking systems at risk

We’ve conducted the world’s first link layer relay attack on Bluetooth Low Energy (BLE), the standard protocol used for sharing data between devices that has been adopted by companies for proximity authentication to unlock millions of vehicles, residential smart locks, commercial building access control systems, smartphones, smart watches, laptops and more. Our research shows that systems that people rely on to guard their cars, homes and private data are using Bluetooth proximity authentication mechanisms that can be easily broken with cheap off-the-shelf hardware — in effect, a car can be hacked from the other side of the world.

Through the research, we demonstrate, as proof of concept, that a link layer relay attack conclusively defeats existing applications of BLE-based proximity authentication, and prove that very popular products are currently using insecure BLE proximity authentication in critical applications. By forwarding data from the baseband at the link layer, the hack gets past known relay attack protections, including encrypted BLE communications, because it circumvents upper layers of the Bluetooth stack and the need to decrypt.

“What makes this powerful is not only that we can convince a Bluetooth device that we are near it—even from hundreds of miles away—but that we can do it even when the vendor has taken defensive mitigations like encryption and latency bounding to theoretically protect these communications from attackers at a distance,” said NCC Group Principal Security Consultant and Researcher, Sultan Qasim Khan, who conducted this research. “All it takes is 10 seconds—and these exploits can be repeated endlessly.
“This research circumvents typical countermeasures against remote adversarial vehicle unlocking, and changes the way engineers and consumers alike need to think about the security of Bluetooth Low Energy communications,” he added. “It’s not a good idea to trade security for convenience – we need better safeguards against such attacks. This is not a traditional bug that can be fixed with a simple software patch, nor an error in the Bluetooth specification. In fact, this research illustrates the danger of using technologies for reasons other than their intended purpose, especially when security issues are involved – BLE-based proximity authentication was not originally designed for use in critical systems such as locking mechanisms.”

There are steps that can and should be taken to guard against these attacks:

- Manufacturers can reduce risk by disabling proximity key functionality when the user’s phone or key fob has been stationary for a while (based on the accelerometer)
- System makers should give customers the option of providing a second factor for authentication, or user presence attestation (e.g., tap an unlock button in an app on the phone)
- Users of affected products should disable passive unlock functionality that does not require explicit user approval, or disable Bluetooth on mobile devices when it’s not needed

Potential attack surface

Since the technology is so common, the potential attack surface is vast. It includes:

- Cars with automotive keyless entry – an attacker can unlock, start and drive a vehicle. NCC Group has confirmed and disclosed a successful exploit of this for Tesla Models 3 and Y (over 2 million of which have been sold)
- Laptops with a Bluetooth proximity unlock feature enabled – this attack allows someone to unlock the device
- Mobile phones – a criminal could prevent the phone from locking
- Residential smart locks – an attacker could unlock and open the door without mechanically picking or cutting the lock.
NCC Group has conducted a successful exploit on Kwikset/Weiser Kevo smart locks, which has been disclosed to the vendor - Building access control systems – allowing an attacker to unlock and open doors while also impersonating someone else (whose phone or fob is being relayed) - And asset and medical patient tracking – someone could spoof the location of an asset or patient “This research offers more evidence that risks in the digital world are increasingly becoming risks in the physical world as well. As more and more of the environment becomes connected, the potential keeps growing for more attackers to penetrate cars, homes, businesses, schools, utility grids, hospitals, and more,” Khan concluded. NCC Group disclosed details to companies behind the products tested before issuing research publicly, and has discussed mitigation approaches with the Bluetooth Special Interest Group (SIG). NCC Group services Today’s hardware and IoT producers must consider security in all phases of commercial product development, from first design to end-of-life. NCC Group Hardware and Embedded Services leverage decades of real-world engineering experience to provide pragmatic guidance on architecture and design, component selection, and manufacturing.
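The first mitigation listed above, disabling passive unlock once the phone or key fob has been stationary for a while, can be sketched in a few lines. The class name, thresholds, and injected clock below are illustrative assumptions, not any vendor's actual implementation:

```python
import time

STATIONARY_TIMEOUT_S = 60.0   # illustrative: seconds of stillness before passive unlock is disabled
MOTION_THRESHOLD_G = 0.15     # illustrative: accelerometer delta that counts as "movement"

class PassiveUnlockGuard:
    """Disable BLE passive unlock once the key device has been still for a while."""

    def __init__(self, now=time.monotonic):
        self._now = now
        self._last_motion = now()

    def on_accelerometer_sample(self, delta_g):
        # Any reading above the threshold counts as the user moving the device.
        if delta_g >= MOTION_THRESHOLD_G:
            self._last_motion = self._now()

    def passive_unlock_allowed(self):
        # A relay attack against a phone left on a hall table fails this check:
        # the device has not moved recently, so proximity alone no longer unlocks.
        return (self._now() - self._last_motion) < STATIONARY_TIMEOUT_S
```

Note that this only narrows the attack window; the second mitigation (an explicit tap or other user presence attestation) would gate unlocks even while motion is recent.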
Source: https://newsroom.nccgroup.com/news/ncc-group-uncovers-bluetooth-low-energy-ble-vulnerability-that-puts-millions-of-cars-mobile-devices-and-locking-systems-at-risk-447952
Microsoft Excel: Level Two

Who Should Attend

All students should take our Microsoft Excel: Level One class first, unless you have intermediate-level Excel skills.

What You Will Learn

- Produce and link multiple spreadsheets.
- Practice with drop-down lists, formatting, conditional formatting, and embedding.
- Receive an introduction to data analysis with Pivot Tables.
- Learn to use Excel as a database.
- Create and use range names.
- Select nonadjacent cell ranges.
- Use the AutoCalculate feature.
- Display and print formulas in a worksheet.
- Identify and fix formula errors.
- Learn advanced functions for Graphs and Charts.
- Format a Data Series and Chart Axis.
- Add headers, footers, and page numbers to a worksheet.
- Reference external data sources.
- Protect and hide a worksheet.
- Save a "reader friendly" Custom View.
- Create and use templates.
- Add records to a list using a Data Form.
- Work with AutoFormat, create a custom number format, find and replace formatting, and use conditional formatting.
- Work with and understand Absolute and Relative cell references.
- Preview and print worksheets and use advanced print options.
- Hands-on exercises with time set aside to practice what you have learned.

Why You Should Attend

Our training maximizes learning and allows for more "hands-on" practice. You also receive a copy of Teach Yourself Visually Excel - a user-friendly, color manual.
Source: https://www.federaltraining.com/courses/Microsoft_Office/Excel_2016_2019_training/intermediate.aspx
On government networks, edge locations now produce huge amounts of data. The amount of information collected and stored, nearly block by block, has skyrocketed, thanks to sensors, video cameras and multiple other connected resources. Data is often at its most valuable when it is extracted, analyzed and acted upon in a timely manner, especially if recommended actions can be delivered in near real-time.

In recent years, government edge computing has focused on things like traffic flow, availability of parking spaces and city security. But with today's growing number of 5G cell towers and other network points of presence, edge computing can be leveraged by governments in new ways, extending additional citizen services based on location, with localized applications and personalized targeting. This is where edge computing, with its localized applications and analytical capabilities, can start offering specialized types of information targeted at average consumers.

For example, citizens will soon be able to use augmented reality (via their phones, headsets or tablets) to see what is available to them block by block in a city. This can be as basic as finding a parking spot, but it eventually will lead to more advanced solutions, such as seeing the local history of that block, learning what government services are available, and knowing where to find things like police stations, fire hydrants, potholes and other current information. And government maintenance workers can have an augmented reality view of pipes beneath a street or other maintenance issues, including block-by-block trouble tickets.

User Profiles Can Help Set Customized Information Views

Some citizens will likely cite privacy concerns and may not want to participate in government-provided edge services. But for those who do, specialized applications, using data analytics and artificial intelligence, can customize the data views that are served to specific end-user profiles.

Based on citizen-selected interests, citizens can receive highly localized data feeds with a recommended mix of both commercial and governmental information, spanning the range from weather and traffic alerts, to recommended restaurants (complete with recent health inspection reports), to the location of the last postal service pick-up in the neighborhood. Localized details also can include things like health services and the locations of available cardiac defibrillators.

While government edge services can offer highly customized citizen experiences, this type of geolocation-targeted citizen personalization is not without controversy. For example, it also can be tied to surveillance and facial recognition. But on the flip side, edge computing can support powerful solutions such as digital twin capabilities. Imagine having a digital twin view of every government vehicle that may need maintenance. Street maps also could show real-time views of traffic flows and detours.

Edge computing and virtual replications can be leveraged together to extract new business value. Highly localized views of digital twins can start to predict future events based on locally available data. In the future, this also will help assist self-driving cars, routing local deliveries, and making predictions related to crowd movement and security control. In some cases, datacenter-class processing capabilities may be needed at the edge, providing the processing power and advanced analytics that interaction-intensive applications often require. This can be particularly useful for edge-based Security as a Service.

Understanding the Government Network Edge

IDC defines edge as the technology-related actions performed outside of centralized datacenters. For government, the term especially refers to edge systems which serve as the intermediary between connected endpoints and the core IT environment. Modern government edge systems are distributed, software-defined, and flexible. Government is in a unique position to offer edge-based services across a set geographic area, with the potential for interactions by all citizens.

- Various levels of computing power, storage, analytics and artificial intelligence can be installed and used at the government network edge.
- Edge computing nodes can operate independently, or they can serve as part of a larger set of distributed systems.
- Systems can include "heavy edge," such as remote office/branch office capabilities, or "light edge," providing things like mobile access or block-level controls for government functions and citizen services.

Geographically enhanced government edge systems are just one of the trends influencing the future growth and capabilities of government computing. The growing availability of localized data, the expansion of network points of presence, and the proliferation of artificial intelligence and machine learning are just now tapping into the vast potential of localized processing and the personalized services that can be brought to the network edge, whether in a city, or on highways or national borders. And, depending on the level of systems integration desired, additional actions can be triggered, from security alerts to personalized citizen notifications and services.

Different government agencies are now leveraging edge computing, analytics and AI to make real-time decisions that will have broad citizen impact. For the next several years this will continue to be a high-growth area for government spending. Edge computing will continue to grow in concert with the roll-out of 5G connectivity and new capabilities built into edge-based machine learning and AI solutions. Both can help speed governments' ability to respond to rapidly emerging data trends. For an overview of edge computing for government, see our document, Edge Computing, 5G, and AI: Government's Exponential Perfect Storm.

To get started, government agencies should look at what data they already are collecting at remote locations. How fast is the collection growing? Can you improve efficiencies by processing data at the network edge instead of moving it to a central data center? We expect data-driven edge initiatives will be a transformational effort for local and national governments in the coming years.

Agencies that invest now can build a solid foundation for long-term edge processing options. This includes transforming the citizen experience: anticipating needs and shepherding user experiences to where services can be found quickly, and also developed and rolled out quickly, essentially driving the evolution of new data-driven services.

And keep in mind that locally distributed edge computing can create its own security and management issues. We take a closer look at this in the document, Government Cybersecurity Challenges at the Edge.

For more guidance on how government agencies can use edge computing to extract, analyze and make real-time decisions on data, trends influencing the future of government edge systems, and edge security, read the new eBook, "Extending Missions & Finding Business Value In Government Edge Computing". Click the button below to download the eBook.
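One way to answer the "process at the edge or ship to the datacenter" question is local aggregation: reduce a window of raw readings to a small summary record and forward only that. The sensor feed and summary format below are invented for illustration:

```python
from statistics import mean

def summarize_readings(readings, alert_threshold):
    """Reduce a window of raw sensor readings to a small summary record,
    so only kilobytes (not the raw stream) travel to the central datacenter."""
    summary = {
        "count": len(readings),
        "min": min(readings),
        "max": max(readings),
        "mean": round(mean(readings), 2),
    }
    # Trigger an action locally, in near real-time, instead of waiting on the core.
    summary["alert"] = summary["max"] >= alert_threshold
    return summary

window = [41.2, 40.8, 44.9, 51.3, 42.0]   # e.g., one minute of traffic-speed samples
print(summarize_readings(window, alert_threshold=50.0))
```

Whether this pays off depends on the ratio between the raw stream and the summary, which is exactly the "how fast is the collection growing" question above.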
Source: https://blogs.idc.com/2021/04/28/new-missions-at-the-government-edge-channeling-citizen-services-with-local-precision/
Email phishing is one of the most common cyber threats, and yet many businesses still fall for even the most basic phishing techniques. In this article, we are going to answer the question: how do you protect yourself from phishing? Protection from email phishing is a combination of knowing what the common phishing techniques are and implementing defense strategies that will save your inbox (and network) from harmful attacks.

What Is A Phishing Email?

A phishing email is a type of cybersecurity attack used to steal data or gain access to an organization's network. Phishing emails are usually targeted at lower-level employees to acquire login credentials, company data, and other sensitive information. Hackers can obtain gateway information that will help them penetrate an institution's defenses.

Phishing emails can also be used to infect endpoint devices with malicious code. Links in these emails can lead to malicious websites that will install malware into the system, which hackers can utilize to initiate attacks.

Email Phishing In Numbers

- Email phishing is the leading cause of data breaches. Almost one-third of data breaches stem from a phishing attempt.
- One of the most common phishing emails involves Dropbox. These emails typically have a 13.6% click rate.
- Even Microsoft Office files aren't safe. 48% of harmful email attachments come as Office files.
- 66% of malware is installed by clicking a link in a malicious email or opening an email attachment.

Email Phishing: Who Is At Risk

- Low-level employees: Clerks, secretaries, customer representatives, and similar members of your organization are common targets of email phishing because hackers assume they don't know any better. These employees usually have access to customer data, passwords, and other sensitive information. By taking advantage of these employees' ignorance or laziness, attackers can get the information they need to launch more complex attacks.
All it takes is for one unsuspecting employee to click a link, and this malware can take hold of your business' entire network.

- Small business owners: Too many small business owners make the mistake of thinking that their business is too small to be on anyone's radar. This is especially true for businesses with fewer than five people, or solo entrepreneurs who are eager to open fake customer inquiries, job applications, and other interactions through email.

How Do You Know An Email Is Fake?

To the uninitiated, phishing emails look completely legitimate and valid, which is exactly why it's so easy to fall for them. Back when phishing attempts were a little less sophisticated, common tells included:

- Grammatical errors or inconsistent grammar
- Misspelled words
- Incorrect punctuation
- Improper formatting (double spaces, weird indentation)
- Complicated URLs
- No branding or signature
- Unknown sender address and website

Nowadays, a phishing email can look completely legitimate, complete with branding (logo) and even a seemingly valid email address. The good news is that there will always be small details that reveal whether or not the email is malicious.

One way is to look for inconsistencies in the messaging. For instance, a company like Nokia wouldn't be so informal with its customers. The line "And you may want to keep any maps, locations, email, music, reviews, or other stuff that is associated with your account" is a clear indication that the email was sent by a non-affiliated third party.

Next, take a look at the sender's address. Even if the sender mentions the brand, keep in mind that companies (even the small ones) use their own domain. A legitimate email will come directly from an address at the company's own domain (for example, support@company.com), not from a free provider or lookalike address. Extended domains such as signin.portal.facebook.com or infouniversityportal.com are designed to trick users into thinking these are legitimate domains.
When you see convoluted email addresses, try to verify with the supposed sender (if the email is posing as a familiar contact) or consult your IT department.

Finally, the signature reads "The Nokia account team". Emails concerning customers are sent by customer representatives, who are obligated to sign with their names and their position so they can be properly identified. This may be a small detail, but the inconsistent lower case 'a' and 't' are telling signs that this was not sent by an official member of the Nokia company.

Those are just some ways of telling whether the email you received is a phishing email. Ultimately, you need to pay attention to all the details in the email to determine if it's malicious. Here are some things you can look at:

- Domain (the part of the sender's address after the @)
- Sender name
- Inconsistencies in style, body, messaging, formatting
- Unknown characters (e.g., the Greek letter α in place of a to fake domains)
- Links and attachments

What Are The Common Forms Of Phishing?

If you thought phishing emails were a big company problem, think again. Small to medium businesses are among the main targets of fake and spam emails precisely because hackers know these businesses don't have the security in place to protect their systems from an attack. Regardless of how big your organization is, the phishing techniques are generally the same. The first step to preventing hackers from accessing your system is understanding common phishing techniques:

Posing As An Angry Customer

A strong sense of urgency can mobilize employees to act rashly. A hacker posing as an angry customer works well in most cases, since customer reps may feel the pressure to respond to the inquiry without validating the request. Common tactics include attaching malicious attachments and links masquerading as important documentation, which can include bank statements, receipts, and even authorization letters allegedly authorized by company executives.
Installing an anti-malware system that will scan all documents and links is imperative in keeping your inbox safe. Before clicking links, make sure to hover without clicking, which will display the real link address in the bottom left corner of your screen.

Angry "customers" may also request password changes and documentation, applying pressure to employees to expedite the process. One of the best ways to verify this kind of request is to put multifactor authentication in place. Ask customers to verify their identity with government IDs, transaction history, and password to confirm that you are working with customers only.

Email From An Executive

Phishing emails don't always come from the outside. It's easier to fall prey to these hacking schemes if you think your boss is asking you to carry out a task. As with the customer scenario, the same psychological trick works here: because the email is coming from an executive, employees rarely want to verify it, to avoid seeming redundant or unreliable.

Executive requests often come in the form of money transfers or sharing of login credential details. Before executing these commands, ensure that the email domain is a valid one. To be safe, it's always good practice to verify with your boss on-call or in person before processing huge amounts of money or handing over other sensitive information.

Vendor Or Partner Email

Emails from other service or product vendors and partners may be subject to even less scrutiny, especially if your business has a longstanding relationship with them. However, it's exactly this trust that hackers use as leverage in order to penetrate your business' defense network. Big companies like Apple and Amazon have been used as bait to hook in unsuspecting employees. Hackers typically use scare tactics like account deletion or raising an alarm over account security in order to get an employee to click through a link or download an attachment.
When dealing with partners or vendors, use secure platforms for money transfer and documentation. Whenever major problems arise involving payment or logging in, call the vendor representative to verify the request using phone numbers and emails on the official website (not what is suggested in the email), or forward the email to a manager or supervisor for verification.

Zombie Phishing

Zombie phishing is a new type of tactic where hackers use compromised email addresses to reconnect with old contacts. Because the sender is familiar, victims instantly trust these email addresses and whatever content comes with them. Before responding to old contacts, pay attention to the consistency of their message. Look through old email threads to verify whether the email is a phishing hack or a legitimate email.

Be especially vigilant of email contacts that have been inactive for more than three years. Phishers will usually send a new email using an old thread in order to reinforce the idea of security. As always, practicing multifactor authentication will help you verify the identity of the person you are supposedly in correspondence with.

What To Do When You Get A Phishing Email

Phishing emails can't harm your computer simply by being in your inbox. Some form of interaction — whether it's clicking the link, clicking the button, or downloading the attachment — is necessary to initiate the malicious attack. When you receive an email that you suspect is some form of phishing, the best thing to do is to get in touch with your IT department. They can help distinguish whether the email is fraudulent or not.

As a business owner, implementing cybersecurity measures such as anti-virus and anti-malware installation on all endpoint devices is non-negotiable when it comes to protecting your data. Just as important, however, is regular employee training. At Abacus, we conduct phishing training to give employees on all levels insight into what a phishing attempt looks like.
With regular testing, we can instill vigilance in employees and train them to be more watchful when dealing with emails and other forms of electronic correspondence.

What Is The Best Defense Against Phishing?

A robust cybersecurity network can protect your inbox, but the best defense practices are still dependent on employee behavior. There's only so much software can do to safeguard your computer from these harmful attacks. At the end of the day, the best defense against phishing is not falling for it in the first place.

Verify With The Sender

For unusual or unprecedented requests, verifying with the sender is always the best way to go. Contact them through a verified channel (phone, social media) to see whether the email is real or not.

Hover Over The Link

Links are not always what they appear to be. In order to shorten complicated URLs, hackers can use services like Bitly to generate anonymous short links, which won't reveal much about the link's destination. Always hover over the link to ensure that the link address is the same as what has been pasted. Here's an example: a link may appear familiar and significant, like www.facebook.com, only to lead to a malicious third-party site. If you hover over our example link, you'll see that the real domain is facebook.notreal.com. Pasting links for "transparency" purposes while hiding the real address is a common tactic to get users to click malicious links.

Don't Open Any Files

Don't open files from untrusted senders. Run emails through an anti-virus program so it can scan the attachments before they are downloaded. For safety purposes, it's better to ask unknown contacts to include their photos and similar documents in-line instead of as an attachment.

Don't Trust SSL Certificates

Not all phishing websites look gaudy and terrible. Nowadays, phishing websites look exactly like the websites they're copying.
To verify a site, most users look at the SSL certificate (the little padlock next to the domain), which supposedly assures that the website is legitimate. In reality, an SSL certificate doesn't guarantee that a website is not a phishing site. SSL certificates are relatively easy to obtain: anyone who wants to build a website only has to pay for one as an add-on. So the next time you're redirected to a suspicious website, check out the little details on the site (without clicking anything) to verify whether it's real or not.

Email Security In A Nutshell

At the end of the day, email security starts with knowing the best practices for dealing with electronic correspondence. Diligent employee training and a reliable anti-malware system are what you need to protect your business from malware attacks. Having a disaster recovery plan will allow you to revert to previous versions of your system, should a malware attack prove successful.
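Two of the checks described in this article (comparing the visible link text with the real destination, and spotting lookalike non-ASCII characters such as the Greek α in a domain) can be partially automated. The function names and the crude two-label domain heuristic below are illustrative assumptions, not a production filter; real code should consult the Public Suffix List:

```python
from urllib.parse import urlparse

def registered_domain(url):
    """Crude last-two-labels heuristic (real code should use the Public Suffix List)."""
    host = (urlparse(url).hostname or "").lower()
    parts = host.split(".")
    return ".".join(parts[-2:]) if len(parts) >= 2 else host

def has_non_ascii(domain):
    """Flag lookalike characters such as the Greek α standing in for 'a'."""
    return any(ord(ch) > 127 for ch in domain)

def suspicious_link(display_text, href):
    """True if the visible text names one domain but the href points elsewhere,
    or the href's domain contains non-ASCII lookalike characters."""
    target = registered_domain(href)
    if has_non_ascii(target):
        return True
    # If the visible text looks like a URL, compare its domain with the real target.
    if "." in display_text:
        shown = registered_domain(display_text if "//" in display_text
                                  else "http://" + display_text.strip())
        return shown != target
    return False

print(suspicious_link("www.facebook.com", "http://facebook.notreal.com/login"))  # True
print(suspicious_link("Our site", "https://goabacus.com/"))                      # False
```

This is the same logic as hovering over a link: the article's facebook.notreal.com example is flagged because the displayed facebook.com does not match the real registered domain.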
Source: https://goabacus.com/phishing-emails-everything-you-need-to-know-to-protect-your-business/
Blockchain offers a radical alternative to the data ownership war by creating a public data marketplace, says Lawrence Lundy, head of research at Outlier Ventures.

OPINION

It's well understood that there's little economic incentive for most people to do much more than give their data away for free. However, one of the promises of blockchain is to unpick this problem, by offering a data layer that is capable of fulfilling the original decentralised vision of the internet.

Blockchains: the new data infrastructure

Arguably, public blockchains are in some ways worse than existing databases. They are slower, have less storage, use more energy, and are less private. But these are design choices made to improve one feature: censorship resistance. This resistance is against both governments and private companies. For cases in which state censorship or corporate control of core infrastructures must be avoided, it is worth using public blockchains.

For example, is it a good idea to allow Uber to control all public transport, Amazon to control all logistics, or Google to have control over all genomic data? In today's digital world, data is power, and therefore ownership and control of data is ultimate power. When all communications, money, and healthcare become digital, the data infrastructure will be too powerful to be controlled by one nation or company. And the good news is we now have the tools to ensure that no single entity controls data in that way.

A data Airbnb

Never has so much data been available for collection and analysis. But the challenge is that everyone wants it. As sensors are embedded in everyday objects, and as we move to a world of ubiquitous computing, everybody is fighting to 'own' the data. But that is yesterday's war. Global data infrastructures should be a public good, and they can be, using blockchain. Blockchains are an open-source, shared data layer in which everyone can view and edit data based on pre-agreed rules.
Creators of data will own their data and use blockchain and other decentralised technologies to rent and sell that data in much the same way they rent spare rooms on Airbnb.

At the start of 2018, we are beginning to see the emergence of this new data infrastructure. We aren't there yet: we still need to process more transactions, at faster speeds, and use less energy in doing so. Data needs to be private, but stored in an accessible way, and shared across different blockchain flavours. This will ultimately provide the foundation for a new, global data marketplace. Individuals, organisations, and machines will be able to buy and sell data on a public market, finally providing a business model for data creators rather than data hoarders. And, via this system, both individuals and organisations will be able to use data exchanges to earn money by sharing data.

As more data comes online, we expect increasing amounts of it to be accessible to data scientists and AI algorithms, enabling greater access to AI and reducing the barriers to entry. Blockchain-based data exchanges will provide the infrastructure for individuals, organisations, and machine data creators to sell their data.

The end of digital monopolies?

2018 is seeing the birth of this global data-sharing and monetisation network. Data creators will begin to earn money from uploads, Likes, retweets, and steps. This is a far more profound change than it may seem. Blockchain-based networks won't just disrupt particular companies; they will go much further by disrupting a digital norm: the assumption that we should be giving away our personal data for free.

Digital monopolies, including Facebook, Google, and Amazon, get data from users for free. Every Like, search, and purchase feeds the learning system to further improve the algorithms, in turn bringing in more customers and engagement. In value chain terms, data is supply, and AI algorithms are demand.
Digital monopolies are searching everywhere for more and more data to feed their algorithms. This is the real reason Facebook bought WhatsApp and Instagram, and Microsoft bought LinkedIn; and it's why Google is investing in self-driving cars and Google Home, and Amazon has produced its Alexa Echo and Dot.

Blockchains and decentralised public infrastructures change this game. Blockchains reduce the value of hoards of private data. They make proprietary datasets much less valuable, because as more and more machines, individuals, and organisations use a public data infrastructure, a global data commons becomes more attractive to data sellers. As this data commons grows with more datasets, it will attract more data buyers, creating powerful network effects. In other words, data becomes more of a commodity, and it is no longer the most valuable point in a market.

As a result, firms that control the supply – i.e. control data – no longer dominate markets. As data becomes less valuable to those organisations, the customer relationship will become more important. Startups and incumbents alike will compete for customers' data based on trust.

The global data commons will also mean that individuals can choose where their data is sold or rented. At first, this will attract individuals who care about privacy and self-sovereign data. However, machines will soon follow as their operators and owners look for new revenue streams. Some organisations, especially in the public sector, will be attracted by the non-corporate-controlled nature of this decentralised infrastructure, as well as by the cost and liability reductions in not storing consumer data. Smaller organisations and startups will sign up to access standardised data that would otherwise take too long, or cost too much, to acquire.

Disrupting the disruptors

Today, most data is siloed, with no business model for data creators to monetise it.
However, blockchain technology and other decentralised systems are emerging as a new data infrastructure to support machines, individuals, and organisations in getting paid for the data they generate.

Disruption comes in many guises. For Google and Facebook, disruption will no longer come from some upstart they can acquire: it will come from the move away from a centralised model of data ownership, by which they have grown powerful, to a decentralised, collective model of data sharing. Ultimately, this will lead to the downfall of digital monopolies that are only powerful because they collect and control more data than anyone else. Blockchain-based data infrastructures, including data exchanges, will commoditise data and help to realise the vision of a global data commons.

Internet of Business says

Lundy makes a persuasive and attractive case, as many are currently doing. The underlying issue is consent – consent as a new currency for the digital age. Whether the mechanism for delivering that is blockchain, or a personal API in which the terms of our consent can be managed as a form of digital rights – citizen data-backed CSR, perhaps – its proponents believe such a system would prevent hyper-companies from owning the data keys to everything.

Whether these visions of self-owned digital identities (aka the quantified self) are Utopian, or are being speculated on by investors who fail to hedge against the new system being abused, is a moot point. For just such a radically different view, check out this article in The Atlantic, which suggests that blockchain could beat a path toward forced behaviours and a new authoritarianism. What do our readers think?

IoTBuild is coming to San Francisco, CA on March 27 & 28, 2018 – sign up to learn all you need to know about building an IoT ecosystem.
Source: https://internetofbusiness.com/use-blockchain-build-global-data-commons/
Western Africa, specifically the Economic Community of West African States (ECOWAS), could be on the verge of a major change. Fifteen countries in the region have agreed to adopt a single currency, the Eco, as of 2020, after a thirty-year negotiation. While adoption will not be immediate and absolute for all, the idea is a gradual rollout that will nonetheless unify the fifteen states and allow for a united advancement of monetary policy and, it is hoped, mutual economic benefits.

The world's only currently existing unified currency, the Euro, has hardly been smooth sailing, but there is no denying that it has unified much of the European continent's monetary policy and pushed the contributing economies to a higher level. However, in that situation the participants were half developed, strong economies and half emerging ones, so the currency was able to work for all. Among the ECOWAS nations, the situation is arguably much different: Ghana and Nigeria are the only two truly advanced economies, and even of these two, Nigeria would be expected to dominate and unbalance the arrangement.

Could the Eco be a success? Certainly, it would allow members to shield themselves against shocks, and bring the strength of fifteen to solve problems and create new opportunities. However, there are existential problems with the currency: only Liberia currently meets all the criteria designed to give the Eco a firm start, and there is concern that the real problem is one of diversification of industry, not of currency itself. Nevertheless, the Eco is (even if only in ideological form) a step forward, and it signifies that the ECOWAS area is on track to become Africa's most important region for trade, development and investment in the years ahead. What remains to be seen is how all the countries will cope.

This latest report includes:

- Economy & Business Environment
- Oil & Energy
- How to Succeed in Business
The crumbling of network boundaries and the rapid growth in mobile computing have brought with them some serious questions of security and control. In many cases, existing security programmes simply cannot cope with the way mobile computing has developed; it presents a whole new set of security issues. Some mobile users, even when using company equipment, may ignore the security policy procedures that are normal within the network, or they may simply lack the skills or knowledge to make their mobile devices secure. The problems that mobile users can encounter or bring back into the network include identity theft, spyware and viruses. They may install unauthorised software such as Macromedia players, the Google search bar, instant messaging (IM) or Skype. They may use peer-to-peer networks for music and film downloads, which has a range of legal and security implications. Security drift is another issue: anti-virus software may not be updated, for example. This is particularly prevalent where machines are used by an employee's family. With most users having administrator rights, it is easy for them to switch off personal firewalls, decline AV updates, and so on. For devices outside the network, it is also often difficult to ensure that they are updated with security patches. Then there are the better-known dangers of using wireless mobile connections. These include broadcasting log-ins, passwords and key company data; breaching data protection regulations; the illegal use of wireless bandwidth by others (with all the legal implications this entails); theft of personal information (including passwords); identity theft; and opening up the corporate network to data theft and financial fraud. There are a number of solutions to these issues, but I am focussing on one in particular which brings control of the mobile device back to a company's IT department – usually the owner of the device.
This solution protects an organisation from the effects of use, misuse, negligence and abuse by users, which can include use by their families and sometimes even their friends. Endpoint security (EPS) systems control the individual device accessing the network. They come in varying shapes and combinations, but they basically cover three elements: policy management, access rights and network protection. Some solutions combine anti-virus with firewall technologies. Some combine intrusion prevention, standard firewall rules and application protection. Others focus on regulating the applications running on the system. A number also manage access rights based on the security status of the device – e.g. is the connection wireless? EPS solutions can determine the policies that the remote/mobile connection device can be used for and apply those policies. Coupled with central management, they can also ensure that firewall, AV and security patches are applied when they should be. Many EPS solutions enable you to decide which level of access to provide, based on the current level of security of the user's machine. This approach lets you reclaim management of your remote kit, decide what policies to implement, secure it, and protect your network. Some products, such as SkyRecon's StormShield, will allow you to determine the access rights you give to users depending on where they're connecting from (e.g. a wireless hotspot). Or you may control access depending on the security status of the device. For example, you will probably want to restrict access for someone running a machine that hasn't applied the latest patches. You also need to consider the level of control you have over remote users. If staff or customers are connecting using their own machines, you will have a different level of control than if they're using company equipment. In that case, access rights become a more important element than remote policy management.
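As a toy illustration of the posture-based access decisions described above, here is a minimal sketch in Python. The posture fields, access-level names and policy thresholds are assumptions for illustration only, not any vendor's actual policy model:

```python
from dataclasses import dataclass


@dataclass
class DevicePosture:
    """Security status reported by (or probed from) the connecting device."""
    firewall_on: bool
    av_updated: bool
    patched: bool
    on_wireless_hotspot: bool


def access_level(p: DevicePosture) -> str:
    """Map a device's security posture to a network access level."""
    if not (p.firewall_on and p.av_updated):
        return "quarantine"   # remediation network only
    if not p.patched:
        return "restricted"   # e.g. webmail only until patched
    if p.on_wireless_hotspot:
        return "limited"      # untrusted location: VPN-only, no sensitive shares
    return "full"
```

A real EPS product would gather this posture automatically at connection time and re-evaluate it periodically; the point of the sketch is only that access rights become a function of the device's current state, not a fixed grant.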
With a range of solutions from companies such as Check Point, SkyRecon and Premeo, EPS is an increasingly important and popular route to securing and managing remote access to the network.
A partitioned index in Oracle 11g is simply an index broken into multiple pieces. By breaking an index into multiple physical pieces, you access much smaller pieces (faster), and you may separate the pieces onto different disk drives (reducing I/O contention). Both b-tree and bitmap indexes can be partitioned; hash indexes cannot be partitioned. Partitioning can work several different ways: the table can be partitioned while the indexes are not, the table can be unpartitioned while the index is partitioned, or both the table and the index can be partitioned. Either way, the cost-based optimizer must be used. Partitioning adds many possibilities to help improve performance and increase maintainability.

There are two types of partitioned indexes: local and global. Each type has two subsets, prefixed and non-prefixed. A table can have any number or combination of the different types of indexes built on its columns. If bitmap indexes are used, they must be local indexes. The main reasons to partition an index are to reduce the size of the index that needs to be read and to enable placing the partitions in separate tablespaces to improve reliability and availability. Oracle also supports parallel query and parallel DML when using partitioned tables and indexes, adding the extra benefit of multiple processes helping to process the statement faster.

Local (Commonly Used Indexes)

Local indexes are partitioned using the same partition key and the same range boundaries as the partitioned table. Each partition of a local index will only contain keys and ROWIDs from its corresponding table partition. Local indexes can be b-tree or bitmap indexes; if they are b-tree indexes, they can be unique or nonunique. Local indexes support partition independence, meaning that individual partitions can be added, truncated, dropped, split, taken offline, and so on, without dropping or rebuilding the indexes. Oracle maintains local indexes automatically.
Local index partitions can also be rebuilt individually while the rest of the index is unaffected.

Prefixed

Prefixed indexes contain keys from the partitioning key as the leading edge of the index. For example, take the PARTICIPANT table again. Say the table was created and range-partitioned using the SURVEY_ID and SURVEY_DATE columns, and a local prefixed index is created on the SURVEY_ID column. The partitions of the index are equipartitioned, meaning they are created with the same range boundaries as those of the table (see Figure 1). Local prefixed indexes allow Oracle to prune unneeded partitions quickly: partitions that cannot contain any of the values appearing in the WHERE clause will not be accessed, improving the statement's performance.

Non-prefixed

Non-prefixed indexes do not have the leading column of the partitioning key as the leading column of the index. Using the same PARTICIPANT table with the same partitioning key (SURVEY_ID and SURVEY_DATE), an index on the SURVEY_DATE column would be a local non-prefixed index. A local non-prefixed index can be created on any column in the table, but each partition of the index only contains the keys for the corresponding partition of the table (see Figure 2). For a non-prefixed index to be unique, it must contain a subset of the partitioning key. In this example, you would need a combination of columns including SURVEY_DATE and/or SURVEY_ID (as long as SURVEY_ID was not the leading edge of the index, in which case it would be a prefixed index).

Global

Global partitioned indexes contain keys from multiple table partitions in a single index partition. The partitioning key of a global partitioned index is different from, or specifies a different range of values than, that of the partitioned table.
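The local prefixed and non-prefixed indexes described above can be sketched in DDL. The table definition and range boundaries below are illustrative assumptions (the original figures are not reproduced here), not the article's actual schema:

```sql
-- Hypothetical range-partitioned PARTICIPANT table (boundaries are examples)
CREATE TABLE participant (
  survey_id   NUMBER,
  survey_date DATE,
  answer      VARCHAR2(100)
)
PARTITION BY RANGE (survey_id, survey_date) (
  PARTITION p1 VALUES LESS THAN (100, TO_DATE('2011-01-01', 'YYYY-MM-DD')),
  PARTITION p2 VALUES LESS THAN (200, TO_DATE('2012-01-01', 'YYYY-MM-DD')),
  PARTITION p3 VALUES LESS THAN (MAXVALUE, MAXVALUE)
);

-- Local prefixed: SURVEY_ID is the leading column of the partitioning key
CREATE INDEX participant_li_pre ON participant (survey_id) LOCAL;

-- Local non-prefixed: SURVEY_DATE is not the leading edge of the key
CREATE INDEX participant_li_nonpre ON participant (survey_date) LOCAL;
```

Both indexes are equipartitioned with the table automatically, since LOCAL inherits the table's partition boundaries.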
The creator of the global partitioned index is responsible for defining the ranges and values of the partitioning key. Global indexes can only be b-tree indexes. Global partitioned indexes are not maintained by Oracle by default: if a partition is truncated, added, split, dropped, and so on, the global partitioned indexes need to be rebuilt, unless you specify the UPDATE GLOBAL INDEXES clause of the ALTER TABLE command when modifying the table.

Prefixed

Normally, global prefixed indexes are not equipartitioned with the underlying table. Nothing prevents the index from being equipartitioned, but Oracle does not take advantage of equipartitioning when generating query plans or executing partition maintenance operations. If the index is going to be equipartitioned, it should be created as a local index instead, to allow Oracle to maintain the index and use it to prune partitions that are not needed by the query (see Figure 3). As shown in the figure, the three index partitions each contain index entries that point to rows in multiple table partitions.

Non-prefixed

Global non-prefixed indexes should not be used, as Oracle does not support them. They would not provide any benefit over normal b-tree indexes on the same columns, so they have no value.
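A global prefixed index and the maintenance clause mentioned above can be sketched as follows. The indexed column and the range boundaries are illustrative assumptions; note that a global range-partitioned index must end at MAXVALUE:

```sql
-- Global partitioned index with its own range boundaries, independent
-- of the table's partitioning
CREATE INDEX participant_gi_pre ON participant (answer)
GLOBAL PARTITION BY RANGE (answer) (
  PARTITION g1 VALUES LESS THAN ('M'),
  PARTITION g2 VALUES LESS THAN (MAXVALUE)
);

-- Without this clause, dropping a table partition marks the global
-- index partitions UNUSABLE until they are rebuilt
ALTER TABLE participant DROP PARTITION p1 UPDATE GLOBAL INDEXES;
```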
Keeping an organization’s systems secure is the primary objective of its security team. Security teams implement various measures to achieve this objective and ensure a strong defense against incoming attacks. These measures include applying patches, disabling unnecessary services, and fine-tuning firewall rules, among others. Attackers, for their part, attempt to gather information about target systems to better plan and execute their attacks. Based on this common understanding of attacker psychology, “security through obscurity” is a long-standing belief in the information security industry. Its proponents consider that if attackers are not aware of the security measures employed in a target system, security is better. This belief has been around for decades, and there are arguments on both sides of the spectrum as to whether security through obscurity is good or not.

LIFARS is an industry leader that develops proactive strategies and tactics against evolving cybersecurity threats. Our services, such as comprehensive gap assessment, red teaming, penetration testing, threat hunting and vulnerability assessment, reveal a company’s vulnerabilities. Our vCISOs will ensure your optimal cybersecurity strategy and adequate posture.

Definition of security through obscurity (STO)

STO, security through obscurity, or security by obscurity, is a well-known approach to securing a system or an application. It relies on hiding crucial security information from stakeholders and users, enforcing secrecy as the primary security measure. In plain words, STO focuses on keeping a system secure by strictly limiting the disclosure of information about its internal mechanisms. STO is popular among bureaucratic agencies, whether governmental, industrial, or security-focused. It gives a sense of pseudo-security for IT systems. The core idea of this approach is to run IT systems on a well-defined need-to-know basis.
If an individual does not know how to impact the security of a target system, they do not pose a danger to it. Without a doubt, this approach sounds good in theory. However, with the increased sharing of knowledge, the popularity of open systems, a better understanding of programming languages, and the availability of substantial computing power to individuals, its effectiveness has declined over the years. Some common examples of STO techniques include:

- Security teams assume that attackers do not read the code, so they hide user passwords within binary code modules or mix them into script comments or code files.
- To decrease the number of brute-force attacks on standard ports, security teams prefer running a daemon on a non-standard port. However, as soon as an attacker finds the new port, this measure becomes useless. A more comprehensive solution is to limit the number of requests from an IP address within a defined period and use two-factor authentication. Configuring firewall rules for allowlisting/blocklisting is also a viable option.
- Another practice is to hide the version number of software, for instance on Apache servers. However, there are many effective methods for carrying out a banner-grabbing attack.
- We often come across application folders prefixed with characters such as _, ^, or # – for example, replacing a folder called admin with _admin or ^admin. As soon as an attacker learns the special character, they can access restricted areas in the absence of additional security measures.

Common myths related to security through obscurity

It is a false notion that because ostriches put their heads in the sand, they are no longer visible. Similarly, coders and programmers think that if they restrict access to their code, attackers will not be able to exploit vulnerabilities. Over the years, multiple incidents have shown that restricted access sometimes even simplified the exploitation of vulnerabilities.
The emperor has no clothes

Modern-day development processes involve designers, developers, debuggers, integrators, testers, security analysts, and end users. This variety of users will have access to proprietary code, and they may be aware of its limitations and constraints. If all of them simply believe that a system or an application is secure, we arrive at a situation where the emperor has no clothes.

I have got a secret

Transmission of files is a straightforward action these days. Organizations cannot make excuses for poor security practices in 2020 and beyond. There has to be an understanding that security incidents due to lax practices result in financial and reputational losses, along with regulatory proceedings.

The shell game

This translates to hiding an object from view to prevent identification of the issues present. Often, the level of secrecy gives no indication of the rigor followed in testing. At the same time, releasing code as open source is not in itself a solution; risk assessment, secure software development practices, and a good security culture are crucial.

If your organization relies solely on security-through-obscurity techniques to protect its IT infrastructure, that is almost certainly a bad idea: as soon as an attacker gets access to the secret, the security posture falls. However, when used in tandem with other security mechanisms, obscurity can be useful for overall security operations. One cannot deny that STO is an effective way to realize the power of hiding, and when used alongside measures such as two-factor authentication, IP-based restrictions, and firewall rules, it may give fruitful outcomes. Organizations must not believe, though, that hiding information is always a good practice; it varies from one system to another. If STO techniques help minimize security risks to your organization, then why not?
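The request-limiting countermeasure recommended above, as a stronger alternative to merely moving a daemon to a non-standard port, can be sketched with a simple per-IP sliding-window limiter. The class name, limits and window size here are illustrative assumptions:

```python
import time
from collections import defaultdict, deque


class RateLimiter:
    """Allow at most max_requests per window_seconds for each client IP."""

    def __init__(self, max_requests=5, window_seconds=60.0):
        self.max_requests = max_requests
        self.window = window_seconds
        self.hits = defaultdict(deque)  # ip -> timestamps of recent requests

    def allow(self, ip, now=None):
        """Return True if this request is within the limit, else False."""
        now = time.monotonic() if now is None else now
        q = self.hits[ip]
        # Drop timestamps that have fallen outside the sliding window.
        while q and now - q[0] > self.window:
            q.popleft()
        if len(q) >= self.max_requests:
            return False
        q.append(now)
        return True
```

In production this check would typically live at the firewall or reverse proxy rather than in application code, but the logic is the same: the limit holds regardless of whether the attacker has discovered the "secret" port.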
Guest Editorial by Andrew Deen

The healthcare industry has been transitioning into an increasingly digital space for years. Health records are now available online. This has given patients more autonomy than ever, allowing them to easily access their own health information. It also creates new points of vulnerability. In this article, we take a look at how healthcare can keep a stronger lockdown on its cybersecurity.

Good Security Culture

More than half of all healthcare networks have been investing heavily in their cybersecurity infrastructure. However, even very good security systems are useless against bad practices. The Marriott breach was traced back to a phishing email, and hospitals are just as vulnerable.

(A privacy breach at the Marriott chain of hotels may have exposed the credit card, passport, and personal information of up to half a billion people. Courtesy of CBC News and YouTube. Posted on Nov 30, 2018.)

Healthcare systems should work on establishing a culture of cybersecurity that extends from the top down. Many people working within the healthcare system may genuinely benefit from regular cybersecurity reminders. Doctors and nurses – especially older doctors and nurses – are often unaware of cybersecurity protocol, and simple educational efforts can prevent major breaches.

(Tenet Health, which operates St. Mary’s Medical Center and Good Samaritan Medical Center in West Palm Beach, said Tuesday that the company “experienced a cybersecurity incident last week.” Courtesy of WPTV News and YouTube. Posted on Apr 26, 2022.)

Recognize the Vulnerabilities of Mobile Devices

Mobile technology is vulnerable in ways that desktops are not.
In addition to being susceptible to all the forms of hacking that desktop units fall victim to, mobile devices also face:

- Loss and theft: A cell phone holding important healthcare data that gets lost in a café is vulnerable to breach, whether by bad actors or by someone inspecting the device to figure out how to return it.
- Unsecured Wi-Fi networks: Public Wi-Fi networks are frequently used by hackers to access sensitive information. Networks that sound official can easily be set up to funnel data directly to a hacker. For the healthcare professional who is just trying to get in a game of Wordle while waiting for coffee, it can be hard to tell the difference.
- Weak authentication processes: Mobile technology often has weaker authentication protection, for a simple reason: you don’t want to enter a novel of a password just to open your cell phone. Most smartphones now have multi-step authentication processes that are noninvasive; using a combination of fingerprints, face scans, and passwords is an easy way to secure mobile technology without much inconvenience.

People working within the healthcare system should be very careful with how and where they use their mobile technology. They may also benefit from having separate devices for personal and professional use.

Well-Maintained Devices

Maintenance is also key to a strong cybersecurity network. Virtually every Western healthcare network has some cybersecurity infrastructure; however, if the devices running these networks are not well maintained, the entire effort may be for naught. Update regularly. Look for places that need patches or upgrades. Computers can be like cars: chances are something is not at peak performance. Understand what your devices need and make sure they are regularly serviced. Finally, consider the services of cybersecurity professionals.
Security consultants can test the vulnerability of your systems, sometimes behaving in the manner of a hacker to determine what your points of vulnerability are. From there, they can recommend tailored steps you can take to fortify your security and protect against future breaches. Large healthcare networks may benefit from full-time cybersecurity staff, while smaller systems could still see good results from using the services of a freelancer.

About the Author

Andrew Deen has been a consultant for startups in almost every industry, from computer security to medical devices and everything in between. He implements lean methodology and is currently writing a book about scaling up businesses.
Companies with multiple sites have multiple design drawings that meet the geographic and local zoning criteria for those buildings. Many of these are digitized, but too many still reside as paper in drawers or storage tubes. These often need to be accessed to find specific information, which can be a daunting task for companies and architectural firms. For these large-format drawings, many businesses provide scanning services. This option is advantageous because the drawings now get stored on a server. Extracting data from each drawing to make the collection searchable is the best way to ensure that the data needed is always available. Title-block fields, annotations, and even measurements get extracted using Machine Learning enhanced OCR, allowing for simple search functions. What used to take hours is now attainable in seconds from any location.

Paper Drawings Digitization

In the engineering sector, a lot of information gets stored in paper documents and drawings. Because retrieving material from such drawings using typical tools is extremely resource-intensive, these get classified as unstructured data. However, you can train artificial intelligence (AI) systems to recognize visual content in drawings and provide a simplified context — enter Machine Learning enhanced OCR. A process drafter who understands the engineering domain and symbology is traditionally skilled at generating drawings. For AI to interpret a drawing, it must have a similar comprehension of standard symbology. Pattern recognition, line-segment recognition, and text recognition are principles that AI can apply to create a model that learns to recognize the components of an engineering drawing. Pattern recognition is the automatic recognition of patterns and regularities in data. When applied to images, pattern detection finds visual data of a specific class (such as persons, buildings, or cars) in digital images and videos.
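The title-block extraction and search idea above can be sketched in a few lines of Python. The file names, OCR strings, and field pattern below are all invented for illustration; in practice a real OCR engine (Tesseract, for example) would produce the raw text upstream.

```python
import re

# Hypothetical OCR output for two scanned drawings (file names and
# field labels are made up for illustration).
ocr_results = {
    "site_a_plan.tif": "DWG NO: A-1042  REV: C  SCALE: 1:50",
    "site_b_elev.tif": "DWG NO: B-0317  REV: A  SCALE: 1:100",
}

# Assumed title-block convention: "LABEL: value" pairs.
FIELD = re.compile(r"(DWG NO|REV|SCALE):\s*([^\s]+)")

def build_index(results):
    """Map every extracted title-block value to the drawings that
    contain it, so a lookup is instant instead of a manual search."""
    index = {}
    for fname, text in results.items():
        for _, value in FIELD.findall(text):
            index.setdefault(value, []).append(fname)
    return index

index = build_index(ocr_results)
print(index["A-1042"])   # ['site_a_plan.tif']
```

Once the index is built, finding every drawing that carries a given drawing number, revision, or scale is a single dictionary lookup rather than hours of digging through tubes and drawers.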
The pattern in a drawing could be a symbol, text, or line, with the data being all pixels in the drawing. You can accomplish visual recognition of engineering drawings with the help of a well-trained algorithm. The algorithm is fed symbols commonly found in engineering drawings, and the AI examines several examples of symbol patterns. After a few cycles, it learns to associate the graphic design on the drawing with the symbol type it represents. AI can detect the presence of a symbol within a drawing by analyzing symbol-forming pixels and their relative locations. Lines in a drawing define the flow of the Piping and Instrumentation Diagram (P&ID). In contrast to a symbol, a line does not have a fixed shape, so determining the margins of a line necessitates a different approach. For AI to build comprehension of a line, you must present several instances of marked lines. Thanks to this training, AI can then distinguish the lines and edges on the drawing. You can use this line-coordinate data to recreate lines on a digital platform later, and other information can be retrieved, such as the length of a line or the components on a line. This line recognition helps pinpoint the data held within the drawing, helping it become multi-functional and easily digitized. It is just as crucial to read the text content in a drawing. Text on a drawing, such as tag numbers, notes, and holds, gives the drawing context; if a tag number cannot be linked to the matching symbol, image recognition is useless. The mechanical or electrical conversion of images of typed, handwritten, or printed text into machine-encoded text, whether from a scanned document or a photo of a document, is known as Optical Character Recognition (OCR). Text extraction consists of two parts: the position of the text and its content. The precise location of the text on the scanned image can be associated with image recognition to add metadata to the image.
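The symbol-detection step can be illustrated with a toy example: a sliding-window comparison over a binary pixel grid reports every position where a symbol template matches. Real systems tolerate noise, rotation, and scale (for instance via OpenCV template matching or a trained network); this exact-match sketch, with made-up grids, only shows the principle.

```python
def find_symbol(drawing, symbol):
    """Slide the symbol template across the drawing and return the
    (row, col) of every window where all pixels match exactly."""
    h, w = len(symbol), len(symbol[0])
    hits = []
    for r in range(len(drawing) - h + 1):
        for c in range(len(drawing[0]) - w + 1):
            if all(drawing[r + i][c + j] == symbol[i][j]
                   for i in range(h) for j in range(w)):
                hits.append((r, c))
    return hits

# A tiny 4x5 "scanned drawing" and a 2x2 valve-like symbol (invented).
drawing = [
    [0, 0, 0, 0, 0],
    [0, 0, 1, 1, 0],
    [0, 0, 1, 0, 0],
    [0, 0, 0, 0, 0],
]
symbol = [[1, 1],
          [1, 0]]

print(find_symbol(drawing, symbol))   # one hit, at row 1, column 2
```

A production pipeline would run many such templates (or one learned model) over the whole sheet, then hand the detected symbol locations to the line- and text-recognition stages.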
The digitized files can then be edited, searched, and stored far more efficiently. There are numerous ML-enhanced OCR approaches you can use to extract text from a scanned drawing. For one, the text can be filtered using Natural Language Processing (NLP) to retrieve components that comply with a regular expression. A regular expression is a specific notation for describing matching patterns, and NLP aids in the pattern-based filtering of text. Thanks to Machine Learning-enhanced OCR, the global economic system is being thrust into new digital realms at a breakneck pace. Manual redrafting takes a long time and costs a lot of money; replicating a drawing on a digital platform can take two to three days. With the emergence of Machine Learning-enhanced OCR, however, a paper drawing can be reproduced with AI in a matter of minutes, saving at least 50% of the manual labor. For more information on extracting data from large format drawings and blueprints, check out: https://itechdata.ai/solutions/large-format-image-capture/.
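The regular-expression filtering described above might look like this in practice. The tag convention (two or three capital letters, a hyphen, then digits) and the sample OCR text are assumptions for the sketch; real P&ID tag formats vary by site.

```python
import re

# Hypothetical OCR output from one drawing sheet (noisy, unordered).
ocr_text = """
NOTES: SEE SPEC 4130 FOR INSULATION
PT-101  pressure transmitter   FV-2045 flow valve
HOLD 3: VENDOR TO CONFIRM  TI-77
"""

# Assumed tag convention: 2-3 capital letters, a hyphen, 2-4 digits.
TAG = re.compile(r"\b[A-Z]{2,3}-\d{2,4}\b")

tags = TAG.findall(ocr_text)
print(tags)   # ['PT-101', 'FV-2045', 'TI-77']
```

Note that the spec number and the hold note are correctly ignored: only strings matching the tag pattern survive, which is exactly the pattern-based filtering the article describes.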
It’s difficult enough for most humans to grasp the idea that our planet is just one of countless others in our galaxy — and a pretty small one, at that. Then, of course, there’s the concept that our galaxy is just one of billions of others in the universe — sure to compound any confusion considerably. It seems safe to say, however, that neither of those notions can compete on the mind-bending scale, so to speak, with an idea that’s currently being investigated at the U.S. Department of Energy’s Fermilab. Ready for a hurting brain? Here goes: Fermilab is working on a device to test the theory that our whole universe is simply a hologram.

‘Help Me, Obi-Wan Kenobi’

Holograms are probably at least somewhat familiar to most, if for no other reason than the role they occasionally play on the silver screen. Princess Leia’s entreaty in the “Star Wars” series, for example, may well be one of the most memorable in cinematic history. Holograms are also commonplace today on CDs, DVDs and credit cards. Strictly speaking, though, holograms are just diffraction patterns. Essentially, they’re what you get when you record the light scattered from an object and then later reconstruct that light, giving the appearance of a 3D object even when the object is no longer there.

From Energy to Matter and Back Again

Back to the universe: There’s been a theory kicking around for several decades now suggesting that the universe itself may be a hologram. The 1982 book by Ken Wilber, Holographic Paradigm, for example, tells of the way psychologist Karl Pribram and physicist David Bohm both arrived at the notion of a “holographic universe” in which things that appear to be solid are not necessarily so, Paul Czysz, a professor emeritus of aerospace engineering with St. Louis University, told TechNewsWorld. Pribram, the psychologist, had noted the way “an abstract thing like a picture that you see with your eyes is translated into a molecule in your brain,” Czysz explained.
“When that molecule is later activated by your memory, you see the picture again. It’s an abstract thing locked into a physical entity that can repeat the picture.” Bohm, meanwhile, focused on “how energy can transform itself into matter,” he added, drawing on Einstein’s famous equation. Both, in other words, were interested in situations where “things we think are solid may not be,” Czysz noted. “Instead, they may be a projection of something — it’s not physical, and yet in the next instant it is.” A decade after the publication of Wilber’s book, author Michael Talbot went on to publish a similar one entitled The Holographic Universe.

‘Smaller Than the Point of a Pin’

Adding to the compelling nature of such theories is the fact that the vast majority of what we see as solid objects is actually empty space. “An atom is essentially a point with a cloud of electrons,” Czysz noted. “If you took a human and removed all the space between the electrons and nucleus of each atom in his or her body and then condensed it down, the human would be smaller than the point of a pin.” In other words, each of us — like a hologram — is “basically a diffraction pattern that appears to be solid,” he suggested. Solid as we may feel, we’re mostly just empty space. How’s that for mind-bending? Even beyond the bounds of our planet, with all the stars and galaxies out there, it’s estimated that we can only perceive about 5 percent; the rest — the majority — is made up of dark energy and dark matter. “The reason we don’t see the rest of it is that we don’t have access to that part of the hologram,” Czysz said.

A Year’s Worth of Energy

Then, too, there’s the interchangeability of mass and energy, as Einstein so famously explained. “Only in our minds are they separate entities,” Czysz pointed out. Each and every one of us, in fact, is essentially a vast amount of energy condensed into physical form, he noted.
“If there were a way to transform you into your energy base, the ball of energy that would come out of where you are would be equivalent to what a metropolitan power station would generate over about a year,” Czysz asserted. If we are mostly just empty space and energy, then — despite our solid appearance — why shouldn’t the universe be similarly holographic, in other words?

A Cosmic ‘Jitter’

The study of black holes has already led to the suggestion that our 3D reality is simply a holographic projection of what exists in two dimensions in the very outer edges of the universe. Generally referred to as the “Holographic Principle,” that idea combines the work of Gerard ‘t Hooft, Charles Thorn and string theorist Leonard Susskind. Interest in the concept of a holographic universe was revived most recently, however, when Craig Hogan, a professor in the department of astronomy and astrophysics at the University of Chicago and director of the Fermilab Center for Particle Astrophysics, launched a project to create an instrument that can help scientists better understand any holographic properties of the universe. Drawing upon the Holographic Principle, the premise behind the Fermilab project is that space is two dimensional, and that the third dimension is inextricably linked with time. If that’s the case, our 3D world is merely an approximate illusion. Assuming that’s true, the illusion is likely imperfect and blurry, just as photographs and videos are, especially when viewed on a granular level. Such imperfection would introduce “a particular kind of noise or jitter into spacetime, as measured by the propagation of light in different directions,” the Fermilab explains. By building an instrument — called a “Holometer” — to detect that cosmic “jitter,” Hogan’s team hopes to find evidence of a holographic universe.
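Czysz’s power-station estimate is a straightforward application of Einstein’s E = mc². The 70 kg body mass and the 1 GW plant output below are assumptions for this back-of-the-envelope sketch, and the resulting “plant-years” figure depends entirely on what one counts as a metropolitan power station; the point is the staggering scale, not the exact number.

```python
C = 2.998e8              # speed of light, m/s
mass_kg = 70.0           # assumed body mass

energy_j = mass_kg * C ** 2              # E = m * c^2

plant_w = 1e9                            # assumed 1 GW plant output
seconds_per_year = 365 * 24 * 3600
years = energy_j / (plant_w * seconds_per_year)

print(f"{energy_j:.2e} J, roughly {years:.0f} years of 1 GW output")
```

Even a rough run of the numbers lands in the range of billions of billions of joules, which is the scale the article is gesturing at.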
Building a ‘Holometer’

“The basic idea is to measure directly whether the fabric of space and time itself shares some of the same quantum uncertainty that we know exists in wave/particles like atoms and photons,” Hogan told TechNewsWorld. “Maybe all of reality has a limited amount of streaming information, like a download at the Planck frequency of 10^44 bits per second. If so, we can measure the sampling noise from that.” Such a measurement “would help us understand how matter, energy, space and time work at the most basic level,” he added. Some prototype tests of the Holometer have already been completed, Hogan said, “but it will be about a year before the Holometer is built, and probably another year of commissioning and debugging after that before we get a result at the theoretical sensitivity.”

‘We Are Starting to Get Tantalizing Results’

There’s at least one encouraging precedent. “Historically, we have had already a situation where something manifesting itself as ‘noise’ turned out to be a huge discovery,” Mario Livio, a senior astrophysicist with the Space Telescope Science Institute, told TechNewsWorld. Specifically, such a detection was what led to the discovery of the cosmic microwave background, Livio explained. It remains to be seen, of course, whether the Holometer will produce equally exciting results. “I am somewhat skeptical about the implications for the holographic universe idea, but the good news is that new experimental tests are being proposed,” Livio said. “These could perhaps take us one step closer to understanding the nature of this noise.” What is “truly encouraging,” meanwhile, “is that we are starting to get tantalizing results from gravitational wave detectors, from the FERMI Gamma-Ray Space Telescope, and even from the Hubble Space Telescope, which appear to be starting to probe the very fabric of spacetime,” Livio pointed out.
The European Space Agency, in fact, just recently announced results from its Integral gamma-ray observatory that could cause many theorists to revise their thinking.

Different Approaches, Similar Implications

In any case, the idea of a holographic universe can be approached and thought about in multiple ways. It’s potentially holographic in a sense by virtue of the fact that the solid objects around us are mostly just empty space, and that their mass is essentially just condensed energy. The world only appears solid, in other words. Viewed through the lens of space scientists, meanwhile, its potentially holographic nature derives from fundamental properties of the fabric of space and time. That two such apparently divergent approaches both suggest such a possibility, however, makes it all the more compelling. Is the world we know and love just an illusion in the grand scheme of things? It seems a distinct possibility. Will that change the way we live our lives? Almost certainly not. There’s no denying, however, that it can be a lot of fun to think about.
The global synthetic fibers market is expected to grow at a CAGR of around 6.5% from 2020 to 2027 and to reach a market value of around US$ 67.2 Bn by 2027. Synthetic fibers are man-made fibers made primarily from petroleum raw materials known as petrochemicals. All fabrics are made from fibers that can be obtained from natural or artificial sources. They are made up of polymers, which consist of many repeating units known as monomers. Nylon, acrylics, polyurethane, and polypropylene are among them. Every year, millions of tonnes of these fibers are produced around the world. Synthetic fibers are made up of small molecules that have been synthesized into polymers; the substances used to make such fibers are derived from raw materials such as chemicals obtained from petroleum, or petrochemicals. These materials are polymerized into a chemical that bonds two adjacent carbon atoms. The synthetic fibers market has grown significantly as a result of factors such as increased application in construction, automotive, healthcare, apparel, and household products, among others. Growing consumer interest, particularly in home furnishings, is propelling the business segment forward, and the rise in home furnishings is being fueled by the expansion of the real estate sector. Another important factor driving the growth of the synthetic fibers market over the forecast period is the increased presence of retail stores selling home furnishings. Polyester fiber is once again in high demand due to its useful properties, such as abrasion and chemical resistance. Because of the rise in the middle-class population in emerging economies, demand for polyester as an affordable synthetic fiber in clothing applications is also increasing.
Consumers prefer characteristics such as durability, stain resistance, softness, and elasticity in their fiber uses, and manufacturers’ ability to provide all such properties at lower costs is one of the primary driving factors for the global synthetic fiber market’s growth. Apart from that, the synthetic fabric market is expected to be driven by high demand in the fashion and apparel industries, as well as rapid growth in the construction and automotive industries, particularly in emerging economies. Environmental concerns and the threat of natural substitutes, on the other hand, may restrain market growth during the forecast period. Furthermore, research and development activities in conductive textiles and nanotechnology in textiles are expected to provide potential growth opportunities.

Synthetic Fibers in Clothing Contribute to Pollution

According to NBC’s Mach, clothing made of synthetic materials such as polyester and nylon contributes to microplastic pollution, which can end up in the ocean and in the seafood that humans eat. Microplastics are thought to account for 1.5 million tonnes of the 8 million tonnes of plastic that end up in the ocean each year. According to the International Union for Conservation of Nature, fibers from synthetic fabrics account for an estimated 35% of the microplastic that enters the ocean. These plastic-based fibers have been discovered as far north as the North Pole and as far south as Antarctica. The microfibers that cause this pollution are frequently released into the environment through loads of laundry. The global synthetic fibers market is segmented by product and application. Based on product, the market is segmented into polyester, nylon, acrylics, polyolefins, and others. By application, the market is segmented into clothing, home furnishing, automotive, filtration, and others. Polyester led the market by product, accounting for the lion’s share of the global synthetic fibers market.
Rising demand for polyester, driven by properties such as chemical and abrasion resistance, is a key factor driving the growth of the global synthetic fibers market. Furthermore, due to its easy-wash, shape-retention, wrinkle-free, and high-perspiration properties, it is much preferred in the manufacturing of clothing materials. Due to the ever-changing fashion trends influencing demand for clothing around the world, the clothing segment dominated by application, recording the largest market share. The segment is expected to be driven by the growing population’s demand for convenient, protective, and cost-effective clothing during the forecast period. North America is the dominant region in the synthetic fibers market and is expected to remain so throughout the forecast period. The region’s developed end-use industries are a major driving factor in the growth of the synthetic fibers market. The region’s construction sector is expanding significantly as a result of recent investments proposed in Canada and the United States, which is increasing demand for synthetic fibers. Furthermore, North America is regarded as the largest region in the aerospace industry, a major end user of synthetic fiber, propelling the growth of the synthetic fibers market. Due to the presence of the world’s most populous economies, such as India and China, Asia Pacific is expected to be the fastest growing region in the synthetic fibers market over the forecast period. The region’s economic growth has increased consumers’ per capita income, which is increasing demand for apparel in the region. Furthermore, industrialization has accelerated the region’s construction industry, which is a major factor driving the growth of the synthetic fibers market. The region’s various end-use industries for synthetic fibers, such as automotive and aerospace, are growing at a rapid pace, adding to demand.
Another factor positively influencing demand is the availability of various clothing brands to meet the specific needs of customers. A growing number of e-commerce platforms in the Asia Pacific region, combined with increased consumer awareness of different clothing brands as a result of the Internet-of-Things (IoT), have boosted online clothing sales in the region. Because of the growing demand for highly aesthetic interiors, demand for synthetic fibers in the home furnishing segment is expected to grow at a significant rate over the forecast period. The prominent players in the global synthetic fibers market include Bombay Dyeing, E. I. du Pont de Nemours and Company, Indorama Corp., Lenzing AG, Mitsubishi Chemical Holdings Corp., Reliance Industries Ltd., China Petroleum Corp. (Sinopec Corp.), Teijin Ltd., Toray Chemical Korea, Inc., and Toyobo Co., Ltd., among others.

Market Segmentation
- By Product: polyester, nylon, acrylics, polyolefins, and others
- By Application: clothing, home furnishing, automotive, filtration, and others
- By Geography: North America; Europe (including Rest of Europe); Asia-Pacific (including South Korea and Rest of Asia-Pacific); Latin America (including Rest of Latin America); and Middle East & Africa (including South Africa and Rest of Middle East & Africa)

Key Takeaways
- The synthetic fibers market is expected to reach a market value of around US$ 67.2 Bn by 2027, growing at a CAGR of around 6.5% from 2020 to 2027.
- Based on product, polyester is the leading segment in the overall market.
- Growing consumer interest, particularly in home furnishings, is one of the prominent factors driving demand for synthetic fibers.
- Key players include Bombay Dyeing, E. I. du Pont de Nemours and Company, Indorama Corp., Lenzing AG, Mitsubishi Chemical Holdings Corp., Reliance Industries Ltd., China Petroleum Corp. (Sinopec Corp.), Teijin Ltd., Toray Chemical Korea, Inc., and Toyobo Co., Ltd., among others.
- North America is anticipated to hold the largest regional market share, while Asia Pacific is expected to be the fastest growing market in the forthcoming years.
Most of us know that electronic waste is nasty stuff. Discarded digital devices clog landfills and leach heavy metals into groundwater when they degrade. Not reusing our devices also wastes lots of energy and other resources. So why don’t more of us recycle our used printer cartridges, dead phones, old hard drives and PCs? Frankly, it seems a lot easier to simply drop a dead device into the garbage than to find a way to recycle it. However, recycling your old electronics is easier than you think. A small amount of effort can actually make a big contribution to a cleaner environment and could even save you money.

If you have an old iPhone or Galaxy phone that still works, go to a site such as Gazelle.com to see how much they’ll give you for your device. A 16GB iPhone 4s is worth as much as $40, for example, and if you sell yours to a reseller it will be resold and reused. Here are a few resources dedicated to recycling electronic devices of all types.

This site is a one-stop shop for all things recycling, from household and yard waste to hazardous waste and electronics. You can find links to municipal resources and community programs, as well as recycling locations for just about anything.

The e-Stewards Initiative is a project of the Basel Action Network (BAN), a nonprofit, charitable organization. Its site has extensive information about the growing problem of e-waste, and it offers a tool to help you find Certified e-Stewards Recyclers near you.

The Consumer Electronics Association’s Greener Gadgets

This site is a great source for information about buying green, and it includes a nationwide list of certified e-cycling locations.

It’s also possible to sell your old computer. Apple has a program that will connect you to a vendor who will buy a used Mac or PC. After you tell Apple about your device, the company lets you know what it’s worth, and you get pre-paid shipping materials in the mail. You’re compensated via an Apple Store gift card.
If your old computer doesn’t qualify for reuse, Apple will recycle it at no cost. Apple also offers free recycling for all Mac batteries. Just bring your old Mac battery to an Apple retail store, and the company takes care of the rest. If for some reason you can’t remove the battery, bring the laptop with you and a clerk will remove it. Although that’s a bit of trouble, it’s worth it. Not only are batteries filled with heavy metals, they have the potential to catch fire or even explode under certain conditions. Batteries really don’t belong in landfills or the trucks that take them there. Empty printer cartridges are light and small, so they are easy to recycle — if you know a place that takes them. One simple way to handle spent cartridges is to visit this Hewlett-Packard page, follow the directions to print a pre-paid shipping label and then drop the little package in the mail. The whole process takes about five minutes. Staples, Office Max, Office Depot and Walmart all accept HP ink cartridges for recycling, and Staples also accepts LaserJet cartridges. HP says cartridges returned through this program will not wind up in landfills. Other printer companies, including Canon, and retailers such as Best Buy have recycling options as well. (Check out this Re/code article for a list of recycling options for cartridges.)
Recently, ESET and Sophos security researchers found that hackers are trying to port an old backdoor Trojan from Linux to the latest Apple Mac OS X platform. By doing this, the hackers are trying to expand the pool of PCs they can enlist in botnets. Researchers revealed that the Trojan, Tsunami, connects to an IRC channel and then waits for the hackers’ commands. These may include instructions to flood a server with requests; combined with the efforts of other compromised computers, this can produce a distributed denial-of-service (DDoS) attack. The Trojan also has the ability to download files to a compromised computer, so it can update itself as well as fetch additional malware, and it gives the attacker complete control over the compromised machine. The C source code of the Linux variant has been publicly available for a while, so anyone can modify the code to target additional platforms. However, the Trojan has no built-in method of spreading: it can reach a target system only if attackers exploit a separate vulnerability to upload the malware secretly or gain hands-on access to the machine.
Today, open source intelligence is readily available on the Internet. Its sources include social networks, news sites, security feeds, and the dark web. Organizations need to be able to use information about cyber threats to improve customer service and cybersecurity. These tools allow them to identify and consolidate vulnerabilities, making it easier to address them before attackers exploit them. As a result, open source intelligence is essential to cyber security. Open source intelligence helps detect outside threats through the use of publicly available data. However, it’s important to have a strategy before starting an open source intelligence initiative: cybersecurity professionals can create a customized approach to meet their needs by analyzing and blending data sources. Open source intelligence is valuable for many security disciplines, despite the risks, and although it is often overlooked, it offers many benefits, including helping security professionals prioritize their resources. The same data cuts both ways, though: by finding unstructured data and identifying patterns over time, attackers can craft convincing campaigns that trick even well-intentioned users into sharing private information, and then use that access to probe for vulnerabilities and other weak points in an organization’s network. It’s also important to understand that open source intelligence is not a one-stop solution. Analysts can combine open source data with closed-source data to identify potential threats in a single analysis. The key to effective threat analysis is to apply a combination of methods to identify vulnerabilities; these give the most accurate and comprehensive results, and when used in conjunction with other types of intelligence, the findings are more detailed and actionable. Open source intelligence’s primary purpose, ultimately, is to improve security.
Open source information is freely available, but it is important to remember that it comes primarily from public resources. Besides government agencies, commercial entities are also open source intelligence users: they can collect information from multiple sources and then analyze it. In this way, organizations can detect and close vulnerabilities before they are exploited, and prevent attacks. There are many sources of open source intelligence. The most common are free and available online, and can be obtained from many places; there are even some classified sources that can still provide information in a non-public environment. In every case, the value depends on the user being able to correctly interpret the information he finds.
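Combining open- and closed-source indicators, as described above, is often just careful set arithmetic. The feeds below use invented values drawn from IANA documentation address ranges; a real pipeline would pull from threat-intelligence APIs and normalize formats first.

```python
# Hypothetical indicator feeds (all values invented for illustration).
open_source = {"203.0.113.7", "198.51.100.22", "192.0.2.41"}
closed_source = {"192.0.2.41", "203.0.113.99"}

def merge_indicators(open_iocs, closed_iocs):
    """Combine both feeds and flag which source(s) saw each indicator;
    overlap between feeds is a useful prioritization signal."""
    merged = {}
    for ioc in open_iocs | closed_iocs:
        merged[ioc] = {
            "open": ioc in open_iocs,
            "closed": ioc in closed_iocs,
        }
    return merged

report = merge_indicators(open_source, closed_source)
corroborated = [i for i, s in report.items() if s["open"] and s["closed"]]
print(corroborated)   # indicators corroborated by both feeds
```

An indicator seen by both an open feed and a paid feed is often worth investigating first, which is one concrete way blended sourcing helps analysts prioritize.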
Continued growth in computer and handheld device use brings increasing technical questions and problems. The field of supporting these systems grows along with it, which creates business opportunities for entrepreneurs and employment opportunities for support personnel. But there are different types of support needs, accompanied by varying responsibilities and required skills. The following are the differences between, and skills required for, three common areas of support: Help Desk, Technical Support, and Desktop/Deskside Support.

IT Help Desk

This is usually your front-line (sometimes called Level/Tier 1) level of support – the place where your customers will make initial contact; and this contact can come via many different channels: phone, email, social media, company website form, etc. Internal IT departments might have a helpdesk, and computer, handheld device and software vendors will typically have a customer service/call center. Often these can be outsourced as well. Usually, a helpdesk will have some form of tracking software for trouble tickets or issues.

Help Desk Support Skills List:
- General knowledge of the areas of support, including hardware or software and apps. It is a customer service best practice to answer questions at this level, if at all possible
- How to communicate regarding those areas and understand the customers' needs well, especially when customers are not able to clearly describe their issues
- A high degree of soft skills, such as good communication and best customer service practices (especially patience and empathy), with excellent interpersonal communication

IT Technical Support

These people will know the most about the technologies involved in the customers' issues (they are sometimes referred to as Level/Tier 2 or 3 support).
This can include hardware and software:
- Hardware can include:
  - Computer and hand-held device items, such as displays, hard drives, memory, motherboards, etc.
  - Network items, like routers, switches, network servers, etc.
- Software can include:
  - Operating systems, such as Windows, Linux, Apple OS, etc.
  - Applications, such as Microsoft Word, a web browser, conferencing software such as Microsoft Teams or Zoom, etc.
  - Hand-held device apps

IT organizations and hardware/software vendors need experts in all of the above as related to their business technologies.

Technical Support Skills List:
- Technical expertise in the specific area, often including both hardware and software or apps
- Soft skills, such as good communication, the ability to explain issues to non-technical customers, and good customer service interpersonal skills

IT Desktop/Deskside Support

These people provide direct technical support on customers' computers, either in person or remotely. Internal IT departments as well as computer hardware vendors often have people who can come out and fix computer problems in person. Remote computer management allows this type of person to resolve issues directly on customers' computers from a remote location over the Internet.

Desktop Support Skills List:
- Technical expertise in desktop and laptop computer systems
- Soft skills, such as good face-to-face interpersonal skills

While the level of technical expertise can vary from role to role, interpersonal skills are one important area that not only cannot be neglected, but should be highly emphasized in a support organization, and they are highly sought after in support personnel. All customer-facing people represent the company and can help or hurt the business' bottom line; so, while technical skills are a must, great customer service skills and interpersonal relationship abilities are imperative as well.
What is a computer virus?

GRIDINSOFT TEAM

History of the computer virus

Computer viruses were among the first specimens in the malware world. The first widespread malware was, in fact, the Creeper virus. It was created by Bob Thomas and designed to infect the DEC PDP-10 – one of the most widely used computers of that time (1971). Thomas created it without malevolent intent – to show his colleagues how such programs work. However, the Creeper virus did the same thing its modern descendants do – it replicated its code into other programs. That led to the infected programs malfunctioning and the hard disk filling up. The virus did not become mass malware instantly. Like any other malware, it struggled to spread widely because of distribution problems. Sure, it is quite easy to infect the computers on your university campus, but almost impossible to do so elsewhere. Until Internet connections became widespread, viruses were distributed on floppy disks – together with multiple other programs. In the '00s, after the Internet boom, viruses became one of the most widespread malware types in the whole world. At that time, people started calling all malware "viruses" – simply because the chance that a given piece of malware really was a virus was very high. Poor cybersecurity knowledge, weak system security, and the absence of automated removal tools led to a situation where having an infected computer was almost normal. That pushed the development of the programs we know nowadays as antiviruses.

So what is a computer virus?

A computer virus, as mentioned above, is malware that replicates its code into other programs and files. Then each replicated part replicates itself into other files, so the infection grows exponentially. At some point, an infected program stops working properly and may even fail to start. Finally, your computer may fail to do anything at all without raising an error window.
In the saddest cases, when the virus damages critical system files, you will see a BSOD when trying to boot Windows. What is more interesting – viruses were completely unprofitable during this boom. In the '00s, there was a fairly low number of malicious programs that could be monetized in any way, which sounds very unusual now, in 2022. In those days, malware was distributed for fun or to prank someone rather than to make money. In fact, the end of the computer virus domination era was the start of the malware-for-profit epoch, which lasts to this day.

Why did computer viruses disappear?

It is pretty ironic that computer viruses were the stimulus for anti-malware programs to be created, and it was exactly anti-malware that made viruses cease to exist. Malware analysts fought viruses for a decade but finally found a way to stop them all – even the newest ones. They implemented the rule "if it reads text as code, it is a virus" – and that was enough to make malware creators' efforts useless. They might have persisted if their work had been profitable, but it was much more reasonable to switch to other malware types, or even to a legitimate job. Of course, viruses did not disappear completely. It is possible to bypass the rule mentioned above with certain obfuscation and repacking tricks. However, there is still no way to monetize a stand-alone virus. It makes programs and the system malfunction – and what next? It is hard to make money on such a thing, but crooks still use it sometimes. Viruses are pretty useful when you need to exploit a vulnerability or make certain apps malfunction. That is why cybercriminals sometimes apply specially created viruses when committing cyberattacks on corporations.

Computer virus distribution

In the old days, when computer viruses sat on the malicious Olympus, they were everywhere.
You could click on an online banner and get one, or install a pirated game – and the virus would be included. Even visiting certain websites was not safe – viruses could stealthily get onto your PC and launch. Most of these spreading methods are no longer possible simply because software has much more protection. Malware developers may well say that things were better in the old days. Nowadays, as already mentioned, viruses are mostly used in cyberattacks. Hence, their usual spreading methods are the same as those of the initial payload in attacks on companies: spam of different sorts, RDP exploitation, social engineering, or even all of these together. After successful penetration of the network, the virus is launched together with keyloggers or other tools. More interesting is how these viruses are designed. Since anti-malware programs have such a powerful countermeasure, it is impossible to use classic computer viruses. That is why crooks usually order them somewhere and receive a real Frankenstein's monster. Most such malware is ordered somewhere in Asia and then used to commit attacks across the whole globe. These viruses are packed in a very unusual way and have extremely obfuscated code. Such tricks allow the fraudsters to avoid malware detection. Nonetheless, it is usually better to use a backdoor – it is much harder to detect and easier to make stealthy.

How to prevent computer viruses?

Getting a computer virus these days is like finding a gold nugget in a pig trough. There is a small chance of getting one when browsing some ancient pages on your old computer. But if you are afraid of getting one, forget about opening suspicious pages and launching programs from untrustworthy sources. These tips are not only about preventing viruses – they can help you prevent malware in general. So let's check them out, to stay aware. Illegal software is one of the biggest malware sources for individual users.
Hacked games from torrent trackers or downloaded directly from websites with "free" games, hack tools, and keygens for various software are the best carriers for malicious software. In particular, tools like KMSPico were considered one of the most common sources of ransomware and spyware at the beginning of 2022. Forget about using them, and you will decrease the chances of getting a virus significantly. A newer trend in virus distribution is email spam. Fraudsters send emails that bait users into clicking on a link or opening an attachment. Whichever it is, you will receive malware on your PC after the malicious script executes. Avoiding such emails is difficult: crooks try to make them look like genuine messages from legitimate companies, like Amazon or FedEx. Thus, you should remember the one difference they cannot hide – the sender's email address. Just keep in mind that delivery messages from Amazon will not be sent from [email protected] – Amazon has an official, genuine-looking email address for that purpose. The last piece of advice, which acts rather as a final remedy, is to use anti-malware software. The most effective way to protect yourself is to combine additional security software with your own knowledge. Proper security tools like GridinSoft Anti-Malware will protect your system from computer viruses, spyware, ransomware, and other threats.

Frequently Asked Questions

Viruses can destroy your system and steal all the valuable data it holds. To say that there is a single most vicious virus would be inappropriate; unfortunately, there are ten or more such merciless viruses. It all depends on their type; the harm they do is terrible, but each in its own way. This group of scarecrows includes Cascade, which in the late '80s and early '90s made the characters in DOS sink to the bottom of the screen while it ran a repetitive process of useless calculations. Another is AIDS.
This is another piece of DOS malware from the early '90s that captured the user's screen and scared its victims into believing their system was infected with AIDS. In that case, the system had to be rebooted and all infected files deleted. Skulls, a Trojan, attacked the Symbian mobile OS. Masquerading as a legal application, it replaced the icons on the screen with images of crossed bones and skulls and made the standard apps unusable. We could extend the list of the most dangerous viruses with many more specimens, such as Tequila, Rigel, Gruel, and others.

Polymorphic virus. These viruses hide under the guise of constantly altered forms and carefully avoid detection, thanks to a large number of ever-changing clones.

Resident virus. This one aims to damage your RAM, files, and motherboard memory; more than that, it will not let you delete it. That's right – even after you find it and try to eliminate it, it can still persist inside your OS.

Boot sector virus. This type of virus spreads to your device via a USB drive, CD, or floppy disk, infecting the boot sector that is responsible for starting your operating system, and damages your device.

Direct action virus. This virus differs in that it is the easiest to create and can spread quickly. It attaches to many EXE or COM files, then removes itself.
WEP or WPA?

GRIDINSOFT TEAM

It so happens that wireless networks are more vulnerable to cybersecurity threats than wired ones, meaning they need more robust and user-friendly security. A non-profit organization, the Wi-Fi Alliance, owns the Wi-Fi trademark and oversees the implementation and regulation of Wi-Fi security protocols. There are currently four different types of security protocols, some of them quite obsolete and others still widely used by Wi-Fi routers around the world: WEP, WPA, WPA2, and WPA3. The principle by which all security protocols work is to encrypt the transmitted data so that it is unrecognizable and unreadable to an intruder if intercepted. With the help of encryption keys, the protocols scramble the data to prevent its interception. But the question remains what those abbreviations mean and how they relate to securing your Wi-Fi router. The abbreviations stand for the following terms:
- WEP — Wired Equivalent Privacy;
- WPA/2/3 — Wi-Fi Protected Access.

To learn more about each of the terms, read the following paragraphs.

What Does WEP Mean?

This protocol was the very first of its kind created to secure Wi-Fi networks. In September 1999, the Wi-Fi Alliance put it to work. Initially, the protocol's key was 64-bit because of US restrictions on exporting cryptographic technologies. Later, the protocol received 128-bit and 256-bit key sizes; the most common implementation remains the 128-bit size. But in 2004, the protocol was retired because of its evident ineffectiveness against much-increased computing power. The WEP protocol was superseded by WPA, which built on it. Specialists advise that systems still using the WEP security protocol should be upgraded or, if that is not possible, the device should be replaced.

What Does WPA Mean?
After WEP proved its evident ineffectiveness, with numerous vulnerabilities found, WPA came to replace it. This security protocol had much better authentication and encryption features. In contrast to WEP, the protocol used two technologies absent from Wired Equivalent Privacy: the Temporal Key Integrity Protocol (TKIP) and, optionally, the Advanced Encryption Standard (AES). In addition, WPA supported built-in authentication, which WEP did not. All WEP devices can upgrade to WPA, but some security features will then fall back to the WEP level, and this happens for all connected devices. Still, WPA is better than WEP.

What Does WPA2 Mean?

This is currently the most preferred security protocol for Wi-Fi routers. It replaced WPA in 2006 and has been the most widely used to this day. The protocol uses user-based password protection that eliminates the possibility of unauthorized remote access. You don't need to go straight to your router and upgrade it to WPA2, because chances are this security protocol is already in place. But just in case, check your Wi-Fi router's security protocol by signing into the router via a browser or, if it has a mobile app, via the app. In the same way, you can change your Wi-Fi password.

What Does WPA3 Mean?

This security protocol is the latest generation. Its security level is higher than WPA2's, although the protocol supports backward compatibility. But as already mentioned, falling back to an older security protocol does not come without drawbacks. Cybersecurity specialists expect the protocol to dominate the others because of its more up-to-date security measures against present cyber threats. The WPA3 security protocol has three primary forms:

1️⃣ Wi-Fi Enhanced Open Mode. Encrypts traffic on open networks where no password is used;

2️⃣ WPA3 Enterprise Mode (WPA3 ENT).
Like WPA2 ENT, this security protocol also needs management frame protection to be in place. There also exists a stronger 192-bit version of this variant;

3️⃣ WPA3 Personal (WPA3 SAE) Mode. This variant provides security even when the chosen password is weak.

Besides having different variants for different security needs, the WPA3 security protocol offers some key features that improve Wi-Fi router security well beyond WPA2:
- Transition mode. This feature allows switching back to WPA2 if a device doesn't support WPA3;
- Simultaneous Authentication of Equals (SAE). This feature prevents brute-force attacks. Even if a password doesn't meet complexity requirements, the feature still provides the needed security;
- Management Frame Protection (MFP). This feature prevents illegitimate deauthentication of clients from the network; namely, it counteracts man-in-the-middle attacks and attempts by IDS/IPS systems to force clients out.

How To Protect a Wi-Fi Home Network?

In addition to using the appropriate Wi-Fi security protocol, you also need to follow some important cybersecurity tips for your Wi-Fi network:

💡 Turn off the remote administration feature. If you don't need this feature regularly, it is better to leave it off, because it is one of the common ways for threat actors to get at your Wi-Fi settings and change them without you. See the administration section of your router to change this setting.

✨ Turn on MAC address filtering. This setting allows you to restrict which devices can connect to your home network, giving permission only to those you have registered. In this way, you add a security measure to your network.

🧱 Enable the firewall. Most Wi-Fi routers have built-in firewalls, but sometimes they are turned off. Make sure yours is in place and not disabled. Firewalls protect against network attacks from threat actors.
🏠 Place your router in the center of your home. An obvious thing to do: if a hacker cannot reach your Wi-Fi router's signal, they can't attack you by intercepting it. Don't place your router near windows or doors and make threat actors' lives easier.

🔁 Regularly update the router firmware. While some routers have an auto-update feature, most don't, so make sure your router firmware is up to date. If a vulnerability is found, threat actors will most likely try to exploit it.

🔕 Hide your network from being seen by everyone. You can use a feature that hides your network from people in the surrounding area. Every router has an SSID (Service Set Identifier) assigned by the manufacturer; changing your network's default name, or hiding the SSID broadcast entirely, makes it harder for threat actors to find and hack into your network.

❗ Don't use the default password and username. Anything default can easily be looked up on the internet, and that's the first thing threat actors will try. So be creative and make up a complex, strong password that no one outside your network can easily guess. The same goes for the username – don't make it something obvious. A quick reminder: a strong password should consist of upper- and lowercase letters, numbers, and various special characters.
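A strong passphrase matters so much under WPA/WPA2-Personal because the 256-bit pre-shared key is derived directly from the passphrase and the SSID using PBKDF2 (4096 iterations of HMAC-SHA1), so once an attacker captures a handshake, a weak passphrase is exposed to offline guessing. A minimal sketch of that standard derivation in Python:

```python
import hashlib

def wpa2_psk(passphrase: str, ssid: str) -> bytes:
    """Derive the WPA/WPA2-Personal pre-shared key from passphrase and SSID:
    PBKDF2-HMAC-SHA1, 4096 iterations, 32-byte (256-bit) output."""
    return hashlib.pbkdf2_hmac("sha1", passphrase.encode(), ssid.encode(), 4096, 32)

# Published IEEE 802.11i test vector: passphrase "password", SSID "IEEE"
psk = wpa2_psk("password", "IEEE")
print(psk.hex())
# f42c6fc52df0ebef9ebb4b90b38a5f902e83fe1b135a70e23aed762e9710a12e
```

The 4096 iterations slow down each guess only a little by modern standards, which is why passphrase complexity (and WPA3's SAE) still matters.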
Elliptic curves are commonly used to implement asymmetric cryptographic operations such as key exchange and signatures. These operations are used in many places, in particular to initiate secure network connections within protocols such as TLS and Noise. However, they are relatively expensive in terms of computing resources, especially for low-end embedded systems, which run on small microcontrollers and are limited in Flash storage, RAM size, raw CPU abilities, and available electrical power when running on batteries. In this post, we are talking about optimizing some of the internal operations used in elliptic curve implementations, in particular (but not only) on small microcontrollers. For the impatient, here are the relevant links:
- The core article, which describes the optimized binary GCD algorithm: https://eprint.iacr.org/2020/972
- The implementation on x86 CPU (with inline assembly, for Intel Skylake and more recent cores): https://github.com/pornin/bingcd
- Another implementation, this time for ARM Cortex-M0 and M0+ microcontrollers; it also includes the Legendre symbol, and an X25519 implementation which runs in about 3.23 million cycles (new record!): https://github.com/pornin/x25519-cm0

And a summary of the performance results on ARM Cortex-M0 and M0+ CPUs:

| Operation | Cost (clock cycles) | Previous records |
| --- | --- | --- |
| Inversion modulo 2^255 - 19 | 54793 | ~270000 |
| Legendre symbol modulo 2^255 - 19 | 43726 | ~270000 |
| X25519 (point multiplication on Curve25519) | 3229950 | 3474201 |

Elliptic curves run on finite fields; here, we focus on the most basic type of finite field, which is integers modulo a given prime p. An elliptic curve that provides the traditional cryptographic security level of "128 bits" must use a finite field of size at least 250 bits or so; thus, we consider that p is a prime integer of 250 bits (or slightly more). We will use Curve25519 as our main example; this is a curve that uses the specific modulus p = 2^255 - 19 for its finite field.
Use of Curve25519 for key exchange is specified in RFC 7748 under the name "X25519". This specific use case is fast becoming the dominant way to perform key exchange for HTTPS connections. Operations modulo p are not immediate; such integers do not fit in individual registers, and must be represented as several internal words (often called limbs). Appropriate algorithms are then used to compute additions, subtractions, multiplications, squarings and inversions modulo p. In general, additions and subtractions are fast, multiplications much less so. Squarings are a special case of multiplication and, depending on target architecture and implementation technique, may be somewhat faster (cost of one squaring is between 50 and 100% of a multiplication, and commonly between 65 and 80%). The modulus p = 2^255 - 19 was specifically chosen to make multiplications easier and faster. Inversions, however, are much more expensive. The usually recommended method for inversion modulo 2^255 - 19 is Fermat's Little Theorem, which states (indirectly) that 1/x = x^(p-2) mod p: this is a modular exponentiation, with a 255-bit exponent; with the best known modular exponentiation algorithms, this is going to require at least 254 squarings, and a few extra multiplications on top of that (11 extra multiplications for this specific modulus). Unfortunately, each basic operation in an elliptic curve, in its classic description, requires an inversion in the field, and interesting cryptographic protocols will need to perform several hundreds of such basic operations. To avoid paying for the cost of a modular inversion several hundred times, the usual trick is to work with fractions: each field element x is represented as a pair (X, Z) such that the true value x is equal to X/Z. Using fractions increases the number of multiplications, but avoids inversions, until the very end of the algorithm. One inversion is still needed to convert the final result from fractional to integral.
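For concreteness, here is what Fermat's Little Theorem inversion looks like in plain Python — a sketch only; the optimized implementations discussed in this post use a dedicated addition chain (254 squarings plus 11 multiplications for this modulus) and constant-time big-integer arithmetic:

```python
# Modular inversion via Fermat's Little Theorem: 1/x = x^(p-2) mod p.
# Python's built-in pow() performs the modular exponentiation for us.
p = 2**255 - 19  # the Curve25519 field modulus

def invert(x: int) -> int:
    """Return the inverse of x modulo p (x must be nonzero mod p)."""
    return pow(x, p - 2, p)

x = 123456789
assert (x * invert(x)) % p == 1
```

This is the baseline that the optimized binary GCD competes against.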
In a way, this merges all inversions into a final one; however, that final inversion must still be computed. To put some numbers on that: an instance of X25519 requires 1276 multiplications, 1020 squarings, one inversion, and some other cheaper operations. The cost of the inversion will then represent between 7 and 10% of the total cost. On an ARM Cortex M0+, the previous record was held by Haase and Labrique at 3474201 cycles – if the microcontroller operates at 8 MHz, this represents close to half a second, and it must be used twice to make a TLS key exchange. Thus, the computational cost brings the computation time into the range of values that are perceptible to the human user; and human users are quite sensitive to latency and perceived slowness. This highlights the importance of optimizing such operations. Fermat's Little Theorem is not the only known modular inversion algorithm. In fact, more than 2300 years ago, Euclid described a method to compute the greatest common divisor between two integers that can be readily extended into a modular inversion method (this is called the extended Euclidean algorithm). While Euclid's algorithm uses divisions (on plain integers), which are complicated and costly to implement, a binary version that involves only subtractions and halvings was described by Aryabhata 1500 years ago. In the case of Curve25519, the binary GCD algorithm was initially deemed not to be competitive with Fermat's Little Theorem, especially on "large" architectures such as modern servers, laptops, and smartphones, which now have intrinsic support for fast 64-bit multiplications. However, it is possible to considerably speed up the binary GCD by noticing that all operations on big integers within the algorithm only need to use a few bits of the said integers, namely the few least significant bits and the few most significant bits.
It is thus possible to execute most of the algorithm on approximations of the values, that fit in a single register (the "middle bits" having been removed), and propagate updates to the real values only a few times within the course of the algorithm. This does not change the overall mathematical description of the algorithm, but can significantly lower the cost. Moreover, if done with some care, these operations can be done safely on secret values, i.e. with an execution time and memory access patterns that do not depend on any secret, and therefore immune to leakage through timing-based side channels (such code is said to be constant-time and it is highly desirable in cryptographic implementations). I had initially devised and used this binary GCD optimization technique in 2018 for the implementation of a step of RSA key pair generation within the BearSSL library; I also used it as part of the key pair generation of Falcon, a signature algorithm (currently part of NIST's post-quantum cryptography standardization project). In both cases it was used on relatively large integers (1024 bits for RSA, up to 6500 bits for Falcon), and did its job (i.e. the cost of modular inversion became a very minor part of the overall cost). But it was time to properly document the algorithm, and to try it out on smaller fields, especially those used for elliptic curves. This yielded the article and demonstration code linked above. The article formally describes the algorithm and mathematically proves that it always works within the expected number of iterations. The implementation is for big x86 CPUs; specifically, it targets x86 systems running in 64-bit mode and implementing the ADX and BMI2 extensions (i.e. the adcx, adox and mulx opcodes). On such systems, Fermat's Little Theorem leads to an inversion cost of about 9175 cycles; the optimized binary GCD uses only 6253 cycles (test CPU is an Intel i5-8295U "Coffee Lake" core), which happens to be a new record for this type of CPU.
The gain of about 3000 cycles is not very significant; this is about one microsecond worth of CPU time. It won't be perceptible anywhere except in extremely specialized applications that do many elliptic curve operations; nothing of the sort will be relevant to even a busy Web server. However, for small microcontrollers, the speed gain is comparatively greater, and the gains can become significant. When implemented on an ARM Cortex M0+, we achieve inversion modulo 2^255 - 19 in only 54793 cycles, i.e. about 5 times faster than Fermat's Little Theorem method (at around 270000 cycles). This implementation is part of the new X25519 implementation (see next section). Getting substantial speed improvements on the final inversion is only one of the ways in which the optimized algorithm can help with elliptic curve implementations. For instance, when working with generic Weierstraß curves such as NIST's P-256 (which is also extremely common in TLS, as well as being much used for signatures in X.509 certificates), curve operations split into point doublings (adding a point to itself) and generic point additions, the latter being more expensive (about twice as expensive as doublings). Optimized point multiplication algorithms on such curves use "window optimizations" that reduce the number of generic point additions (down to about one point addition for every four or five doublings), but the cumulative cost of point additions remains significant. These algorithms compute a window of a few multiples of the input point and then use these points repeatedly. A fast inversion routine makes it worthwhile to normalize these points to affine coordinates (i.e. reducing the fractions to make denominators equal to 1), which yields substantial speed-ups for point addition. Such savings are cumulative with those obtained from making the final inversion faster.

A New X25519 Speed Record

I wrote a new X25519 implementation for ARM Cortex-M0 and M0+ CPUs.
These CPUs are popular in constrained applications because they offer "32-bit performance" while using only very low power and silicon area. The implementation is available here: https://github.com/pornin/x25519-cm0 This implementation sets a new speed record at 3229950 clock cycles, i.e. about 7.6% faster than the previous record. The difference is slight, but such savings are cumulative over an application and can matter if they allow getting over a threshold effect (e.g. allowing the hardware to run at a lower frequency to save on battery power). If we go into the details, the use of the optimized binary GCD saves about 215000 cycles. An additional 75000 cycles are also gained in the implementation thanks to the use of pure assembly, as opposed to Haase and Labrique's code which is mostly written in C. They have assembly implementations of multiplications and squarings (in 1478 and 998 cycles, respectively) and mine offer similar performance (1464 and 997 cycles, respectively), which is expected since I used the same underlying techniques (three layers of signed Karatsuba multiplication). However, keeping to assembly allowed me to use a different internal ABI in which functions do not save the registers they modify; such operations are necessary to interoperate with external code (e.g. written in C) but most of it is wasted in practice, since many registers do not contain values that need preservation across calls. Thus, Haase and Labrique's additions modulo p require 120 cycles, while mine use only 71 cycles. These are small savings, but they add up to about 75000 cycles over the course of an X25519 execution. On the other hand, my implementation spends an extra 46080 cycles to stick to pure constant-time discipline. Haase and Labrique assume that accesses to RAM in small microcontrollers do not leak address-related information, because such CPUs do not have caches. This is a correct assumption in most cases, but it is hard to verify in practice.
The microcontroller vendor assembles the CPU core with some RAM and Flash blocks, and other elements such as I/O connectivity and timers, and may put a data cache in there; also, memory accesses go through an interconnection matrix that arbitrates between concurrent accesses, and it is possible that some address-dependent effect may happen, especially if some I/O activity with DMA from a peripheral runs concurrently. Microcontroller vendors hardly ever document these aspects in all their fine details. Therefore, the safest way is to assume the worst and, defensively, insist on pure constant-time discipline in the implementation. Apart from the optimized binary GCD in ARMv6-M assembly, the implementation also includes an implementation of the Legendre symbol, which is not actually used in X25519, but may be useful to other operations adjacent to elliptic curves. This is described in the next section.

Legendre Symbol

The Legendre symbol, denoted (x|n) for two nonnegative integers x and n (with n an odd prime integer), is defined as follows:
- If x = 0 mod n, then (x|n) = 0.
- Otherwise, if x is a quadratic residue modulo n (i.e. there is an integer y such that x = y^2 mod n), then (x|n) = 1.
- Otherwise, (x|n) = -1.

Computing the Legendre symbol is used in particular when hashing data into a curve point in a constant-time manner (it is often used as an "is_square" test function). The usual method to compute Legendre's symbol, especially when a constant-time implementation is needed (e.g. because the data which is hashed into a curve point is a secret value, such as a password), is again a modular exponentiation, using the fact that (x|n) = x^((n-1)/2) mod n. As in the case of inversion, the exponent has about the same size as the modulus, and the cost is very close to that of inversion with Fermat's Little Theorem. There again, other algorithms are known.
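The exponentiation-based method just described (Euler's criterion) is easy to sketch in Python — for illustration only, since Python's pow() is not constant-time:

```python
# Legendre symbol via Euler's criterion: (x|n) = x^((n-1)/2) mod n.
# The exponentiation yields 0, 1, or n-1; we map n-1 back to -1.
def legendre_euler(x: int, n: int) -> int:
    r = pow(x, (n - 1) // 2, n)
    return -1 if r == n - 1 else r

p = 2**255 - 19
assert legendre_euler(4, p) == 1   # 4 = 2^2 is always a square
assert legendre_euler(0, p) == 0
assert legendre_euler(2, p) == -1  # p = 5 mod 8, so 2 is a non-residue
```

This is the baseline whose cost (close to a Fermat inversion) the GCD-based approach below undercuts.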
The Legendre symbol can be extended to the case of non-prime n (in which case it becomes the Jacobi symbol), and then again to a generic case that includes negative integers as well (this is the Kronecker symbol). These extended symbols allow the use of algorithms that are very close to the extended Euclidean algorithm and the binary GCD. And, indeed, we can use a variant of the optimized binary GCD to compute Legendre symbols.

Namely, we perform a binary GCD. In the binary GCD, we have two variables a and b whose initial values are x and n, respectively; throughout the course of the algorithm, both values slowly shrink, down to a final state where a = 0, and b contains the GCD of x and n (i.e. 1, if we use a prime n, unless we started with x = 0 mod n). At each iteration, the three following steps happen:

- If a is odd and a < b, then a and b are exchanged.
- If a is odd, then b is subtracted from a.
- a is divided by 2.

Throughout the algorithm, b is always odd, and all divisions by 2 (in step 3) are exact, since step 2 makes sure that if a is odd, then it is replaced with a - b, which is then necessarily even.

The optimized binary GCD uses the same outline, but it uses approximations of a and b most of the time, which can lead it to “miscompute” step 1, i.e. swap values when it should not, or not swap them when it should. The consequence is that a or b may become negative. However, this is fixed later on in the algorithm. The critical remark here is that while a or b may become negative, they cannot both be negative at the same time. If a subtraction makes a negative, then subsequent steps will have one value negative and one positive, until a fixing step (not shown above) occurs and makes them both positive again.
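The three-step loop above can be written directly; the following is a plain Python illustration with exact values (no approximations, and no attempt at constant-time execution):

```python
def binary_gcd(x: int, n: int) -> int:
    """Binary GCD following the three steps described above.
    n must be odd; illustrative only, not constant-time."""
    a, b = x, n
    while a != 0:
        if a & 1:
            if a < b:
                a, b = b, a   # step 1: swap when a is odd and a < b
            a -= b            # step 2: subtract (result is even)
        a >>= 1               # step 3: exact division by 2
    return b
```

With a prime n and x not a multiple of n, the loop ends with b = 1, which is the property the inversion and Legendre-symbol variants rely on.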
The Legendre/Jacobi/Kronecker symbol is obtained by doing the same computations, but also keeping track of the expected symbol value by applying the following rules:

- If x = y mod n, then (x|n) = (y|n), as long as either n > 0, or x and y have the same sign. This means that step 2 above does not change the Kronecker symbol (a|b), since either b is positive, and/or a and a - b are both positive.
- If x and y are not both negative, then (x|y) = (y|x), unless x = 3 mod 4 and y = 3 mod 4, in which case (x|y) = -(y|x). Thus, when exchanging a with b, we just have to look at the two low bits of each in order to apply the corresponding change to the Kronecker symbol.
- (2|n) = 1 if n = 1 or 7 mod 8, or -1 if n = 3 or 5 mod 8. Since the Kronecker symbol is multiplicative, this means that when a is divided by 2 (in step 3), the Kronecker symbol must be negated if and only if b = 3 or 5 mod 8 at that point (which, again, uses only the low bits of b).

This adaptation of the binary GCD to the Kronecker symbol is not new, but it shows that it is compatible with the optimized binary GCD algorithm, with a few extra operations in the inner loop. Conversely, we do not need to keep permanent track of the Bézout coefficients (values u and v such that a = xu mod p and b = xv mod p, which are necessary to compute a modular inversion), which saves some of the cost. In total, the implementation of the Legendre symbol modulo p = 2^255 - 19 uses only 43726 cycles on an ARM Cortex-M0+.

One specific case where this optimized Legendre symbol is useful is Haase and Labrique’s AuCPace protocol. This is a protocol for key exchange with password-based authentication, immune to offline dictionary attacks. On the client side, this protocol requires two evaluations of X25519, as well as one Elligator2 map. The latter needs a modular inversion and a Legendre symbol computation.
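Putting the three rules into the binary GCD loop gives a compact symbol computation. The sketch below is a plain Python illustration with exact values; the real implementation applies the same rules inside the optimized, constant-time loop:

```python
def legendre(x: int, n: int) -> int:
    """Legendre symbol (x|n) for odd prime n, via the binary-GCD
    variant described above. Illustrative, not constant-time."""
    a, b, k = x % n, n, 1
    while a != 0:
        if a & 1:
            if a < b:
                # reciprocity rule: negate on swap iff both are 3 mod 4
                if a & 3 == 3 and b & 3 == 3:
                    k = -k
                a, b = b, a
            a -= b            # subtraction does not change the symbol
        # halving rule: negate k when b = 3 or 5 mod 8
        if b & 7 in (3, 5):
            k = -k
        a >>= 1
    return k if b == 1 else 0
```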
When using modular exponentiation methods, it is possible to combine both into a single exponentiation, which will then (with p = 2^255 - 19) require 254 squarings and 23 extra multiplications; Haase and Labrique find a total Elligator2 map cost of 289276 cycles. With the optimized binary GCD and the variant for the Legendre symbol described above, the cost of the Elligator2 map can be expected to be lowered to about 102000 cycles, i.e. savings of about 187000 cycles. Combined with the savings on X25519 itself, we may hope to lower the total client cost of AuCPace from 7.35 million cycles down to about 6.67 million cycles, i.e. about 10% faster. That kind of gain is not a complete game changer, but it is large enough to be significant and make it worth the implementation effort.
WASHINGTON, D.C. — The U.S. Department of Energy (DOE) has announced a plan to provide $84 million for new observational, modeling, and simulation studies to improve the accuracy of community-scale climate research and inform equitable climate solutions to minimize adverse impacts caused by climate change. Research will focus on three tightly related scientific topics—atmospheric and environmental observations; modeling of climate change and impacts across urban regions; and simulating the climate benefits of deploying climate solutions and technologies in historically underserved communities across the U.S. “Urban regions are expected to face some of the most adverse effects of climate change, such as extreme heat and flooding,” said Geraldine Richmond, Under Secretary for Science and Innovation. “Establishing Integrated Field Laboratories (IFLs) in urban regions will enable scientists and local communities to work closely together to better understand the factors that contribute to urban climate impacts and to develop equitable adaptation solutions informed by science.” Supported research will improve scientific understanding of how climate change affects microclimates and micro-environments across all types of urban communities; how biogeochemical cycling and atmospheric composition vary across urban regions; and how equitable solutions may be identified as a means to minimize impacts, especially on the most disadvantaged urban communities. Teams of scientists will combine experimental, observational, modeling, and simulation research to unravel complex process interactions and improve scientists’ ability to understand urban climate change. Urban IFLs will require multi-disciplinary teams that bring together the skills and talents of investigators from multiple research institutions. Academic and nonprofit research institutions, national laboratories, other federal agencies, and the private sector are all eligible to apply as Urban IFL team members. 
The lead organization of each proposed Urban IFL team must be an academic institution or a national laboratory. Locally-based team members and minority serving institutions (MSI) are expected to have significant roles in each Urban IFL. Funding is to be awarded competitively, on the basis of peer review, and is expected to be in the form of five-year awards. The Department anticipates that $17 million will be available for this program in Fiscal Year 2022, pending availability of funds. The DOE Funding Opportunity Announcement, issued by the Office of Biological and Environmental Research within the Department’s Office of Science, can be found here. An informational webinar will be held on Wednesday, March 30, at 12:00 PM EDT. Registration for the webinar can be found here. Source: DOE Office of Science
Engineers at Cloudflare and Apple have created a new internet protocol, ODoH, to fill one of the biggest internet security gaps many people don’t even know existed. The protocol, named Oblivious DNS-over-HTTPS (ODoH), will make it much more difficult for ISPs to track user activity on the web.

Each time a user visits a site on the internet, the browser uses a DNS resolver to convert the web address into an IP address, which it uses to find the requested page. However, this process is not encrypted, which means that every time a site is loaded, the DNS request is sent in clear text, and the DNS resolver (the internet service provider, if the user has not selected another DNS resolver) can see which resource is being visited.

Newer protocols like DNS-over-HTTPS (DoH) encrypt DNS requests, making it more difficult for attackers to intercept them and redirect users to malicious sites instead of legitimate ones. However, DNS queries are still visible to the resolver operator, who may sell browsing history to advertisers and other interested parties. The ODoH protocol presented by Cloudflare and Apple engineers is based on previous work by researchers at Princeton University. In short, ODoH decouples the DNS request from the user, so the DNS resolver does not see which site is visited.

How does it work? ODoH wraps the DNS request in an encryption layer and sends it through a proxy server that acts as an intermediary between the user and the DNS resolver. Since the DNS request is encrypted, its contents are not visible to the proxy server. At the same time, the proxy server acts as a kind of shield that prevents the DNS resolver from seeing who originally sent the request. In other words, only the proxy server knows which user sent the request, and only the DNS resolver knows which site was requested.
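This separation of knowledge can be shown with a toy model. The Python sketch below is purely illustrative: the closure stands in for the HPKE encryption that real ODoH uses, and all names and addresses are hypothetical.

```python
class Resolver:
    """Toy target resolver: learns the query, but not who sent it."""
    def __init__(self):
        self.seen = []
    def resolve(self, sealed_query) -> str:
        query = sealed_query()            # stand-in for HPKE decryption
        self.seen.append(query)           # only the query, no client identity
        return "192.0.2.1"                # dummy answer (TEST-NET-1 address)

class Proxy:
    """Toy oblivious proxy: learns the client, but never opens the query."""
    def __init__(self, resolver: Resolver):
        self.resolver = resolver
        self.seen = []
    def forward(self, client_addr: str, sealed_query) -> str:
        self.seen.append(client_addr)     # only the sender's identity
        return self.resolver.resolve(sealed_query)

resolver = Resolver()
proxy = Proxy(resolver)
answer = proxy.forward("198.51.100.7", lambda: "example.com. IN A")
# The proxy saw only the client; the resolver saw only the query.
```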
Several partner organizations are already operating such proxy servers, allowing some users to use the new technology through Cloudflare’s existing 1.1.1.1 DNS resolver. But most will have to wait until ODoH is built into browsers and operating systems. This can take months or years, depending on how long it takes for ODoH to be certified as a standard by the Internet Engineering Task Force (IETF).
While the trend of big data breaches is set to continue, with organisations that hold personal data topping the target list, ransomware aimed at cloud services is likely to be a new development, the MIT Technology Review predicts. While the biggest and oldest cloud service providers such as Google, Amazon, and IBM have the resources and experience to make it difficult for attackers to succeed, the MIT Review points out the smaller cloud providers are likely to be more vulnerable and more likely to pay up if customer data were encrypted and held for ransom. Although 2017 has seen the emergence of an AI-driven arms race, with artificial intelligence (AI) being used by cyber attackers and defenders alike, MIT predicts that 2018 will see greater adoption of machine learning models, neural networks and other AI technologies by cyber attackers. Machine learning can process massive quantities of data and perform operations at great scale to detect and correct known vulnerabilities, suspicious behaviour and zero-day attacks. However, the McAfee Labs 2018 threats predictions report warns that adversaries will certainly employ machine learning themselves to support their attacks, learning from defensive responses, seeking to disrupt detection models and exploiting newly discovered vulnerabilities faster than defenders can patch them. Machine learning models can also match humans in generating convincing phishing emails, but can do it at scale, and attackers could use AI to help design malware that can circumvent malware detection software. To win this arms race, McAfee believes organisations must first augment machine judgment and the speed of orchestrated responses with human strategic intellect. Only then, according to the security firm, will organisations be able to understand and anticipate the patterns of how attacks might play out, even if they have never been seen before. 
Cyber threat predictions

Cyber-physical attacks have long been a concern and first hit the headlines on 23 December 2015, when cyber attacks plunged half the homes in Ukraine’s Ivano-Frankivsk region into darkness for several hours. But such attacks are becoming more common and MIT predicts that more cyber attacks targeting electrical grids, transportation systems and other types of national critical infrastructure are likely in 2018. Cyber-physical attacks are expected to be designed either to cause immediate disruption or to threaten to shut down vital systems to extort money from operators. MIT also predicts that 2018 will see researchers and attackers uncovering cyber vulnerabilities in older planes, trains, ships and other modes of transport.

Another trend expected to continue and expand in 2018 is the hijacking of computing power to mine cryptocurrencies by solving complex mathematical problems. According to security firm Malwarebytes, it blocked 11 million connections to cryptocurrency mining sites in a single day in 2017. MIT warns that cyber attackers hijacking computers for cryptocurrency mining could have a devastating effect if they target computing resources at hospitals, airports and other similar locations.

Finally, another threat that is expected to continue and expand in 2018 is cyber attacks aimed at influencing democratic elections. It is widely accepted that Russian-speaking attackers targeted voting systems in 21 US states ahead of the 2016 presidential election.
Despite efforts to address vulnerabilities ahead of the midterm elections in November 2018, MIT warns that determined attackers still have plenty of potential targets, including electronic voter rolls, voting machines and the software used to collate and audit results. In June 2017, it emerged that online voting is being held back in the UK because of fears that cyber criminals could influence the results, with 40% of UK voters concerned about the issue, according to a survey just ahead of the UK elections. “Claims that Russian hackers had some influence on last year’s US presidential elections have sparked a wave of scepticism around the safety of electronic voting here in the UK,” said Pete Turner, consumer security expert at Avast, which carried out the survey.

In the light of the European Union’s (EU’s) General Data Protection Regulation (GDPR) compliance deadline on 25 May 2018, big data breaches are likely to be the top priority for any organisation that handles the personal data of EU citizens, with fines of up to €20m or 4% of global revenue, whichever is greater. Data brokers who hold information about things such as people’s personal web browsing habits are likely to be among the most popular targets for compromise, MIT warns.
Many people, companies, and technology experts think cloud computing is a trend or simply a "thing of the future." But what many people don't realize is that cloud computing is very much here, and is very much embedded in everyday business technology life. With so many different broad definitions and vague descriptions, it can be hard to grasp everything that cloud computing is and what it entails.

The basic idea behind cloud computing is that data and resources can be accessed (privately, publicly, or through a hybrid of both) via a virtual datacenter called "the cloud." This allows providers to deliver applications that can be accessed through web browsers on personal computers or mobile devices, while the data and resources are stored at a remote location. The most important aspect of cloud computing is that it is a means of delivering computing as a service rather than as a product. This allows for greater accessibility, faster running applications, less maintenance, and overall easier manageability.

Cloud computing can be arranged into four distinct models: public, community, private, and hybrid. Each model can be beneficial depending on the respective use, though they also have some drawbacks.

A public cloud is accessible to the general public. The provider allows for accessibility to resources over the internet and can offer it either free of charge or on a pay-to-use basis. Of course, using a public cloud service can pose some privacy and security issues. Being open to the public, a public cloud system can create easy access for anyone to the data, resources, or services offered by it.

A community cloud offers cloud services that are shared with various organizations and companies. Shared costs and convenient interplay between groups are the leading advantages.
Similar privacy and security threats apply to community clouds as to public clouds, as a substantial amount of information is reachable by others, along with the possibility of alteration or even deletion at the hand of the service provider.

Private clouds are accessible only by a single entity, typically a company or an organization. This allows for maximum security and privacy. However, because only one group is allowed access to a private cloud, there is dispute as to whether using the cloud model is feasible at all, because many feel that private clouds lack the economic model that makes cloud computing such an intriguing concept in the first place.

Hybrid clouds are a mix of any two or more services and take advantage of what each of those services has to offer. Companies and individuals are able to obtain degrees of fault tolerance combined with locally immediate usability without dependency on internet connectivity. Basically this means that if part of the system were to fail, it would continue to run, though on a reduced level, and have no reliance on internet signal strength or connection whatsoever.

In my next blog we will talk about the security of the different types of cloud.
Privacy concerns will ratchet up further around IoT and 5G. Even if the industry manages to secure the billions of IoT devices already deployed, they permeate so many aspects of life that it will be nearly impossible to keep personal and private information out of the public domain. The rollout of 5G will further accelerate the proliferation of IoT technology as manufacturers rush to produce low-cost devices with integrated connectivity. All Mobile Network Operators (MNOs) are keen to adopt 5G, with IoT and Enterprise services being primary drivers, providing operators with access to new revenue opportunities from new services and applications. The proliferation of private data in the public domain will expand hackers’ capabilities. Social engineering is the most effective method cybercriminals use to breach secure systems. They know consumers will continue to connect more devices in their homes, offices, and cars, not to mention public spaces, allowing them to create a more complete picture of a person’s activities, locations, likes and dislikes. Even when these gadgets use encryption to transfer data, the backend systems with which they communicate may have their own flaws. And, even anonymized data can be used to infer a lot when cross-correlated. The Princeton University IoT Research Project had this to say about the phenomenon: “Let’s say you have a Roku TV and that you are live-streaming the Bloomberg Channel without interacting with the TV otherwise. Do you know that the Bloomberg Channel could be communicating with 13 different advertising and tracking servers in the background? Or let’s say you have a smart Geeni light bulb. Are you aware that it could be communicating with a Chinese company every 30 seconds even while you are not using the bulb?” One might recall the loyalty card craze of the 80s which spurred the IT storage market and opened the door to the broad adoption of data science technologies. 
Customers began to feel more and more uneasy about the level of detail companies were tracking and able to infer about them. IoT may take this to a whole new level. Smart connected devices are making the idea of Big Brother much more real; businesses can know what time their customers wake up in the morning, when they brush their teeth, when they put the baby to sleep, when they vacuum the living room, and what they watch on TV. Customers might not feel violated today, but all this data could come back to haunt them in the future as more and more complete models of our lifestyles are built and used within algorithms that could make decisions that profoundly affect us e.g. banks could deny loans, insurance companies could increase their premiums. The data that represents our interactions with the connected world is undoubtedly valuable, and regulatory frameworks rightly exist to ensure it is used responsibly and stored / transferred securely; however, the speed of innovation and the range of information are changing the game. The time is now to design systems with visibility, transparency, and security integrated from the start.
"What do you mean, 'Not Secure,' Google Chrome?" As you browse websites online, especially on Chrome web browsers, you may be alerted that the website you’re looking at is not secure. “Not secure? How can a website not be secure?” In this piece, we’re going to answer some frequently asked questions related to SSL certification and how it can impact your online experience. “What does SSL stand for?” The acronym “SSL” stands for “Secure Sockets Layer.” The term is referencing certain cybersecurity protocols in relation to how information is handled on a particular website. An SSL Certificate is a form of proof that the website you’re visiting has a level of encryption in place for data collected on the site. Data encryption is a security measure that makes the information you submit safer from third-party data gathering and tampering. To put it in another way, you wouldn’t want to talk to someone on a telephone line where nefarious parties could be listening in on the call. Data encryption would, by comparison, make the conversation only comprehensible to the parties on either end. An SSL Certificate is a form of proof that such security measures are in place. “Is it dangerous to visit websites without an SSL certificate?” If you’re worried that hackers and online thieves can somehow pry into your computer just by visiting a website without an SSL certificate, find some relief in knowing that’s not how an SSL certificate works. Visiting a website for research purposes without an SSL Certificate is not inherently dangerous. The riskiest aspect of visiting a non-SSL website is if you are submitting sensitive data such as credit card information or personal details to such websites. Because of a lack of encryption, the website may not be able to guarantee that your data was not intercepted on the way to its intended destination. 
“How can you tell if a website has SSL certification or not?”

For the most part, the majority of internet users don’t know a lot about what’s happening under the hood of the websites they visit. Even so, there are fairly clear indicators for determining whether a website is secure.

One easy method is by looking at the web address, also known as the URL (Uniform Resource Locator). Just before the “www”, there is an “http…” section. If a website URL simply has “http://”, the website probably does not have an SSL certificate that covers the entire website. If the website’s URL has an “s” after the “p”, as in “https://”, the website most likely has SSL certification. Some websites may only have “https://” for certain pages, such as where you fill out a data form. Others may have a link to an “http://” website but may redirect you to an “https://” website for security reasons.

Another handy way to tell if a website has an SSL certificate is by using the Chrome web browser. In July of 2018, Chrome introduced a new feature that displays “Secure” or “Not secure” before the URL of any website in the top bar.

“In addition to security, what else is impacted by not having SSL certification?”

Even more than just web security, not having an SSL certificate may impact the search engine ranking of a website. The main selling proposition of search engines is serving up the very best websites for search engine users. More than just matching keywords, search engines also take a variety of website specifications into consideration when delivering search engine results, including content relevant to your search, ease of navigation, site speed, and yes, website security. This doesn’t mean that a website without an SSL certificate won’t rank at all, but simply that search engines such as Google have said they prefer to list websites with SSL certification for the safety of their users’ data over websites that do not.
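The same check can be done programmatically. The sketch below uses only Python's standard library; the certificate fetch assumes network access to the host on port 443, and the function name is illustrative.

```python
import socket
import ssl
from urllib.parse import urlparse

def check_tls(url: str) -> dict:
    """Report whether a URL uses HTTPS and, if so, fetch its certificate.
    Minimal sketch; a real checker would also validate expiry, chain, etc."""
    parsed = urlparse(url)
    if parsed.scheme != "https":
        return {"secure": False, "reason": "no https:// scheme"}
    ctx = ssl.create_default_context()   # verifies the cert and hostname
    with socket.create_connection((parsed.hostname, 443), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=parsed.hostname) as tls:
            cert = tls.getpeercert()
    return {"secure": True, "issuer": dict(x[0] for x in cert["issuer"])}
```

For example, `check_tls("http://example.com")` reports the site as not secure without making any connection, while an `https://` URL triggers a full TLS handshake and certificate validation.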
Whether you’re a webmaster or a casual peruser of the internet, it pays to be aware of the security of the websites where you submit information.
Dubai has issued a new law regulating the dissemination and exchange of data in the Emirate. This is one of the first open data initiatives in the Middle East and is being promoted by the Prime Minister’s office as a significant step forward in Dubai’s cyber legislation and smart city ambitions. What is the law and when did it come into force? Dubai Law No.26 of 2015 (the Open Data Law) was formally published in the Official Gazette of the Government of Dubai on 27 December 2015 after being announced earlier in the year. It came into force on the date of publication. What does it cover? The Open Data Law regulates the use and sharing of “Dubai Data”, which is defined as any data related to the Emirate of Dubai and available to data providers. For these purposes, “data” means any set of organised or unorganised information, facts, concepts, instructions, observations or measurements in any form that are collected, produced or processed through data providers. A “data provider” is any UAE federal government entity, Dubai government entity (including authorities supervising special development zones and free zones) or any other person specified by the competent authority (see also ‘Who will enforce the Open Data Law?’ below). Who does it affect? The Open Data Law is stated to apply to UAE federal government entities in possession of data relating to Dubai, local (i.e. Dubai) government entities and any other persons specified by the competent authority that produce, own, publish or exchange data relating to Dubai. Accordingly, the potential application could be very wide depending on the competent authority’s approach to classifying entities as “data providers”. Article 3(3) states that specified persons may include individuals, establishments or companies existing anywhere in Dubai, including Dubai International Financial Centre and other free zones. What are the key implications of the Dubai Open Data Law? 
According to Article 15, Dubai Data is deemed to form part of the assets of Dubai Government. Dubai Data cannot be disposed of by data providers or users other than in accordance with the Open Data Law and any supporting regulations. This is potentially very significant for commercial entities that are deemed to be data providers by the competent authority as they will be required to classify their data as “open” or “shared” (see below) and to meet the other requirements on data providers relating to the sharing of this data. UAE government ministries and departments will become obliged to make certain data sets available. The stated intentions of the Law include helping Dubai to achieve its vision of becoming a smart city, enhancing transparency, increasing the efficiency of government services and consolidating a culture of creativity and innovation. Other open data programmes around the world have focused on similar objectives and it will be interesting to monitor how the Open Data Law increases the availability of government datasets for personal, academic and commercial re-use. The means through which Dubai Data will be made available will be determined by the competent authority. The Open Data Law envisages dissemination and exchange of the data via an electronic platform, bulletins, reports and other methods. The authority will approve policies for the provision of data and establish criteria and rules regarding data sharing, including technical protocols. Article 10 states that data providers must supply the “fundamental infrastructure” specified by the competent authority for the sharing of Dubai Data, including IT systems, data protection and security measures, and links to the electronic platform and other systems specified by the competent authority. This may place an immediate burden on certain data providers to upgrade their systems to meet the authority’s requirements and the costs will presumably be borne by the data providers. 
The Open Data Law does not refer to any sharing of costs between providers and the authority or government. Local government entities in Dubai must commit to a number of detailed obligations including classifying their data according to the Dubai Data Directory (to be published by the competent authority), preparing a data sharing plan and timetable to be approved by the authority, adopting all measures necessary for data sharing according to the authority’s policies, identifying potential constraints to data sharing, ensuring data quality, and providing the authority with information or reports upon request. Other data providers (i.e. federal government entities and corporates or individuals identified by the authority) will have different – and presumably less onerous – compliance requirements. These are to be specified by the competent authority. Another key feature of the Open Data Law is the power given to the authority to specify certain “reference records” and to determine the entities who will be responsible for the same. A “reference record” is any record identified by the authority that contains a specific and consistent type of Dubai Data. It appears that the intention of this part of the law is to create a single reliable source for certain information, which would be consistent with the objectives of increasing the efficiency of government services and supporting government decision making. It may also assist other users by reducing duplication and inconsistency across datasets. How will Dubai Data be classified? Dubai Data will be classified into one of two categories: - Open Data: information that may be published without restriction or with the minimum restrictions specified by the competent authority. - Shared Data: information that may be exchange between data providers according to conditions and criteria specified by the competent authority. 
It is difficult to assess the impact of these classifications until the relevant restrictions, criteria and supporting policies are published by the authority. However, it is notable that the Open Data Law suggests that any information deemed to be "Dubai Data" will either be made open or available for sharing; there is nothing in the law that appears to allow data providers to refuse to make available any information that they produce or collate concerning the Emirate if they are deemed data providers by the authority. Article 9 does acknowledge that data providers should not prejudice any rules of confidentiality or intellectual property rights, which may provide a route for commercial providers to retain some control over certain datasets. This issue will need to be assessed once the relevant supporting guidance is published and the competent authority begins to enforce the new law. Article 13 relates to the protection of data subjects. It states that the provisions of the Dubai Open Data Law shall not contravene the legal protection granted under applicable data legislation and that data providers should take all necessary measures to maintain the confidentiality and privacy of users' data throughout the data sharing process. This is an important recognition of personal rights. Although the UAE does not currently have a federal data privacy law, there are criminal laws preventing the unauthorised disclosure of certain information and free zone regulations that protect certain data types. It appears that the Open Data Law is not intended to override these personal rights. Who will enforce the Open Data Law? Transitional provisions state that the Dubai Open Data Committee will have the powers and obligations of the competent authority under the Open Data Law until such time as a permanent authority is established. The Committee was established in 2014 and comprises representatives from a number of government entities in Dubai.
It was originally tasked with guaranteeing ease of information flow and data security in the Emirate, as well as coordinating with concerned entities in Dubai to define the scope of an open data programme. The Open Data Law notes that officials of the competent authority have the capacity of judicial officers in policing the law and will be entitled to produce violation reports and coordinate with police officials for assistance in enforcing its provisions.
Google will reward the discoveries of flaws found in its open source software projects, such as Golang, Angular and Fuchsia. Bug bounty programs can be invaluable, but without the proper resources in place, they will fail hard. During Barack Obama's second term, some top administration officials began looking at bounties as a potential way to jump-start the effort to upgrade federal government's security programs. The idea was a radical one, so they decided to start slowly, by hacking the Pentagon. Following the success of the bounty programs started by companies such as iDefense, Zero Day Initiative, and Mozilla, technology companies and platform providers began rolling out bounties of their own. Among the big players to enter the game were Google, Facebook, Yahoo, and eventually, Microsoft. Bug bounties have grown from a niche idea to encourage independent security research into a massive business and a legitimate career path for bug hunters in less than 15 years. This is the story of the hackers who made that happen.
Artificial intelligence (AI) could increase global GDP by $15.7 trillion by 2030, according to PricewaterhouseCoopers. The prevalence of AI in modern society is growing at a rapid pace–and the Federal government needs to keep up. On April 24, the Brookings Institution released a deep dive report on AI. After examining the multitude of sectors AI is impacting, the report offers steps the Federal government should take to get the most out of AI while still protecting important human values. Investing in AI In the realm of AI, China is outspending the United States. If the U.S. government wants to be an AI leader, it needs to ramp up spending dramatically, the report says. Well-funded research programs will put the United States back in a leadership position, but research funding isn’t enough, it says. The Feds need to make sure there is a trained talent pool that can lead research projects and eventual AI deployments amid an ongoing tech shortage that has been well documented. Heavily investing in STEM education from kindergarten to PhD will ensure that enough workers are there as research and deployments increase. Research and regulation need to go hand-in-hand when it comes to AI. Researchers need access to data, but also need careful oversight and regulations to ensure citizens’ rights are protected, the report says. The report further touches on the importance of careful regulation, but took issue with AI regulations in the European Union (EU) that may be overly strict and work to limit innovation. “It makes more sense to think about the broad objectives desired in AI and enact policies that advance them, as opposed to governments trying to crack open the ‘black boxes’ and see exactly how specific algorithms operate,” the report says. “Regulating individual algorithms will limit innovation and make it difficult for companies to make use of artificial intelligence.” To ensure that the right regulations are in place, Washington needs to get on the same page. 
Several members of Congress are trying to make that happen with the "Future of Artificial Intelligence Act." The bill, designed to establish broad policy and legal principles for AI, proposes that the Secretary of Commerce create a Federal advisory committee on the development and implementation of AI. Feds also need to work with their state and local counterparts. States and cities are frequently leading the way in emerging technology–both in deploying technologies and creating necessary regulations and oversight. With AI capabilities evolving rapidly, regulations must be forward-looking. Creating a Federal council would allow multiple perspectives and skill sets to come together to create regulations that safeguard citizens while encouraging innovation. Technology with a Human Touch While there are many benefits to AI, there are also potential pitfalls. The report highlights potential issues with bias, discrimination, and malicious behavior. Extending existing statutes governing discrimination in the physical economy to digital platforms could keep bias and discrimination at bay. However, there still has to be human oversight to ensure that algorithms haven't gone astray. Malicious behavior is a serious concern when it comes to AI. The report highlights a few potential types of malicious or illegal behavior, including cyberbullying, stock manipulation, and cyber threats. Overall, the report suggests that all laws regarding human behavior should extend to AI. To combat issues such as cyberbullying, experts at the Institute of Electrical and Electronics Engineers suggest that AI technologies be programmed with consideration for widely accepted human norms and rules for behavior. To mitigate cyber risks, serious resources need to be committed to security, and malicious behavior should be heavily punished. Government regulations on AI shouldn't hamper innovation.
But when it comes to malicious behavior, policymakers must make sure they’ve thought of every possible prevention scenario. Regulating a new technology is never easy and getting AI right will be difficult. But, with trillions of dollars on the line, Feds need to make sure they give it their best shot.
ICLEI is an acronym you may or may not have heard before. "The International Council for Local Environmental Initiatives," founded in 1990, is a nonprofit organization focused mainly on environmental sustainability. They work toward movements in local governments to build a sustainable future. ICLEI is a large organization that, according to their website, branches to "12 mega-cities, 100 super-cities and urban regions, 450 large cities, and 450 small and medium-sized cities and towns in 84 countries dedicated to sustainable development." In 2003, in an effort to reflect their further embrace of wider sustainability issues, they changed the name to "ICLEI - Local Governments for Sustainability". With headquarters in Bonn, Germany, they extend to different areas of the world including Asia, the Caribbean, the United States of America, Africa, Canada, and beyond, covering areas ranging from small towns to large cities. Though it stems from Germany, it is not a United Nations based organization. It has different types of centers all over the world including Capacity centers that focus on climate change and Renewables centers focusing on renewable energy. This group works with movements in local governments to deal with environmental and people-based issues such as climate change, energy conservation, natural resource conservation, poverty, pollution, etc. Their resources are used by the local governments, but they do not impose on the government or gain any actual access to it. They simply work toward awareness and ways to make the world more sustainable for the future. The ICLEI encourages new members and even partnerships with the organization. To become a member, one simply fills out the application form; and, based on a local government's gross national income per capita, a yearly dues rate is decided. The dues rate is thus based on income and the population of an area.
The website also provides those interested in the organization with news and updates around the world that connect to the group's interests. For further information, please see https://www.iclei.org/.
Criminals and hackers will always exploit vulnerabilities, but software companies try to stay ahead of them. Tap or click here to see how malware can expose your browser passwords. A big problem is that malware is constantly being adapted to circumvent any security efforts. Companies like Microsoft and Google can only patch what they know about, and sometimes hackers circle around to exploit old vulnerabilities. Keep reading to find out how malware is now attacking a flaw in Windows that Microsoft patched years ago. Here’s the backstory Malware can be designed to accomplish many things, with the most lucrative goal being able to steal your banking details. A popular malware tool called Zloader has been used in various cyberattacks for years. Focused on banking, the malicious code is used to steal credentials and personal information through compromised documents, email attachments, and even Google ads. The attacks can also be converted into ransomware, where the victim needs to pay to have their files unlocked. Several patches and vulnerability fixes have been released against ZLoader in the past. But a new version of the malware is attacking a flaw that Microsoft patched in 2013. Check Point Research detailed how the updated campaign uses a patched flaw in Microsoft’s digital signature verification system to bypass detection. To gain access to a system, hackers must trick a user into installing a real remote IT management tool called Atera. But the dynamic-link library file (or .dll) of the tool has been compromised with ZLoader. Any computer will automatically check the file’s digital signature, but because of the vulnerability, the malware won’t be flagged. The file will get a clean bill of health from Windows Defender as it has Microsoft’s genuine signature attached. What you can do about it Check Point Research notes that 2,170 unique IP addresses have downloaded the compromised Atera file. 
The majority (864) are located in the U.S., while Canada has around 300 infections and India has 140. You would need to have downloaded the compromised Atera file for your PC to be impacted by this malware. A patch for the Windows flaw has been available since 2013, but it isn't easy to install manually. Another problem with the patch is that it has a high possibility of triggering false positives on legitimate files. That's why we don't recommend installing it. If you'd like to see the steps to install the patch manually, click here and scroll to the Safety Tips section.
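One defensive habit when signature checks can be fooled is to verify a download against a vendor-published hash obtained out-of-band. A minimal Python sketch of the idea (the expected hash would come from the vendor's website; nothing here reflects real Atera release values):

```python
import hashlib

def sha256_of_file(path, chunk_size=65536):
    """Compute the SHA-256 digest of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()

def matches_known_good(path, expected_hex):
    """True if the file's hash equals a vendor-published value."""
    return sha256_of_file(path) == expected_hex.strip().lower()
```

Unlike a subverted Authenticode check, a hash comparison fails the moment a single byte of the installer changes.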
Telcos are Joining the Green Revolution Climate change is one of the most significant issues of our times and a real threat to our global security. If we look specifically at the carbon footprint of the Information and Communication Technology (ICT) sector alone, it is currently making up around 1.5% of total global greenhouse gas (GHG) emissions. A paper published in the Journal of Cleaner Production showed that this amount is expected to grow to account for as much as 14% of global emissions by 2040, which is about half the emissions of the global transport sector. What drives the GHG emissions? Dr. Lotfi Belkhir, the paper co-author, explains this increase in GHG emissions: For every text message, every phone call, every video you upload or download, there’s a data center making this happen. Telecommunications networks and data centers consume a lot of energy to serve you and most data centers continue to be powered by electricity generated by fossil fuels. It’s the energy consumption we don’t see. It has never been more important for telecom companies to invest in the use of eco-friendly renewable energy sources when implementing their telecom tower sites. We see more telecom operators with energy costs amounting to 25% or more of their cell tower operations are becoming large buyers of so-called “green electricity.” The new era of 5G services is coming A new research study published by Vertiv, together with technology analyst firm 451 Research, showed that moving towards 5G will increase energy consumption by 150% – 170% by 2026. In terms of costs for 4G operators today, energy costs represent around 30% of total OPEX. With 5G, the costs of the cell tower operations are expected to double, as more energy is required to power existing and new base station sites. The energy increase would be largely due to the rise in the number of small cells and massive multiple-input/output antennas. 
Many telco providers are taking additional energy efficiency measures, as 94% of survey respondents indicated that they expect overall energy costs to increase along with 5G/MEC deployments. Saving costs with renewable energy Powering cell tower operations with renewable energy is an excellent opportunity to improve efficiency and save costs, especially for towers in remote locations. Typically, these towers run on diesel gensets that require high costs for operation and maintenance. The recent price decrease of renewables makes onsite solar and wind energy often less expensive than electricity from the grid. However, the main issue with renewable energy is its complete reliance on weather, and therefore it cannot be deployed as an independent source of continuous power. In terms of costs and network resiliency, the telecom tower equipment solution that works the best is a hybrid solution. The solution can store and supply energy from other sources when the sun or wind is not producing power. You'll find that large-scale batteries are employed to store the energy generated, but they have a limited lifespan. For back-up power, you'll discover diesel generators deployed at a site. Telecom operators worldwide are reducing their GHG emissions Balancing renewable energy solutions with better monitoring of the telecom tower equipment and improved exercising of the generators is becoming the key to increasing efficiency and reducing costs for telecom site monitoring systems. As we note in our White Paper on GHG Emissions, more and more telecom companies worldwide are committing to reducing their carbon footprint and becoming more environmentally friendly. Here are a few examples: - Deutsche Telekom uses energy-efficient technology not just for their networks but also for lighting, monitoring, and cooling their systems.
- AT&T is leveraging a network automation platform to make intelligent decisions that safely allow a subset of a cell site's capacity to temporarily go into a sleep mode and reduce its energy footprint. - MTN Group is deploying hybrid solutions at cell sites to reduce its GHG emissions, resulting in close to 30% savings. Looking ahead, tackling climate change will continue to be among the top concerns for the telecom industry. By utilizing innovative and eco-friendly solutions to reduce GHG emissions, you may also increase the efficiency of the telecom site monitoring systems at the same time. Are you ready to join the green revolution? As the digital ecosystem keeps growing worldwide, it's no surprise that technology will play a fundamental role in addressing climate change and reducing GHG emissions. The ICT sector is only at the beginning of its sustainability journey, and telecom companies are starting to join the green revolution by implementing low-carbon and sustainable practices. Check out our free GHG Emissions White Paper on how telecom site automation can help you reduce your carbon footprint.
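As a rough illustration of what the figures above imply, assume energy is ~30% of a 4G operator's OPEX and 5G doubles the energy bill (illustrative inputs, not data from any specific operator). The overall OPEX impact can then be estimated in a few lines of Python:

```python
def opex_after_energy_increase(total_opex, energy_share=0.30, energy_multiplier=2.0):
    """Estimate new total OPEX when only the energy component grows.

    All non-energy costs are held constant -- a simplifying assumption
    for illustration, not a forecast.
    """
    energy = total_opex * energy_share
    other = total_opex - energy
    return other + energy * energy_multiplier
```

For an operator spending 100 (in any currency unit), the estimate comes out at 130: doubling an energy bill that was 30% of OPEX raises total operating costs by 30%.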
Hard drives are an essential component of modern computing. Without a hard drive, your device will be hard-pressed to complete any commands, let alone support or save your important information. But sometimes, this hardware doesn't operate as designed and computers may return an error like, "hard drive not being detected." Unfortunately, an HDD not being detected is a common issue that happens to many computers. The marriage of hardware and software required to make a hard drive work well has quite a few spots for potential failure. Even so, a hard drive not detected error actually isn't an insurmountable issue. You just have to determine why the computer isn't recognizing the hard drive and know how to troubleshoot a computer not detecting the hard drive. Why is your hard drive not being detected? There are many reasons why a computer does not recognize the internal hard drive. The trick is identifying the symptoms and diagnosing the problem so you can pursue a workable solution. If you're wondering, "Why is my computer not detecting my hard drive?", look no further than this list to help you access your hard drive! The hard drive isn't turned on This sounds simple, and it is. For most computers, all you have to do to confirm the hard drive is turned on is tap F2 to start and bring up your system settings. A disconnected or damaged connection Every hard drive is connected to a device with a series of cables. Visually inspect the connections to make sure they're all secure and appear to be undamaged. The hard drive isn't spinning Sometimes a hard drive is not being detected because the disk or device isn't firing up in the first place. To see if this is the issue, remove the cover from your computer and boot your machine. Pay close attention to the hard drive. You may not see the spindle moving, but you should certainly hear it with the cover off.
Another potential reason an HDD is not being detected is that the data stored on it has been corrupted. The most common cause of bad data is a virus or malware, but there are numerous other culprits of corruption, as well. How to troubleshoot a hard drive not being detected If your hard drive is not being detected, fixing the problem yourself is no easy task. But there are a few steps you can take at home before calling in a hard drive specialist. Before attempting to troubleshoot a computer not recognizing a hard drive, be sure to turn off your external device and disconnect the computer from all power sources. The last thing you want is to get shocked while you're trying to fix a hard drive not being detected! Once you've done that, here are three troubleshooting tips to try at home if you're thinking, "My computer can't detect my hard drive." Check your system settings If you think your settings may be incorrectly configured and are stopping the hard drive from being detected, look into your Windows PC or Mac system settings. Tap the F2 key to bring up the settings, and make sure the hard drive is marked as "On." You may also want to consult the manual provided by the manufacturer to ensure the device is properly set up. Check for damaged cables Your connector cables may look fine, but you should still check to make sure they're in good working order. Switch the cables to see if new ones fix your issue. If the hard drive is detected, it's clear the problem is the connection and damaged or worn-out cables. But if you're still experiencing the error, you'll need to move on to Tip #3. Run your antivirus software If your computer is not recognizing your hard drive because of corrupted data caused by a virus or other hostile attack, try running your antivirus software. This should clear out any bad sectors and remove any data that is preventing the hard drive from being recognized by your device.
If none of these tips resolve the issue, the time has come to consult with a data recovery specialist to save the data on your hard drive. Contact DriveSavers to ensure your hard drive has the best possible chance of recovery.
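Before calling in a specialist, you can also ask the operating system directly which drives the kernel has detected. A Linux-only Python sketch (on Windows you would check Disk Management instead; `/sys/block` is the Linux kernel's view of block devices):

```python
import os

def detected_block_devices(sys_block="/sys/block"):
    """Return the block devices (disks) the Linux kernel has detected.

    If your drive is missing from this list, the OS never saw it at
    all -- pointing at power, cabling, or firmware rather than at
    drivers or partitioning.
    """
    try:
        return sorted(os.listdir(sys_block))
    except FileNotFoundError:  # not a Linux system, or path unavailable
        return []
```

A healthy machine typically shows entries like `sda` or `nvme0n1`; an empty or shrunken list at this level means the problem sits below the operating system.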
Sinking its teeth deeper into the health care industry, IBM this week agreed to let Mayo Clinic use its Blue Gene supercomputer to research diseases. The goal is to tap into the abundance of new data to foster medical breakthroughs, according to a statement. Financial terms of the pact were not made public, but IBM and the Mayo Clinic said they would spend ”millions of dollars per year in manpower, research and technology.” The Mayo Clinic will be the first medical institution to use the power of Blue Gene, accessing specialized algorithms to perform molecular modeling. IBM and the clinic will also aim to map current and historical patient records and link them to new types of medical information. ”Our collaboration with IBM is focused on advancing the Mayo Clinic mission in the areas of patient care and research,” said Denis Cortese, M.D., president and CEO of Mayo Clinic. ”We are at a point with standards in technology and new genomic-based analytic techniques where we can achieve more in the next 10 years than we’ve achieved in the last 100, and we see in IBM a partner with a very unique capacity to deliver expertise and innovation.” IBM and Mayo Clinic have already integrated 4.4 million patient records into a unified system, making it easier for physicians and researchers to call up info. Going forward, both Big Blue and Mayo will integrate genomic and proteomic data with clinical records and public databases for use by physicians. The companies will also use Blue Gene to compare patient data to the data of other patients with similar disease characteristics. For example, a doctor might be able to pinpoint the exact location of a patient’s cancer, as well as its gene characteristics, and make a prognostication based on the outcomes of therapy in the last 500 patients with similar cancer. 
The multi-billion-dollar medical field is a lucrative one for IBM and other high-tech companies, which are all jockeying for position among the world's largest life sciences companies. IBM has locked down the Mayo Clinic since 2001 and Blue Gene, one of the company's most popular supercomputers, has been used by several government organizations for research. Last November, IBM introduced a smaller version of Blue Gene. This article was first published on internetnews.com.
Penetration testing aims to uncover potential security vulnerabilities which could in turn lead to a cyber security breach, enabling remediation of issues to be undertaken before they are exploited by a real intruder. Here we answer the most common questions about penetration testing: What does 'pen test' stand for? Pen test stands for penetration testing. What is penetration testing? A penetration test is a methodology for evaluating the effectiveness of an organisation's cyber security controls. Testing is undertaken in a controlled environment to identify security flaws. Simulated attacks are performed on a network, just as if a real cyber attacker were attempting to find security gaps. What are the goals of pen testing? The goal of penetration testing is to verify the effectiveness of a network's existing security measures. However, penetration testing is more advanced than basic vulnerability scanning. It identifies how a cyber attacker would breach a network and how they would then gain access to sensitive information such as client/staff data, financial data or even research findings. What gets tested? Pen testers attempt to "break into" an organisation's systems, networks, and software by looking for vulnerabilities across the following areas: - Network endpoints - Network security devices - Web applications - Wireless networks - Mobile and wireless devices How do pen testers simulate an attack? Pen testers act like malicious actors: they simulate an attack using the same approaches as real hackers would, including: - Operating system backdoors - Misconfigurations in cloud-based applications and services - Social engineering tactics - Weak passwords or unencrypted passwords Why is penetration testing needed?
There are many reasons why an organisation may undertake pen testing: - To protect their ‘crown jewels’ such as intellectual property, customer and staff data, financial information - To protect their brand and reputation - To decrease downtime in the event of a true security incident - To ensure compliance with regulatory standards such as Payment Card Industry Data Security Standard (PCI DSS) - To identify vulnerabilities during infrastructure change programmes i.e. system upgrades, new software releases, new applications, new hardware - As part of a due diligence process for contracts, mergers and acquisitions - To proactively identify emerging or new vulnerabilities that were not previously known In the 2020 Pen Testing Report by Core Security, 97% of respondents noted that penetration testing was important to their security posture. Why is pen testing important? Undertaking penetration testing is a cyber security best practice designed to improve your cyber security strategy. A controlled and managed simulation of an actual system intrusion provides a proactive and realistic experience of a security breach, enabling you to plug any security gaps before a real attacker finds them. When should pen tests be run? Penetration testing should be run as frequently as possible, especially when significant changes or updates to your infrastructure or digital strategy are made. How often should pen tests be undertaken? 
The frequency of undertaking penetration tests will depend on a number of organisational factors, including: - Budget availability - Network changes, as tests should be undertaken as part of and to coincide with an organisational change programme - Size of the network, as you may wish to undertake a rolling programme to ensure coverage of all systems, software, hardware, applications, etc. The timing of a testing programme should be adaptable and balanced to ensure that risk is minimised whilst enough time is allowed between recurring tests for remediation work to be undertaken. What are the different types of pen testing? Penetration tests can be conducted in several ways. The most common difference is the amount of knowledge of the implementation details of the system being tested that is available to the testers. - Black box penetration testing: assumes no prior knowledge of the infrastructure to be tested. The testers must first determine the location and extent of the systems before commencing their analysis. Black box testing simulates an attack from someone who is unfamiliar with the system, such as a malicious actor trying to break in and cause havoc. - White box penetration testing: provides the testers with complete knowledge of the infrastructure to be tested, often including network diagrams, source code, and IP addressing information. White box testing simulates what might happen during an "inside job" or after a "leak" of sensitive information, where the attacker has access to source code, network layouts, and possibly even some passwords. - Grey box testing: is a combination of white box testing and black box testing. Typically a grey box tester will have permission to test the system, but not have prior knowledge of the system. The aim of this test is to discover defects resulting from improper structure or improper use of applications. Can I undertake pen testing in-house? Possibly.
This is of course dependent on your available resources: - You have experienced and trained in-house expertise - To be impartial and objective, the testing resource should be 'independent' and not part of a project or build team; they should not be testing their own work - Testing resources must undertake ongoing training and monitoring of emerging threats and vulnerabilities, as well as keeping up to speed with the latest testing methodologies - Penetration testers require access to a dedicated test lab for pre-production work and to penetration testing tools Considering the cost of investment required for in-house penetration testing, it may be more cost-effective to outsource penetration testing to a third party. How do I choose a pen test provider if I outsource? A penetration testing provider should be professional and reputable within the industry: - Relevant expertise: ensure that the pen testing provider's expertise matches the scope of your project requirements - Appropriate certifications: pen testers should be knowledgeable and experienced with appropriate training. Ask about their industry certifications. - Trusted staff: penetration testers should be adequately vetted by their employers with their backgrounds checked - References and recommendations: should be available if requested - Sample reports: should be made available in advance. Make sure their reporting is understandable and well-organised, with clear actionable recommendations of how vulnerabilities can be remediated Infosec Partners provides a full spectrum of security penetration tests resulting in reports and recommendations from which executive management as well as technicians can gain the information they need to secure their systems and networks. What do I need to do to prepare for pen testing? In advance of the testing the scope of the project will need to be agreed.
As the purpose of testing is to assess security controls at that moment in time, there is in essence no need to change anything within your network specifically for penetration testing.

How much time is needed to undertake pen testing?
The timescale depends on the size and complexity of the pen testing project. Rigorous and detailed planning for penetration testing is required, as is time for review and remediation measures. However, the actual ‘testing window’ should ideally be 1-3 weeks.

Will pen testing disrupt our network? Should we expect a system crash?
Your systems will not be disrupted by well-planned and coordinated penetration tests. It’s important that all stakeholders are aware of the timeline and that all relevant teams are kept up to date. With the right expertise and plans in place you won’t have to worry about operational systems crashing; business as usual is to be expected.

What should we expect from the results of pen testing?
A penetration test report will contain detailed, sensitive information about your organisation’s security vulnerabilities; it is highly confidential and should not be widely circulated.

How often should we re-run penetration testing?
Depending on the size and complexity of your network, and your organisation’s change programme, we would recommend that you implement a programme of recurring pen tests to counter emerging threats and vulnerabilities. We also recommend re-tests on found vulnerabilities to ensure that remediation has been successfully completed.
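As an illustration of the recurring re-test recommendation above, here is a minimal sketch of tracking found vulnerabilities and scheduling verification re-tests. The 30-day remediation window and the field names are assumptions for illustration, not part of any standard or of this guide.

```python
from datetime import date, timedelta

# Minimal sketch (assumed field names and window): schedule a verification
# re-test for every finding whose remediation has not yet been confirmed.
RETEST_AFTER_DAYS = 30  # assumed remediation window; tune to your programme

def schedule_retests(findings):
    """Return (vuln_id, retest_date) for each unverified finding."""
    return [
        (f["id"], f["found"] + timedelta(days=RETEST_AFTER_DAYS))
        for f in findings
        if not f["remediation_verified"]
    ]

findings = [
    {"id": "VULN-1", "found": date(2022, 9, 1), "remediation_verified": False},
    {"id": "VULN-2", "found": date(2022, 9, 5), "remediation_verified": True},
]
print(schedule_retests(findings))
# [('VULN-1', datetime.date(2022, 10, 1))]
```

Verified findings drop out of the schedule; everything else gets a re-test date, in line with the recommendation to confirm remediation rather than assume it.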
Fog computing provides more flexibility for storing and processing data

One of the most exciting developments of the IoT is the innovation it fosters. While cloud computing existed prior to the IoT really taking hold, new types of computing are becoming increasingly relevant in the IoT ecosphere: Edge Computing and Fog Computing. Let’s take a look at the similarities and differences between cloud computing, Edge Computing, and Fog Computing.

Cloud technologies themselves have existed for years, enabling trends such as Software-as-a-Service (SaaS) and Platform-as-a-Service (PaaS). When data is stored, managed, and processed in the cloud via remote servers hosted on the Internet, it’s called Cloud Computing — and it can save enterprises a lot of time, money, and resources. The cloud has democratized computing, giving organizations an alternative to large data centers. Cloud computing represents a step away from legacy networking, but transferring data to and from the cloud is still expensive, even if it’s less expensive than the traditional architecture.

Edge Computing pushes data processing out to the edge of the network, where the data is generated. All edge devices — routers, sensors, smart devices, and much more — can do Edge Computing. Depending on the situation, Edge Computing may or may not be affiliated with a cloud or server; it can exist as a standalone machine, for example. Edge Computing helps address the challenge of data build-up, mostly in closed M2M/IoT systems. Companies use it for data aggregation, denaturing, filtering, data scrubbing, and more — with the ultimate goal of minimizing costs and latency and controlling network bandwidth.

With a shared API and communication standard, Fog Computing aggregates data at its original source, before it hits the cloud or any other kind of service. It enables intelligence and processing closer to where the data is being created, rather than the other way around.
Fog Computing and Edge Computing devices perform the same tasks, but fog offers the ability to spread out computational tasks — things like data filtering, data removal for privacy, data packaging, and real-time analytics — between a cloud provider and the edge. It does this in a simple manner such that the programmer developing a solution has a seamless experience designing in the cloud or pushing code to the fog using the same framework and APIs. In essence, the fog allows you to move compute to where the data is, versus moving the data to where the compute is.

Applying Fog Computing

Fog Computing fosters a more flexible environment for storing and processing data, which helps enterprises address cost, bandwidth, and latency issues. Latency is a good example, especially in mission-critical situations. If a city needed to issue an intelligent Amber Alert, it could enact a citywide deployment of cameras, sensors, and other remote tools. Hypothetically, the alert may be for a missing 4-foot-tall child last seen wearing a red hat.

The traditional way to utilize remote tools would be to have all of the cameras and sensors begin streaming data into the cloud. However, this carries high costs and presents a significant latency problem. Running that much data through various algorithms and engines — especially when most of the data contain neither the image of a child nor the image of a red hat — can prove burdensome. It’s imprudent to stream all that redundant data to the cloud, burdening the WAN networks. It would be better to process the data close to the source, filtering out irrelevant data.

With a programmatic interface available via Fog Computing, the city can instead tell the remote cameras to do a first-pass inference of 4-foot-tall children wearing red hats. If they find a possible match through this filter, they can pass the data back up into the cloud for a second-level integrity check. From there, further analysis can determine whether there is a match.
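The first-pass/second-level pattern just described can be sketched as follows. The frame fields and detection logic are purely illustrative stand-ins; a real deployment would run image-inference models on the cameras rather than checking dictionary keys.

```python
# Illustrative sketch of the fog pattern described above: each camera does a
# cheap first-pass filter, and only candidate frames reach the cloud for a
# more expensive second-level check.

def edge_first_pass(frame):
    """Runs on the camera: crude match on height and hat colour (stand-in logic)."""
    return frame.get("height_ft") == 4 and frame.get("hat") == "red"

def cloud_integrity_check(frame):
    """Runs in the cloud: pretend second-level analysis of the candidate."""
    return frame.get("confidence", 0.0) > 0.9

def process(frames):
    candidates = [f for f in frames if edge_first_pass(f)]  # filtered at the edge
    matches = [f for f in candidates if cloud_integrity_check(f)]
    return len(frames), len(candidates), matches

frames = [
    {"id": 1, "height_ft": 6, "hat": "blue"},
    {"id": 2, "height_ft": 4, "hat": "red", "confidence": 0.95},
    {"id": 3, "height_ft": 4, "hat": "red", "confidence": 0.40},
]
total, uploaded, matches = process(frames)
print(f"{uploaded} of {total} frames sent to the cloud; {len(matches)} match")
# 2 of 3 frames sent to the cloud; 1 match
```

Only the frames that survive the edge filter consume WAN bandwidth and cloud compute, which is the cost and latency saving the article describes.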
That analysis can be done more quickly and more cost-effectively than if the entire batch of raw data were being streamed up to the cloud for analysis.

The broader need for Fog Computing is becoming increasingly apparent. Industry trends and research show that exponentially larger amounts of data are being generated globally. The ability to store and move this data, however, is becoming problematic. In 2017, for every 150 bytes of data that are produced, 149 bytes of data have to be filtered or thrown away. It is simply impossible to move that amount of data around in real time. As the IoT continues to develop, some estimates predict up to 50 billion connected devices will be in use. Individual users and enterprises will not be able to move the massive amount of data generated by the IoT through network infrastructures and into the cloud, much less normalize the cost of doing so. Companies must innovate how to push intelligence and computing farther out toward the source of the data itself.

Fog Computing With Router SDK

With the right solution in place, Fog Computing allows IT teams to design and deploy software to edge devices through the cloud, giving cloud developers more flexibility than ever. Read this Knowledge Base article for a deeper dive into leveraging Router SDK.
Understanding Packet Loss in Network Monitoring and Analysis Appliances

The key to zero packet loss lies in understanding the four key sources of loss.

By Daniel Joseph Barry

Network monitoring and analysis has grown in importance as the Internet and IP networks have become the de facto standard for a range of digital services. The commercialization of the Internet has only aggravated this need and extended the range of applications to include network testing, security, and optimization. Common to all these applications is a need to analyze large amounts of data in real time.

What distinguishes the task of network analysis from communication is the amount of data to be analyzed. In a typical communication scenario, the endpoints are only interested in the packets that are related to their conversation. The other packets sharing the same connection are simply filtered out. In a network analysis scenario, on the other hand, we are interested in all the packets traversing the point we are monitoring. At 10 Gbps, this can be up to 30 million packets per second that need to be analyzed in real time.

For an analysis to be useful, every packet needs to be analyzed. That missing packet could be the key to determining what is happening in the network. Waiting for the packet to be re-sent is not an option either, because we are trying to perform analysis in real time. Packet loss is, therefore, unacceptable for analysis applications.

There can be many causes of packet loss, which can relate to how we get access to the data, the kind of technology used to capture packets, the processing platform, and the application software used to analyze the data. Let’s take a look at each of these in turn.

Source #1: Receipt of packet copies

The first source of packet loss can be the method for receiving copies of the packets for passive, off-line analysis.
Switches and routers provide Switched Port ANalyzer (SPAN) ports, which are designed to provide a copy of all packets passing through the given switch or router. Network monitoring and analysis appliances can thus receive the data they need from the SPAN port directly. In most cases, this works well, but there is the potential for packet loss if the switch or router becomes overloaded. In such cases, the switch or router will prioritize its main task of switching and routing and down-prioritize SPAN port tasks. This will result in packets not being delivered for analysis or, in other words, packet loss.

It is for this reason that many prefer to use test access points (TAPs), which are simpler devices installed on the connection itself. A TAP simply copies each packet received to the TAP outputs. The advantage of TAPs is that they can guarantee that a copy of each packet received is available. On a typical TAP, two outputs are provided per connection: one for upstream traffic and one for downstream traffic. Therefore, two analysis ports are required to capture and merge this data.

Source #2: Packet-capture technology

The second source of packet loss is the packet-capture technology used. Many appliances are based on standard network interfaces, such as those used for communication. However, these are not designed to handle the large amounts of data that need to be captured. As we said, up to 30 million packets per second need to be captured, but standard network interfaces cannot handle more than five million packets per second at the time of writing.

Another way of looking at this is in relation to what packet sizes are supported. Many of the vendors of standard network interfaces will claim full throughput for 512-byte and larger packets. With larger packet sizes, there are inversely fewer packets per second to handle. Unfortunately, the Internet and IP networks don’t start at 512 bytes, and it is far from a rare occurrence that smaller packet sizes are used.
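The "up to 30 million packets per second" figure cited earlier follows directly from Ethernet framing arithmetic. A quick sanity check, assuming minimum-size 64-byte frames plus the standard 20 bytes of per-frame wire overhead (preamble and inter-frame gap):

```python
# Worst-case packet rate at 10 Gbps: minimum-size Ethernet frames, including
# the 8-byte preamble and 12-byte inter-frame gap that also occupy the wire.
LINE_RATE_BPS = 10e9
MIN_FRAME_BYTES = 64
WIRE_OVERHEAD_BYTES = 20  # preamble (8) + inter-frame gap (12)

bits_per_frame = (MIN_FRAME_BYTES + WIRE_OVERHEAD_BYTES) * 8  # 672 bits
pps = LINE_RATE_BPS / bits_per_frame
print(f"one direction: {pps / 1e6:.2f} Mpps")                # 14.88 Mpps
print(f"both TAP outputs merged: {2 * pps / 1e6:.2f} Mpps")  # 29.76 Mpps
```

Merging the upstream and downstream outputs of a TAP on a full-duplex 10 Gbps link therefore approaches the 30 million packets per second the article cites, roughly six times what a standard five-Mpps network interface can sustain.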
If we just look at typical TCP traffic, we can see two distinct breakpoints when analyzing traffic profiles. The first noticeable breakpoint is at 1500 bytes, corresponding to the maximum transmission unit (MTU) of the Ethernet protocol. The next breakpoint is at 576 bytes, corresponding to the maximum segment size (MSS) of the transmission control protocol (TCP). Below 576 bytes, there can be a large number of smaller packet sizes corresponding to TCP acknowledge packets, control segments, etc., which can be as small as 40 bytes.

This knowledge is often used in test methodologies, where reference is made to the Internet mix, or IMIX, to simulate Internet traffic. A typical IMIX model will use a mix of 40-byte, 576-byte, and 1500-byte traffic corresponding to the breakpoints above. It is therefore clear that discounting traffic below 512 bytes does not provide a realistic and complete picture of what is happening in the network.

To guarantee packet capture, use products that are designed specifically for this task. They must ensure that all packet traffic is captured with zero packet loss, even at 100 percent load. Otherwise, the analysis is incomplete. An example of this type of product is Napatech intelligent network adapters (full disclosure: I work for Napatech), which are designed specifically for packet-capture applications. These adapters are also designed for use in standard servers, which are the most common platform for appliance design.

Source #3: Servers

The third source of packet loss is the standard servers that are used as hardware platforms for appliances. If these servers are not configured properly, packets can be lost due to processing congestion. As general-purpose processing platforms, standard servers support many applications simultaneously as well as various adapters. Sharing processing, memory, and data bus (PCIe) resources between these various applications can lead to congestion if not configured properly.
Because analysis is performed in real time, the analysis data will be lost unless it is buffered on the network adapter itself. In addition, modern servers often provide “green” profiles, where power consumption is minimized. This means that very little airflow is provided to the PCIe slots where adapters are installed, so adapters will have difficulty dissipating heat, which can lead to adapter failure (which of course guarantees packet loss). This needs to be considered in the design of the packet-capture adapter.

Source #4: Analysis application software

The fourth source of packet loss is the design of the analysis application software defining the network monitoring and analysis appliance. Many applications are implemented using a single thread, meaning that they can only execute on a single CPU core. This is sufficient for lower bit rates but becomes a source of packet loss at higher bit rates, such as 10 Gbps. The analysis application just cannot keep up. A best practice in such situations is to use a multi-threaded design that can take advantage of the multiple CPU cores available in standard servers. This in turn requires a packet-capture adapter that can distribute traffic to the multiple CPU cores in a way that fits the analysis application.

A Final Word

As can be seen, there are multiple sources of packet loss, but with careful consideration of how the data is provided to the appliance, the packet-capture adapter used, the configuration of the standard server hardware platform, and the application analysis software design, it is possible to guarantee zero-packet-loss analysis.

Daniel Joseph Barry is VP of marketing at Napatech.
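As a footnote to the multi-threaded design discussed under Source #4, the flow-to-core distribution can be sketched as follows. The 5-tuple hash and field names are illustrative stand-ins, not Napatech's (or any vendor's) actual distribution algorithm; the point is only that packets of the same flow always land on the same core.

```python
from collections import defaultdict

# Illustrative sketch: hash each packet's flow 5-tuple to one of N per-core
# queues, so every CPU core sees a consistent, self-contained packet stream.
NUM_CORES = 4

def flow_hash(pkt):
    """Map a packet to a core index by hashing its flow 5-tuple (stand-in)."""
    five_tuple = (pkt["src"], pkt["dst"], pkt["sport"], pkt["dport"], pkt["proto"])
    return hash(five_tuple) % NUM_CORES

def distribute(packets):
    """Group packets into per-core queues; same flow always hits the same queue."""
    queues = defaultdict(list)
    for pkt in packets:
        queues[flow_hash(pkt)].append(pkt)
    return queues

packets = [
    {"src": "10.0.0.1", "dst": "10.0.0.2", "sport": 1234, "dport": 80, "proto": "tcp"},
    {"src": "10.0.0.1", "dst": "10.0.0.2", "sport": 1234, "dport": 80, "proto": "tcp"},
    {"src": "10.0.0.3", "dst": "10.0.0.4", "sport": 5555, "dport": 443, "proto": "tcp"},
]
queues = distribute(packets)
# The first two packets share a flow, so they always land in the same queue.
```

Keeping a flow on one core avoids cross-core state sharing in the analysis application, which is what lets a multi-threaded design actually scale with the core count.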