Simply put, Wi-Fi spectrum analysis is performed to determine the strength of a Wi-Fi signal and what might be interfering with it. Wi-Fi is broadcast on either the 2.4 GHz or 5 GHz band, each of which is further split into channels designed to separate networks and prevent interference. However, the sharp increase in network and IoT devices, security cameras, cordless phones, and car alarms competing for bandwidth, as well as BLE devices attempting to pair, increases the chance that interference will occur. A spectrum analyzer can pinpoint the source of interference and allow the end user to make network modifications that diminish or eliminate it. For example, if a channel on the 2.4 GHz or 5 GHz band is crowded, you can move your device to another channel. Where the network includes access points with security features, spectrum analysis can also identify rogue devices on the network, allowing the access point to isolate them and remove them from the network.
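The channel-switching advice can be made concrete with a short sketch (illustrative, not from the article): 2.4 GHz channel centres are only 5 MHz apart, while an 802.11b/g channel is roughly 22 MHz wide, which is why only channels 1, 6, and 11 are mutually non-overlapping.

```python
# Map 2.4 GHz Wi-Fi channels to centre frequencies and check for overlap.
def channel_to_mhz(ch):
    if ch == 14:              # channel 14 (Japan only) sits at an offset
        return 2484
    return 2407 + 5 * ch      # channels 1-13 are spaced 5 MHz apart

def channels_overlap(a, b, width_mhz=22):
    # Channels overlap when their centres are closer than one channel width.
    return abs(channel_to_mhz(a) - channel_to_mhz(b)) < width_mhz

print(channel_to_mhz(1))        # 2412
print(channels_overlap(1, 6))   # False: part of the non-overlapping set 1/6/11
print(channels_overlap(1, 3))   # True: adjacent channels interfere
```

This is why, when a spectrum analyzer shows a crowded channel, moving to one of the non-overlapping channels usually helps more than moving to the next channel over.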
Ready to learn Data Science? Browse Data Science Training and Certification courses developed by industry thought leaders and Experfy in Harvard Innovation Lab.

This post covers the following topics related to statistical learning and their significance in data science:
- Prediction & Inference
- Parametric & Non-parametric methods
- Prediction Accuracy and Model Interpretability
- Bias-Variance Trade-Off

Statistical learning is a framework for understanding data based on statistics, which can be classified as supervised or unsupervised. Supervised statistical learning involves building a statistical model for predicting, or estimating, an output based on one or more inputs, while in unsupervised statistical learning there are inputs but no supervising output; we can nevertheless learn relationships and structure from such data.

One simple way to understand statistical learning is as determining the association between predictors (independent variables, features) and the response (dependent variable), and developing an accurate model that can predict the response variable (Y) on the basis of the predictor variables (X):

Y = f(X) + ɛ

where X = (X1, X2, . . ., Xp), f is an unknown function, and ɛ is the random error (reducible and irreducible).

Prediction & Inference

In situations where a set of inputs X is readily available but the output Y is not known, we often treat f as a black box (we are not concerned with its exact form), as long as it yields accurate predictions for Y. This is prediction.

There are also situations where we are interested in understanding the way that Y is affected as X changes. Here we wish to estimate f, but our goal is not necessarily to make predictions for Y; we are more interested in understanding the relationship between X and Y. Now f cannot be treated as a black box, because we need to know its exact form. This is inference.

In real life we will see a number of problems that fall into the prediction setting, the inference setting, or a combination of the two.
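The split between reducible and irreducible error in Y = f(X) + ɛ can be seen in a tiny simulation (an illustrative sketch, not part of the post): even if we recovered f exactly, the noise term ɛ would still leave an error floor equal to Var(ɛ).

```python
import random

random.seed(1)

# Y = f(X) + eps: even a perfect estimate of f cannot remove
# the irreducible error Var(eps). Toy example with made-up numbers.
def f(x):
    return 3.0 * x              # the "true" (normally unknown) function

n = 100_000
xs = [random.uniform(0, 1) for _ in range(n)]
ys = [f(x) + random.gauss(0, 0.5) for x in xs]   # eps ~ N(0, 0.25)

def mse(model):
    return sum((y - model(x)) ** 2 for x, y in zip(xs, ys)) / n

def fhat(x):
    return 2.8 * x              # an imperfect estimate of f

print(round(mse(f), 3))     # close to 0.25 = Var(eps): the irreducible floor
print(round(mse(fhat), 3))  # larger: the reducible error [f - fhat]^2 adds on
```

Improving fhat can shrink the second number toward the first, but nothing can push the error below the irreducible floor.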
Parametric & Non-parametric methods

When we make an assumption about the functional form of f and try to estimate f by estimating a set of parameters, these methods are called parametric methods. For example, a linear functional form:

f(X) = β0 + β1X1 + β2X2 + . . . + βpXp

Non-parametric methods do not make explicit assumptions about the form of f; instead, they seek an estimate of f that gets as close to the data points as possible.

Prediction Accuracy and Model Interpretability

Of the many methods that we use for statistical learning, some are less flexible, or more restrictive. When inference is the goal, there are clear advantages to using simple and relatively inflexible statistical learning methods. When we are interested only in prediction, we use the most flexible models available.

Assessing Model Accuracy

There is no free lunch in statistics: no one method dominates all others over all possible data sets. In the regression setting, the most commonly used measure of accuracy is the mean squared error (MSE); in the classification setting, it is the error rate, typically examined via a confusion matrix. A fundamental property of statistical learning is that as model flexibility increases, training error will decrease, but the test error may not.

Bias & Variance

Bias refers to the simplifying assumptions made by a model to make the target function easier to learn. Parametric models generally have high bias, which makes them fast to learn and easier to understand, but less flexible. Linear Regression, Linear Discriminant Analysis, and Logistic Regression are high-bias machine learning algorithms; Decision Trees, k-Nearest Neighbors, and Support Vector Machines are low-bias algorithms.

Variance is the amount by which the estimate of the target function would change if different training data were used. Non-parametric models with a lot of flexibility have high variance. Linear Regression, Linear Discriminant Analysis, and Logistic Regression are low-variance machine learning algorithms.
Decision Trees, k-Nearest Neighbors, and Support Vector Machines are high-variance machine learning algorithms.

The relationship between bias and variance in statistical learning is such that:
- Increasing bias will decrease variance.
- Increasing variance will decrease bias.

There is a trade-off at play between these two concerns: the models we choose, and the way we choose to configure them, strike different balances in this trade-off for our problem. In both the regression and classification settings, choosing the correct level of flexibility is critical to the success of any statistical learning method. The bias-variance trade-off, and the resulting U-shape in the test error curve, can make this a difficult task.
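The claim that training error falls with flexibility while test error need not can be demonstrated with a small, self-contained simulation (illustrative only, not from the post). It compares an extremely inflexible predictor (always predict the training mean) with an extremely flexible one (1-nearest neighbour):

```python
import random

random.seed(42)

def f(x):
    return 2.0 * x + 1.0        # true relationship

def make_data(n):
    xs = [random.uniform(0, 10) for _ in range(n)]
    ys = [f(x) + random.gauss(0, 1.0) for x in xs]   # irreducible noise
    return xs, ys

xs_tr, ys_tr = make_data(50)
xs_te, ys_te = make_data(200)

def mse(pairs):
    return sum((y - yhat) ** 2 for y, yhat in pairs) / len(pairs)

# High-bias, inflexible model: predict the training mean everywhere.
mean_y = sum(ys_tr) / len(ys_tr)
bias_train = mse([(y, mean_y) for y in ys_tr])
bias_test = mse([(y, mean_y) for y in ys_te])

# High-variance, very flexible model: 1-nearest neighbour.
def knn1(x):
    j = min(range(len(xs_tr)), key=lambda i: abs(xs_tr[i] - x))
    return ys_tr[j]

var_train = mse([(y, knn1(x)) for x, y in zip(xs_tr, ys_tr)])
var_test = mse([(y, knn1(x)) for x, y in zip(xs_te, ys_te)])

print(var_train, var_test)      # flexible model: zero training error
print(bias_train, bias_test)    # inflexible model: large error on both
```

With 1-NN the training error is exactly zero, since every training point is its own nearest neighbour, yet its test error stays above the irreducible noise floor: falling training error does not guarantee falling test error.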
Researchers at IBM have taken a large step toward creating nano-scale processors. They have built an electronic integrated circuit by combining conventional silicon technology with a carbon nanotube molecule. The resulting hybrid circuit is still slower than a conventional component, but it is an important proof of concept, said Joerg Appenzeller, research staff member at IBM’s Thomas J. Watson Research Center in Yorktown Heights, N.Y. IBM released the news Thursday in a paper published in the journal Science. The research could have a large impact on the design of faster processors. Forty years after Intel cofounder Gordon Moore predicted that the number of transistors on a chip would double every two years, engineers are finding it hard to keep up with “Moore’s Law.” Industry leader Intel has announced plans to move from 65-nanometer chip geometry to a 45-nanometer process by 2007. But many engineers warn that chips cannot get much denser without running into problems like overheating. A hybrid processor would avoid that problem because electrons flow through nanotubes at high speed and low friction. Analysts said IBM’s achievement was an important step toward reaching that goal. “Given that they’re looking at a 10-to-15-year time horizon until this can become a game-changer in the industry, there is a tremendous amount they have accomplished,” said Vahe Mamikunian, an analyst with Lux Research, New York City. Many researchers have noted the attractive physical attributes of carbon nanotubes before, such as higher speed, lower power consumption and higher densities, he said. One company—Nantero, in Woburn, Mass.—is using nanotubes to create high-density nonvolatile memory. Other scientists are using the thermal conduction properties of nanotubes to draw heat away from processors. That work is happening at Fujitsu, in Tokyo, Japan, and NanoConduction, in Sunnyvale, Calif.
“IBM’s work is different, because no one has really looked at using carbon nanotubes as a replacement for silicon,” Mamikunian said. “Now they have to scale that up to gigahertz and terahertz levels. They haven’t found a switching speed limit yet, so theoretically it shouldn’t be a problem.” A nanotube is like a straw built of carbon atoms, frozen in a form between a crystal and a molecule. That allows it to be very small, 50,000 times thinner than a human hair. The IBM team took a carbon nanotube and attached standard wires to it, sticking out like the teeth of a comb. That allowed the team to use a single molecule as the base for all components in the circuit, using standard semiconductor processes. “So we’ve combined the nanotube with standard architecture, similar to what’s used today in a silicon chip. The difference is that this would be smaller and faster,” Appenzeller said. The resulting circuit is called a ring oscillator, a tool used by silicon engineers like a speedometer to measure the performance of a transistor. The IBM researchers used it to measure the speed of their nanotube using alternating current power. They reached a top speed of 50MHz, which is 100,000 times faster than anything done previously with nanotubes in a circuit. “It is still not as fast as today’s silicon chips, but we can make it orders of magnitude better,” said Appenzeller. “We have a very clear picture of how to move on. Developing the first car is much harder than making a car that can go faster.” -Ben Ames, IDG News Service Check out our CIO News Alerts and Tech Informer pages for more updated news coverage.
An Introduction to Lean and the Six Sigma Principles

Lean is about removing complexity from your processes. Your processes as they stand may have grown organically according to historical criteria that may no longer be relevant. Lean reduces the number of steps in a process by identifying those steps that do not add value to the customer and, where possible, removing them (some may remain necessary). This makes your processes faster, and your customers happier. Lean is about speed.

We can illustrate the basics of Six Sigma using a simple example. Let’s say a factory produces bolts that need to be a certain dimension to meet the needs of the customer. They can be no smaller than x, and no bigger than y. In Six Sigma we call x the Lower Spec Limit (LSL) and y the Upper Spec Limit (USL). Bolts produced that fall outside of these spec limits have to be recycled. These bolts have consumed both time and material, and have incurred a recycling cost for no return.

We can plot this using a bell curve. As the curve shows, most bolts are produced with the appropriate dimensions, but tailing away on both the larger and smaller sides are the out-of-spec bolts. In statistical theory, deviation from the mean (the peak of the curve) toward the LSL and USL is measured in units of sigma (represented by the Greek letter σ). A process whose spec limits sit at -3σ and +3σ covers 99.73% of bolts produced under a normal distribution. This is the current state.

Six Sigma, by definition, considers a range of -6σ to +6σ. Statistically, this corresponds to 3.4 defects out of every 1,000,000 opportunities, or coverage of 99.9997% of bolts produced. This is the future state.

Any Six Sigma project therefore starts from a baseline of high variation. If successful, the Six Sigma project will deliver the improved variation range, typically within 4-6 months.
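The coverage figures quoted above can be checked with a few lines of Python (a sketch, not part of the original article). Note that the widely quoted 3.4 defects-per-million figure corresponds to a ±6σ design allowance combined with the conventional 1.5σ long-term shift in the process mean, not to the raw ±6σ tail probability.

```python
import math

def coverage(k):
    """Fraction of a normal distribution that falls within ±k sigma."""
    return math.erf(k / math.sqrt(2))

def tail_beyond(k):
    """One-sided tail probability beyond +k sigma."""
    return 0.5 * math.erfc(k / math.sqrt(2))

print(f"±3σ covers {coverage(3):.4%} of output")   # about 99.73%
print(f"±6σ covers {coverage(6):.7%} of output")

# The conventional 3.4 DPMO figure assumes the process mean drifts by 1.5σ,
# leaving a 4.5σ gap to the nearer spec limit:
dpmo = 1e6 * tail_beyond(4.5)
print(f"defects per million with 1.5σ shift: {dpmo:.1f}")  # about 3.4
```

Without the 1.5σ shift, the raw ±6σ tail is only about 0.002 defects per million, which is why the two figures look inconsistent at first glance.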
As you can see in the third graph, the improved state continues to reduce variance and, in partnership with the Lean aspect, makes delivery quicker and more streamlined.

Learn Lean & Six Sigma with Good e-Learning

Good e-Learning offer two Lean training programmes, Foundation & Management Overview (level 1) and Master Belt Practitioner (levels 1 & 2), as well as two Six Sigma certifications, Green Belt Foundation (level 1) and Green Belt Advanced (levels 1 & 2). These courses are designed to equip students with the skills required to manage and support a Lean Enterprise approach in any organization.
I spent the weekend with my parents, who come from the generation of folks who used typewriters — not word processors; sent letters — not email; and used accounting books — not Excel. They lived in a world where you did not lock your house or your car at night. They felt safe. Most of all, they trusted everyone. I guess I knew this, but did not truly understand the extent of their vulnerability.

When my parents got their first computer, we had carefully installed the security features, provided them with Norton Anti-Virus, and made sure they had Spybot. My husband, Rich, carefully explained (trying not to scare them) that as long as they kept their security protections up to date, there shouldn’t be any problems. Rich then proceeded to tell them about some of the dangers lurking out on the Internet.

First, he explained spyware. Some sites, he told them, download spyware without the user having any knowledge of it. Strictly defined by Wikipedia, ‘Spyware is a broad category of malicious software intended to intercept or take partial control of a computer’s operation without the user’s informed consent. While the term, taken literally, suggests software that surreptitiously monitors the user, it has come to refer more broadly to software that subverts the computer’s operation for the benefit of a third party.’ Since my parents’ computer runs Microsoft Windows, they are a target. That’s why Rich had loaded Spybot Search and Destroy. He left instructions on how to periodically execute the software.

Then Rich explained viruses to my parents. Again, according to Wikipedia, a virus in computer security is a self-replicating program that spreads by inserting copies of itself into other executable code or documents (for a complete definition, see below). Thus, a computer virus behaves in a way similar to a biological virus, which spreads by inserting itself into living cells.
Extending the analogy, the insertion of the virus into a program is called an infection, and the infected file (or executable code that is not part of a file) is called a host. Viruses are one of several types of malware, or malicious software. In common parlance, the term virus is often extended to refer to computer worms and other sorts of malware. This can confuse computer users, since viruses in the narrow sense of the word are less common than they used to be compared to other forms of malware such as worms. This confusion can have serious consequences, because it may lead to a focus on preventing one genre of malware over another, potentially leaving computers vulnerable to future damage. However, a basic rule is that computer viruses cannot directly damage hardware, only software.

Because of this, Rich had loaded Norton Anti-Virus on their computer, and left instructions on how to periodically update the virus signatures. The difference between viruses and spyware is that spyware usually does not self-replicate. We explained to Dad that the main purpose of spyware was to exploit the computer for commercial gain. This exploitation comes in the form of unsolicited pop-up advertisements, theft of personal information, and monitoring of the user’s Web-browsing activities. The viruses could directly affect his software, slowing down his computer or, in a worst-case scenario, causing it to crash.

Dad said he understood all the security implications, and how he could avoid them. He promised he would follow the directions. We left satisfied that my parents — and their computer — would be safe.

This brings us back to our latest visit. My Dad was telling us his computer was running slowly, so we checked it out. As suspected, he only updated the Norton Anti-Virus software when he got the pop-up reminder — every 30 days or so. Additionally, he had not run Spybot Search and Destroy. When we ran Spybot for him, it found a lot of spyware on his computer.
Oh yes… I also found out he had clicked all the ‘remember passwords’ boxes. His comeback to my concerns was that he only went to ‘good sites’, so it should be fine. After all, he said, he didn’t browse porn, or any of those other shady sites.

That is when I realized that some people from that earlier generation, although they now lock their houses and cars, still have a high trust factor built in. That makes them very vulnerable, especially since many of them are fairly new to using computers. And, unfortunately, many people will take advantage of that.

I guess I will have to come up with a better explanation of the issues so I can make them thoroughly understand the dangers out there. But then I run the risk of scaring them so much that they won’t use the Internet at all. Maybe I should just remember that they kept me safe as a child, and now it is my turn to ensure their safety to the best of my ability and knowledge of the Internet. I will also remind my friends to do the same for their families.

More information on adware and spyware can be found in Intranet Journal’s Spyware Guide.
There’s no shortage of clever code injections to keep IT admins awake at night. After all, hackers recently managed to use a simple SQL injection to steal the information of roughly 200,000 voters in the state of Illinois. But cybersecurity researchers have never seen anything like “AtomBombing,” a threat that puts all versions of Windows at serious risk of malware infections.

Is There No Fix?

Every supported version of Windows uses a mechanism called atom tables, which allows applications on the operating system to share data. According to ZDNet, researchers have learned that a malicious code injection into an atom table could prove disastrous. If a legitimate application is forced to retrieve the malicious code, there’s really nothing security software can do about it. Typically, application control software works by blocking unauthorized executables that may contain malware. However, malicious code can still run if it finds a way to manipulate an authorized application, and AtomBombing is exactly such a technique: it tricks your trusted applications into running malware in your IT environment. The worst part of all is that because atom tables are a core component of how Windows functions, there is no straightforward way to patch this threat away. Once your atom tables have been overwritten with malicious code, malware can execute freely.

Be Careful and Be Ready to Restore

At the moment, there are no known threats in the wild that exploit this vulnerability. However, this is hardly a silver lining, especially given the prolific nature of ransomware and other cyberthreats that are introduced through clever social engineering schemes. In the wake of the discovery of AtomBombing, a Microsoft spokesperson pointed out that “A user’s system must already be compromised before malware can utilize code-injection techniques.” That provides two glimmers of hope for Windows admins.
The first is that you can reduce the chances that AtomBombing will ever be an issue through traditional best practices, i.e. being careful what links you open and what attachments you download. The second is that with proper computer maintenance, the initial code injection needed to run malicious executables under your firewall’s radar won’t be able to survive on a system long enough to cause damage. Specifically, restoring your systems on a regular basis to their preferred configurations will help maintain operational standards while effectively wiping away nefarious code changes.

To that end, Faronics Deep Freeze provides an invaluable service. With patented reboot-to-restore functionality, you need only reboot your computers at the end of the day. The next morning, any and all malicious code injections will have been erased from your atom tables. Contact Faronics today to learn more.
A couple of decades ago, when Wi-Fi was first made available, it relied on Wired Equivalent Privacy, or WEP. As Wi-Fi gained popularity, it became clear that WEP contained a massive security flaw that could allow a hacker to effortlessly gain access to a wireless network. As such, the WEP protocol was quickly abandoned in favor of the Wi-Fi Protected Access (WPA) protocol. Over time, WPA gave way to the WPA2 protocol that most people use today.

The point is that security standards evolve over time. The WPA protocol that was once considered to be cutting edge is insecure by today’s standards. And while the evolving nature of security standards may seem quite obvious, there is another issue that must be considered: the semi-permanent nature of Wi-Fi hardware.

Now, please do not misunderstand me. There is of course no rule stating that once a wireless access point has been installed it must become a permanent fixture. The reason why I referred to Wi-Fi hardware as semi-permanent is that it tends to have greater longevity than other types of IT hardware. Most large organizations have adopted a hardware refresh cycle for network servers, storage, desktop PCs and that sort of thing. Desktop PCs, for example, are commonly placed on a five-year refresh cycle. Such a schedule helps to simplify IT budgeting by making expenditures easier to predict. It also helps to prioritize the replacement of an organization’s oldest hardware.

Wi-Fi hardware is often treated differently, though. Wi-Fi is one of those things that people tend not to think about unless it stops working. As such, there may not be any sense of urgency associated with periodically replacing Wi-Fi hardware. In some ways, retaining Wi-Fi hardware for an extended period of time makes sense. From a security standpoint, a wireless access point that was purchased last year may not be all that different from one that was purchased a decade ago.
Believe it or not, WPA2-enabled Wi-Fi hardware first became available way back in 2004, and it is still in use today. Given WPA2’s longevity, it would be easy to assume that the protocol has proven so secure and reliable that manufacturers have decided to take the “if it isn’t broken, don’t fix it” approach. However, this simply is not the case. The WPA2 protocol is at least 15 years old at this point, and it is really beginning to show its age. There are a multitude of documented vulnerabilities related to WPA2. For example, several password cracking attacks have been proven effective against WPA2. There are also techniques that a hacker can use to hijack a TCP connection and inject malicious packets into the conversation with a host.

Thankfully, the WPA protocol is getting a new lease on life. Late last year, the third generation of the protocol (WPA3) was introduced. WPA3 improves Wi-Fi security in a few different ways. For starters, WPA3-Personal mandates 128-bit encryption (with an optional 192-bit security suite in WPA3-Enterprise). Perhaps more importantly, WPA3 introduces a new feature called Simultaneous Authentication of Equals. This security feature, more casually known as SAE or as the Dragonfly handshake, is designed to prevent hackers from mounting offline dictionary attacks against the Wi-Fi password.

Unfortunately, WPA3 isn’t perfect. Several vulnerabilities have already been discovered. The largest vulnerabilities, however, stem from WPA3’s backward compatibility with WPA2. This backward compatibility is designed to allow older devices to be used with newer access points, but it can potentially allow a hacker to perform a downgrade attack.

Fortunately, there are a few things that you can do to keep your wireless network secure. First, go ahead and start replacing aging Wi-Fi hardware with newer WPA3 devices.
Even if your wireless clients are not yet WPA3-capable, the newer access points may offer other security features that your current access points do not have. Once the new hardware is in place, set a date by which users will be required to use only WPA3 devices. Doing so will eliminate any possibility of a downgrade attack (assuming that the access point lets you disallow WPA2 traffic). Most importantly, force the use of long, complex, and unique Wi-Fi passwords. Many of the attacks being used against Wi-Fi today (the so-called Dragonblood exploits) are ineffective against sufficiently long and complex passwords, so long as those passwords are unique.
In this Cisco CCNA training tutorial, you’re going to learn about CIDR, which is Classless Inter-Domain Routing. Scroll down for the video and text tutorial.

CIDR Classless Inter-Domain Routing Video Tutorial

A problem with the original implementation of classful addresses was that when the Internet authorities gave out addresses, they always gave out a complete:
- Class A with a /8 subnet mask
- Class B with a /16 subnet mask
- Class C with a /24 subnet mask

The problem was that if a company had more than 254 hosts, say 500 hosts, it was too big for a Class C. So they would be given a Class B, and they would be allocated addresses for 65,534 hosts. That is way too many, and it led to huge amounts of the global address space being wasted.

As a solution, Classless Inter-Domain Routing, or CIDR, was introduced in 1993. CIDR removed the fixed /8, /16, and /24 requirements for the different address classes and allowed them to be split into smaller networks, a practice called subnetting. For example, the Internet authorities could allocate the address 175.10.16.0 /20. The first octet, 175, falls in the Class B range, which would normally mean a /16. Rather than allocating the entire /16, the Internet authorities can now assign a /20, which means the other networks in the 175 range remain available to give to other companies. Rather than handing out a huge range, we split the classes into smaller networks that can be given to different organizations. Companies can now be allocated an address range that matches what they need, so fewer addresses are wasted.

CIDR and Route Summarization

Route summarization is another benefit of CIDR. In the example below, ISP A has allocated the address blocks shown on the left. One company got 175.10.0.0 /24, another got 175.10.1.0 /24, another 175.10.2.0 /24, all the way up to 175.10.255.0 /24. ISP A has given out 256 address blocks.
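As a quick sanity check, Python’s standard ipaddress module can show what a /20 allocation actually contains. The 175.10.16.0/20 prefix below is just an illustrative value, not an address from the tutorial:

```python
import ipaddress

net = ipaddress.ip_network("175.10.16.0/20")   # hypothetical CIDR allocation

print(net.num_addresses)                        # 4096 addresses in a /20
print(net.num_addresses - 2)                    # 4094 usable hosts
print(len(list(net.subnets(new_prefix=24))))    # a /20 contains 16 /24s
```

So a /20 comfortably covers a company needing 500 hosts without burning an entire Class B worth of space.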
We’ve also got ISP B, which has given out 175.11.0.0 /24, 175.11.1.0 /24, and so on, all the way up to 175.11.255.0 /24. ISP A and ISP B get connected. If we didn’t have CIDR and route summarization, ISP A would advertise all of its 256 address blocks to ISP B, and vice versa. With CIDR and route summarization, the two ISPs can each advertise just an aggregate block. Rather than advertising all 256 /24s, ISP A advertises 175.10.0.0 /16, which is a superset of all 256 smaller networks. ISP B then learns one route to all the networks behind ISP A, rather than learning 256 routes. ISP B similarly advertises the single route 175.11.0.0 /16 to ISP A.

Route Summarization Benefits

The benefit we get from route summarization is that ISP A doesn’t know about all 256 networks behind ISP B; it only gets a single summary route that covers all of them. One route compared to 256 routes is a lot less information: it is more efficient and takes up less memory in the router. If an individual link goes down in ISP B, it has no impact on ISP A, because that one summary route doesn’t change. It is different inside ISP B, though: whenever one of their routes goes down, the other routers have to recalculate. The benefit is that we are compartmentalizing the different parts of our network. If we have an issue in ISP A’s part of the network, it won’t impact ISP B. That makes things more stable and reliable. It also makes things more logical, which is better for us humans because it makes related problems easier to troubleshoot.
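The aggregation itself can be demonstrated with the standard ipaddress module; the 175.10.x.0 prefixes here are illustrative placeholders for ISP A’s blocks:

```python
import ipaddress

# 256 contiguous /24 blocks handed out by ISP A (illustrative prefixes)
blocks = [ipaddress.ip_network(f"175.10.{i}.0/24") for i in range(256)]

# Collapse them into the minimal covering set: a single /16 summary route
summary = list(ipaddress.collapse_addresses(blocks))
print(summary)                                             # one /16 network

# Any host behind ISP A is matched by that one summary route
print(ipaddress.ip_address("175.10.42.7") in summary[0])   # True
```

This is exactly what the router does conceptually: one /16 advertisement replaces 256 /24 advertisements, and any destination inside the range still matches.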
Further reading:
- Understanding CIDR Subnet Mask Notation: https://docs.netgate.com/pfsense/en/latest/book/network/understanding-cidr-subnet-mask-notation.html
- Classless Inter-domain Routing (CIDR): The Internet Address Assignment and Aggregation Plan (RFC 4632): https://tools.ietf.org/html/rfc4632
- IP Addressing and Subnetting for New Users: https://www.cisco.com/c/en/us/support/docs/ip/routing-information-protocol-rip/13788-3.html
A business continuity plan is a mandatory document in corporate management. A well-designed list outlining crisis response actions has helped many companies avoid downtime during the COVID-19 pandemic. If your organization does not yet have such a document, now is the time to draw one up. This article covers the essential elements that must be included in a business continuity plan and what you should pay attention to when preparing it.

A crisis can strike unexpectedly. To avoid risks, it is important to take the necessary actions in time. That is the main task of the Business Continuity Plan (BCP). Such a plan allows companies to design and agree on the main crisis management procedures, maintain the company’s operations, and quickly resume them in case of any failure. In fact, it is a set of rules that the entire company must follow to ensure the continuity of business processes and to protect its assets. Sometimes it includes disaster recovery planning.

What is a Business Continuity Plan?

A Business Continuity Plan is a set of documents that helps your company continue running after various incidents. It covers the step-by-step process you need to follow to protect the business from the impact of potential crises, from natural disasters like floods or fire to power failures, IT system theft, or illness of key staff. A BCP focuses on protecting critical business functions so that you can continue to operate and implement a recovery strategy. The strategy differs depending on the kind of crisis the business is facing. The main types of risk include:
- Office or production facility loss: for example, when a natural disaster destroys an office or, as with the pandemic, employees cannot go to work. In this case, the plan should include options for quickly transferring employees to remote work.
- Infrastructure loss: when a power failure or hacker attack has disrupted the operation of critical systems, such as accounting or information systems.
Consider and outline in your plan alternative technical solutions to restore normal operations or support the most critical processes manually. Every business is unique and the incident response will also be different. A business continuity plan provides a framework for making decisions, as well as a clear indication of responsible employees who will handle them. Key steps to a successful business continuity planning You must make a business continuity plan that outlines in writing how you will cope if a crisis occurs. It includes the following elements: Establish an emergency team During an emergency, it is important to act quickly and cohesively. Employees should not question who is responsible or has the authority to make a particular decision. Create an underlying business continuity team comprised of employees from across the organization, including senior management, IT staff, facilities and real estate, as well as physical security, communications, HR, finance, and other departments. Ensure all employees are aware of what they have to do. Draw up a plan Identify potential business process threats that could affect any branch of your organization, such as power outages. In the plan, consider worst-case scenarios rather than options for an individual incident so that the number of scenarios is kept to a minimum. Prioritize the key business functions you need to get operating as quickly as possible; define who will perform them, and determine how operations will be reassigned when key employees are unavailable. To make sure the key steps are followed, arrange the plan in the form of checklists. Test your business continuity plan Once your BCP is prepared, test how it will perform in the event of an emergency. Consider the situations that most likely can happen and cause disruptions. Be sure to measure and record test results and strive for constant improvement, whether the goals are to ensure application availability or to guarantee personnel safety. 
Conduct an emergency simulation each year. This will help you identify any weaknesses and make changes if necessary.
Keep your plan updated
A BCP is a living document – update it regularly to address the changing circumstances of your business. For example, when you move to a new office or adopt new technologies and processes, you may face a completely different range of risks.
For companies that cannot afford downtime, the cloud is an essential component of business continuity. Cloud4U cloud disaster recovery and business continuity services provide continuous availability, automatic backups, and fast, reliable support. Our cloud offerings include more redundancy and resiliency against potential failures than your company could build and maintain in-house. If you would like to start your cloud journey, let us know!
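The checklist idea above can be made concrete in code. Below is a minimal, hypothetical sketch of how a continuity checklist could be represented and audited; the scenario names, owners, and dates are invented for illustration and are not taken from any real plan.

```python
# Hypothetical sketch: a business continuity checklist as plain data,
# with a small audit that flags steps missing an owner or never tested.
from dataclasses import dataclass, field


@dataclass
class Step:
    action: str            # what to do, e.g. "switch staff to remote work"
    owner: str = ""        # responsible employee or role
    last_tested: str = ""  # date of the last simulation, empty if never tested


@dataclass
class Scenario:
    name: str
    steps: list = field(default_factory=list)


def audit(scenarios):
    """Return human-readable gaps: steps with no owner or never tested."""
    gaps = []
    for sc in scenarios:
        for step in sc.steps:
            if not step.owner:
                gaps.append(f"{sc.name}: '{step.action}' has no responsible owner")
            if not step.last_tested:
                gaps.append(f"{sc.name}: '{step.action}' has never been tested")
    return gaps


plan = [
    Scenario("Office loss", [Step("Switch staff to remote work",
                                  owner="HR lead", last_tested="2021-11-02")]),
    Scenario("Infrastructure loss", [Step("Fail over accounting system")]),
]
for gap in audit(plan):
    print(gap)
```

Running an audit like this before each yearly simulation is one way to make "keep your plan updated" a routine rather than an afterthought.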
Accidental death stats show that over 160,000 people die from unintentional injuries in the US every year, and preventable accidents are the third leading cause of death. If you still think that accidents "just happen" and can't be foreseen, we've compiled a list of accidental death facts that will change your mind. Even if they are called accidents, prevention is more than possible in most cases. Awareness and precaution are the interventions needed to save lives.
Leading Causes of Accidental Death (Editor's Choice)
- In the US, unintentional injuries were the fourth leading cause of death in 2020
- Every ten minutes, three Americans die in avoidable accidents
- Drug overdose makes up 94% of all poisoning deaths
- Yearly, more than a million Americans injure themselves by falling down the stairs
- In the US, 38,680 people lost their lives in motor vehicle crashes in 2020
- Every year, more than 27,000 people visit the hospital after unintentional gun injuries
- Every year, almost 500 people die due to accidental shootings
- Every 30 seconds, a child dies from preventable accident injuries
Accidental Death Statistics: Overview
1. In 2020, unintentional injuries were the fourth leading cause of death in the United States.
That year, there were 200,955 unintentional injury deaths. That translates to 61 per 100,000 residents. In general, the most common accidental injuries in the US include drug overdose, falls, and motor vehicle crashes.
2. Three people die in avoidable accidents every 10 minutes in the US.
Around 900 people suffer an injury severe enough to require medical assistance every 10 minutes in the US. Most preventable deaths and injuries are home accidents, according to statistics. The cost of these accidents is about $20.89 million per year, and every hour, 20 people die and 5,510 are injured.
3. West Virginia has the highest accident mortality rate in the US.
West Virginia registered 1,859 unintended deaths in 2019, resulting in a 96.9 death rate per 100,000 citizens. New Mexico follows with a 77.8 death rate and 1,687 deaths. On the other hand, even if California accounts for 35.9 deaths per 100,000 residents, it also registered 15,116 accidental deaths: the highest number of deaths from all American states. 4. Approximately 6,000 people die across the UK from preventable accidents every year. Accidental death stats in the UK show that home accidents cost society about £45.63 billion annually. Falls are the most likely cause of injuries at home at any age, but studies prove the risk increases with age. More women than men over the age of 65 die due to an accident in the home. Among children, however, more boys than girls have accidents at home. 5. Over 3,000 Swedish people died in accidents in 2020. Unlike the US, road accidents are not the number one cause of accidental death in Sweden. Falling caused the majority of fatalities with nearly 1,000 reports, while poisoning accounted for 476 deaths. Road accidents represented just 273 fatalities. 6. Unintentional injury deaths increased by 11.1% in 2020. Drug overdose caused the highest number of unintentional injury deaths that year. Globally, falls make up the second leading cause of accidental injury death, with over 680.000 fatal falls occurring every year. Accidental Death Statistics by Cause 7. Vehicle accidents are the most common cause of preventable fatalities outside the house. Unintentional deaths can occur for various reasons. Most commonly, studies report road accidents, construction-related accidents, work injuries, medical malpractice, and criminal activity as the main reasons. In most American states, families can request compensation if an accidental death happens. 8. The most common cause of accidental deaths in the home is poisoning. 
Poisoning accounts for half of all foreseeable home deaths, followed by accidental falls, representing 29% of deadly injuries. The study attests that 4% of deaths are caused by choking, 3% by drowning, 2% by fire and smoke, and just 1% account for deaths by mechanical suffocation. “Other causes” represent the remaining 11%. 9. Drug overdose accounts for 94% of all poisoning deaths. (Injury Facts/NSC, CDC) Accidental poisoning statistics determine that the most common poisoning deaths are due to opioid addictions. According to the National Safety Council (NSC), opioid preventable deaths increased in 2019 by 7%. The 25–34 age group registered the most opioid overdose deaths in 2019 with 12,537. Over two-thirds of avoidable opioid overdose victims are male. The highest number of overdose deaths ever recorded in the US in 12 months totaled 81,000 and ended in May 2020. 10. Yearly, over one million Americans injure themselves by falling down the stairs. Falling down stairs death statistics show that stairway accidents account for about 12,000 deaths. Another 3,000 people get injuries in such mishaps every day. Studies reveal that distractions are the most common cause of stairs-related injuries. Falls down the stairs are the second leading cause of workplace fatalities. 11. Falls are the leading cause of fatal injuries in the 65+ age group. One in five falls causes injury, such as head injury or broken bones. Over 95% of hip fractures are caused by falling. In the US, about 36 million older adults fall each year, resulting in over 32,000 falling deaths per year. About 3 million people require medical assistance and go to the Emergency Room. 12. The falling death rate among older people is about 64 per 100,000 US residents. Wisconsin registered the highest rate of 157 deaths per 100,000 older adults. On the flip side, Alabama accounts for just 28 fatal fall injuries per 100,000 people. 
CDC estimates that the number of falls and injuries will rise in the next decade, potentially reaching 52 million falls and 12 million injured people over 65 in 2030. 13. People over 65 are the most at risk of choking to death. Choking deaths statistics show that, on average, 33 people die of choking in the UK every month. Stats reveal that 72% of them are over 65. The same study proves that the elderly are more likely to choke at the hospital than at home. Deadly choking happens because patients are often unsupervised while eating. Care homes carry a similar risk. 14. Nearly 500 people die from unintentional shootings every year. Accidental death by firearm statistics shows that unintentional shootings account for 37% of nonfatal gun incidents and another 2% of fatalities. The same stats indicate that American citizens are four times more likely to die from gun injuries than people from other high-income countries. 15. Most unintentional gun deaths occur by playing with a gun. Playing with a gun accounts for 28.3% of deaths, while the second leading cause is believing that the gun was not loaded, accounting for 17.2% of deaths. The third leading cause is hunting accidents accounting for 13.8% of gun-related fatalities. Accidental death by gun statistics shows that approximately a quarter of those who died from an unintentional shooting had consumed alcohol, and half of them were from the 20–29 age group. 16. Annually, over 27,000 people go to the hospital after unintentional gun injuries. Just two out of every 100 unintentional firearm injuries are fatal. Notably, 26,000 people survive their injuries each year. Still, in 2019 there were 486 accidental firearm deaths, and in the same year, 90% of victims were male, and only 10% were female. Accidental Child Deaths 17. Unintentional injuries account for approximately 12,000 deaths in children in the US every year. Another 9.2 million children go to the ER due to home accidents. 
On average, five children present to emergency departments each hour for medicine poisoning. Falling dressers or televisions are the likeliest objects to cause fatal injuries. A child dies from a television tip-over every three weeks. 18. Each year, 40,000 kids under five go to the hospital following preventable accidents. Common accidental deaths among babies and toddlers are caused by choking on food or small objects. Suffocation and strangulation are the second and third leading causes of small children’s injuries and deaths. Falls in both babies and toddlers are also a common cause of avoidable injuries. 19. Drowning is the leading cause of unintentional home deaths for small children. (Stanford Children’s Health) Kids aged one to four are most likely to die due to drowning. Child accidental death statistics show that most drownings and near-drownings occur in swimming pools, ponds, and sometimes open water sites. Children can drown in as little as one inch of water. 20. Children aged 0–14 usually die of drowning or mechanical suffocation. Mechanical suffocation is the leading cause of children’s deaths, accounting for 1,110 fatal injuries. Drowning claims another 710 lives. Poisoning and falls are reasonably uncommon causes of accidental deaths among the young. Poisoning causes only 80 deaths, and falls account for just 40. 21. Each day, around 40 kids under five years old go to the hospital after choking or swallowing a foreign object. Choking deaths per year statistics reveal that about 14 kids died from choking in the past four years. Food is the most likely cause, but small objects are also a risk. All child deaths from choking occur at the hospital. According to ONS data, the leading cause is food not chopped up correctly. 22. Burns and fires are the fifth leading cause of accidental deaths in kids and adults. (Johns Hopkins Medicine) An estimated 3,500 accidental deaths in the home are due to burns. Most often, children get burned by scalding or flames. 
65% of kids aged four and under who get to the ER for burn-related injuries suffer from scalding burns, and studies note that 75% of these burns are preventable. One of the most dangerous hobby activities is fractal wood burning. 23. A child dies every 30 seconds of injuries from preventable accidents. In Australia, injuries have replaced disease as the leading cause of child deaths. More than half of injuries in kids occur at home. Car Accident Death Rate 24. Road crash injuries are estimated to be the 8th leading cause of death globally. Traffic safety facts from the Centers for Disease Control and Prevention show that more people die in car crashes than HIV/AIDS. The road crash fatalities rate is over three times higher in low-income countries than in high-income countries. Low-income countries account for more than 90% of the global road accident deaths but account for just 60% of the world’s registered vehicles. 25. About 1.35 million people die in road accidents each year. On average, there are 3,700 car accident deaths per day in the USA. Yearly, between 20 and 50 million more people suffer injuries. Over half of traffic deaths occur among pedestrians, cyclists, and motorcyclists. Road accidents are the leading cause of death among people aged 5–29. 26. In the US, more than 38,000 people die every year in road crashes. The fatality rate of motor accidents is 12.4 deaths per 100,000 residents. Another 4.4 million people are injured severely enough to need medical attention, and pedestrian and cyclist fatalities continue to rise. Car accident deaths per year in the US statistics reveal there were over 6,720 pedestrian deaths in 2020. 27. There were 38,680 victims in motor vehicle crashes in the United States in 2020. That was a 7.2% increase from 2019, when there were 36,096 fatalities. Moreover, the vehicle miles traveled in 2020 dropped by 430.2 billion miles, equating to a 13.2% decrease. 
The 2020 fatality rate was 1.37 per 100 million vehicle miles traveled. 28. Wyoming had a 25.4 car accident death rate per 100,000 residents in 2019. Twenty seven states surpassed the national death rate of 11 per 100,000 citizens, but only a few exceeded the 20 mark. Mississippi closely followed Wyoming with a 21.6 death rate and New Mexico with 20.2 deaths per 100,000 people. On the other hand, Massachusetts and New York registered the lowest rate, at 4.8 deaths per 100,000 residents. 29. The US registered a sharp increase in road deaths in 2020. Accidental deaths in the US in 2020 stats show that, despite the pandemic, which kept many drivers off the roads, those who did venture out became more reckless. An estimated 42,060 people died in car crashes in 2020, an 8% increase compared to 2019. 30. Most children under 13 who die in car accidents are passengers. 73% of child motor vehicle crash deaths were passenger vehicle occupants in 2019. 16% (138) of children were pedestrians, and another 4% (30) were bicyclists. The Summary of Accidental Death Stats Unintentional injuries are a worldwide cause of concern, and while traffic accidents cause most fatalities, we can’t overlook home injuries. Leading causes of injuries at home are poisoning, falling down the stairs, choking, drowning, and fire-related injuries. The stats and facts discussed here show that people can prevent many of these potentially fatal accidents with proper awareness and precaution. People Also Ask According to the latest data, the odds of dying from an accidental injury are one in 1,334. Drug poisoning odds are one in 71, while the odds of dying from opioids are lower with one in 98 people. That being said, one in 106 people die in motor vehicle crashes. Firearm assaults claim about one in 298 lives, and about one in 1,400 people die from exposure to fire or smoke. One in 1,675 die from falls, and drowning claims significantly fewer lives, with odds of one in 5,573. 
There are over 3.9 million accidental deaths per year worldwide, accounting for almost 7% of global mortality. Approximately 1.3 million are traffic deaths, while various other kinds of accidents that could happen at home, at work, or simply on the street account for the remaining 2.6 million fatalities. For most regions, unintentional injuries involve a significantly higher proportion of years of life lost than total deaths. Injuries occur primarily in young people and result in more years of life lost than diseases that typically affect older people. In the US, an estimated 150,000–170,000 people die from unintentional accidents yearly. Poisoning accounts for about 50,000 of them, about 40,000 people die in road accidents, and another 34,000 lose their lives because of falls. Unintentional injuries account for just about 6% of all deaths. There are an additional 3 million nonfatal injuries per year, most requiring medical assistance. The term “accidental” refers to situations that people can’t control, such as car crashes, falls, choking, drowning, machinery, etc. Insurance companies typically exclude things like acts of war and death from any kinds of illegal activities. Dying from any illness is also excluded from the accidents category. Dangerous hobbies, such as race car driving, bungee jumping, escalating, or any other similar activity, are excluded as well. Unintentional injuries are the leading cause of death for Americans in the 1–44 age group. The leading causes of death for accidental injury include unintentional poisoning (for instance, drug overdose), unintentional motor vehicle crashes, accidental drowning, and accidental falls. Incidental death studies indicate that road accidents are the leading global cause of accidental deaths. As many as 1.3 million people die in road crashes every year. More people now die in motor accidents than from AIDS. Every hour, 20 people die, and 5,510 are injured as a result of accidents. 
National Highway Traffic Safety Administration (NHTSA) data shows nearly 38,000 people are killed in approximately 35,000 motor vehicle crashes yearly. America’s crash death rate is more than twice the average of 19 other high-income countries. Each year, 1.35 million people lose their lives on roadways worldwide. Every day, around 3,700 people are killed in traffic accidents involving buses, cars, motorcycles, trucks, bicycles, or pedestrians. More than half of those killed are pedestrians, motorcyclists, or cyclists. On US roads, about 100 people die daily. Associations such as Mothers Against Drunk Driving (MADD) show that adults drink too much and drive about 121 million times per year and cause about 300,000 drinking and driving incidents a day. In 2020, cars killed over 42,000 people, which represented an increase from the year before, when there were 39,107 fatalities. So, there was an increase despite the fact that the number of miles traveled by car dropped by 13% from 2019. Moreover, between January and June 2021, deaths increased again from the same period in 2020, by 16%. Accidental death stats reveal that many people were injured seriously enough to require medical attention.
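The per-100,000 and per-mile rates quoted throughout this article follow from simple arithmetic. The sketch below reproduces two of them; the population and mileage denominators are assumptions based on commonly cited 2020 totals, not figures taken from this article.

```python
# Sketch: the rate arithmetic behind the statistics above.
# Assumed denominators: US population ~331.4 million (2020) and
# ~2,830 billion vehicle miles traveled (2020); both are approximations.

def rate_per_100k(deaths, population):
    """Deaths per 100,000 residents."""
    return deaths / population * 100_000

def rate_per_100m_vmt(deaths, vehicle_miles):
    """Deaths per 100 million vehicle miles traveled."""
    return deaths / vehicle_miles * 100_000_000

us_pop_2020 = 331_400_000
vmt_2020 = 2_830_000_000_000

# 200,955 unintentional-injury deaths -> roughly 61 per 100,000 residents
print(round(rate_per_100k(200_955, us_pop_2020)))      # 61

# 38,680 motor vehicle deaths -> roughly 1.37 per 100 million miles
print(round(rate_per_100m_vmt(38_680, vmt_2020), 2))   # 1.37
```

Both results match the figures quoted above, which is a quick way to sanity-check a statistic against its stated denominator.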
A new generation of supersonic passenger aircraft – aircraft that fly faster than Mach 1, the speed of sound, in level flight – is now under development. Using "low-boom" technology developed through NASA research to minimize sound signatures on the ground, advanced engines, and alternative fuel concepts, these new supersonic transports (SSTs) advertise the ability to fly over populated areas with minimal disruption, cruise more economically, and avoid some of the potential negative environmental effects of carbon-based fuels in high-altitude flight. These new aircraft have attracted interest and some investment from the U.S. military, and have on occasion been proposed for military missions by their developers. The potential roles differ with the size and capabilities of each aircraft.
In the past, cyberattackers largely ignored operational technology (OT) systems, such as industrial control systems and SCADA systems, because it was difficult to get to the proprietary information, and OT systems not connected to external networks could not be easily infiltrated. But that's no longer the case.
Today, many industrial systems are connected to company networks with access to the Internet, using everything from connected sensors to big data analytics to deliver operational improvements. This convergence and integration of OT and IT has resulted in a growing number of cyber-risks, including effective and impactful cyber incidents across both IT and OT.
Cybersecurity threats in the world of OT are different from IT, as the impact goes beyond the loss of data, reputational damage, or the erosion of customer trust. An OT cybersecurity incident can lead to loss of production, damage to equipment, and environmental release. Defending OT from cyberattacks requires a different set of tools and strategies than those used to protect IT. Let's look at how cybersecurity threats commonly find their way into OT's protected environment.
2 Main Vectors into OT
There are two main vectors through which malware can enter a secure production facility in an OT environment: through the network or through removable media and devices.
Attackers can enter an OT system by exploiting cyber assets through firewalls across routable networks. Proper OT network best practices like network segmentation, strong authentication, and multiple firewalled zones can go a long way to help prevent a cyber incident.
BlackEnergy malware, utilized in the first recorded targeted cyberattack on an electrical grid, compromised an electrical company via spear-phishing emails sent to users on the IT side of the networks. From there, the threat actor was able to pivot into the critical OT network and use the SCADA system to open breakers in substations.
This attack is reported to have resulted in more than 200,000 people losing power for six hours during the winter.
While the term "sneakernet" may be new or sound awkward, it refers to the fact that devices such as USB storage and floppy disks can be used to upload information and threats into critical OT networks and air-gapped systems simply by a cyberattacker physically carrying them into the facility and connecting them to the applicable system.
USB devices continue to pose a challenge, especially as organizations increasingly rely on these portable storage devices to transfer patches, collect logs, and more. USB is often the only interface supported for keyboards and mice, so it cannot be disabled, which leaves spare USB ports enabled. As a result, the risk exists of inserting foreign devices on the very machines we are trying to protect.
Hackers have been known to plant infected USB drives in and around the facilities they are targeting. Employees will then sometimes find these compromised drives and plug them into a system, because plugging a drive in is the only way to find out what it contains – even when it carries no enticing label like "financial results" or "headcount changes."
Stuxnet may be the most infamous example of malware being brought into an air-gapped facility by USB. This extremely specialized and sophisticated computer worm was uploaded into an air-gapped nuclear facility to alter the programming of programmable logic controllers (PLCs). The end result was that the centrifuges spun too quickly for far too long, ultimately causing physical damage to the equipment.
Now more than ever, production environments face cybersecurity threats from malicious USB devices capable of circumventing the air gap and other safeguards to disrupt operations from within. The "2021 Honeywell Industrial Cybersecurity USB Threat Report" found that 79% of threats detected from USB devices had the potential to cause disruptions in OT, including loss of view and loss of control.
The same report found that USB usage has increased 30%, while many of these USB threats (51%) tried to gain remote access into a protected air-gapped facility. Honeywell reviewed anonymized data in 2020 from its Global Analysis Research and Defense (GARD) engine, which analyzes file-based content, validates each file, and detects malware threats being transferred via USB in or out of actual OT systems.
TRITON is the first recorded use of malware designed to attack safety systems in a production facility. A safety instrumented system (SIS) is the last line of automated safety defense for industrial facilities, designed to prevent equipment failure and catastrophic incidents such as explosions or fire. Attackers first penetrated the IT network before they moved to the OT network through systems accessible to both environments. Once in the OT network, the hackers then infected the SIS engineering workstation with the TRITON malware. The end result is that TRITON could shut down an SIS and put people within a production facility at risk.
Physical Devices Can Also Lead to Cyber Incidents
It is not just content-based threats that we need to look out for. A mouse, cable, or other device can be weaponized against OT, too. In 2019, malicious actors targeted a trusted person with access to a control network. This authorized user unknowingly swapped their real mouse for a weaponized one. Once it was connected to the critical network, someone else took control of the computer from a remote location and launched ransomware. The power plant paid the ransom money; however, they did not get their files back and had to rebuild, affecting the facility for three months. It's imperative that you know where your devices come from before using them.
3 Steps to Defeat Cyber Threats
Cyber threats are constantly evolving. First, set a regular time to review your cybersecurity strategy, policies, and tools to stay on top of these threats.
Second, USB usage threats are on the rise, so it is important to evaluate the risk to your OT operations and the effectiveness of your current safeguards for USB devices, ports, and their control. Last but not least, a defense in-depth strategy is highly recommended. This strategy should layer OT cybersecurity tools and policies to give your organization the best chance to stay safe from ever-evolving cyber threats.
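One small way to picture USB device control is an allow-list on approved hardware identifiers. The sketch below is purely illustrative: the vendor/product IDs are made up, and real deployments enforce this at the operating-system or endpoint-protection layer rather than in application code.

```python
# Hypothetical sketch of a USB allow-list: only explicitly approved
# (vendor_id, product_id) pairs may be used on OT workstations.
# This shows only the default-deny decision logic; enforcement in
# practice happens at the OS/endpoint layer.

APPROVED_DEVICES = {
    (0x0951, 0x16A4),  # made-up IDs for a sanctioned, pre-scanned USB drive
    (0x046D, 0xC077),  # made-up IDs for a standard-issue mouse
}

def is_allowed(vendor_id: int, product_id: int) -> bool:
    """Default-deny: any device not on the approved list is rejected."""
    return (vendor_id, product_id) in APPROVED_DEVICES

print(is_allowed(0x0951, 0x16A4))  # True  - sanctioned drive
print(is_allowed(0x1234, 0x5678))  # False - unknown device, blocked
```

The default-deny posture is the point: a found-in-the-parking-lot drive or a weaponized mouse fails the check simply because nobody ever approved it.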
How to Reduce Risk and Secure Your Internet of Things Devices
Choosing a Smart Solution That Doesn't Leave You Vulnerable
What is the Internet of Things?
Chances are, you already own or have used an Internet of Things (IoT) device. You might know them as smart devices, internet-connected devices, or by another name, but these are all part of the Internet of Things. The term came into being long before these devices were ubiquitous in our daily lives. The Internet of Things is a network of physical objects ("things") that connect over the internet and collect and share data with other devices and systems. IoT devices provide a service to the user, but also provide a glut of information for developers. Developers state that the information collected is a tool for honing services and enhancing user experience, but this information is also worth a lot of money to them for ad targeting and consumer behavior patterns.
The number of IoT devices in the United States continues to grow exponentially. Back in 2016, Symantec reported the existence of 6.4 billion devices, and, while numbers for 2020 are still shaking out, they currently exceed 20 billion. This number isn't surprising when you consider just how many types of IoT devices we encounter daily.
They may include: - A refrigerator that takes visual stock of food and alerts the user to buy replacements or places the order on its own - A smart speaker that records audio, answers questions, and performs tasks on demand - Smart homes that monitor for fire, carbon monoxide, and break-ins, and can even control when a door can be opened - Self-driving and internet-assisted vehicles - A smart TV that connects directly to streaming services and shows advertisements or suggestions based on user patterns - A payment device that plugs into any mobile device to process credit card payments - A water bottle that sends a push notification to remind users to drink their suggested daily amount - Smart thermostats that “learn” from user input, occupancy, and seasonal adjustments - Bluetooth-enabled healthcare devices, that send data directly to monitoring applications or doctors - And so much more In the minds of many users, IoT devices fall into a separate mental category than computers, servers, or mobile phones. The latter devices are subject to rigorous cybersecurity protections that are often ignored or missed completely in their IoT counterparts. “A lot of people in their homes, a lot of organizations in their offices and other buildings are rushing in and applying these IoT devices to their network. These can include things like monitors, sensors, some of them are everyday products like your kettle. These are providing benefits to employees and a lot of times they’re saving costs, they’re saving energy, and organizations really want to make efficiencies and make savings like that. But like every product on the internet, if it’s not secured properly it can mean a way in for attackers, and unfortunately, many IoT devices are built with almost no security at all. 
If the device is discoverable on the internet, and it's connected to the rest of the network, it's an easy to use gateway for attackers."—Danny Palmer, Senior Reporter with ZDNet
What Risks and Problems Can IoT Devices Introduce to a Network?
There is no doubt that IoT devices introduce risks; however, the type and scope of risk can vary hugely between casual use and corporate integration. The scale of potential benefit vs. potential problem will be different for every situation. The important thing is to weigh this scale carefully with all of the information available.
In September 2020, a new scandal hit the IoT world. In order to illustrate just how vulnerable household IoT devices are, Martin Hron of the security company Avast reverse engineered a smart coffee maker to "turn on the burner, dispense water, spin the bean grinder, and display a ransom message, all while beeping repeatedly." Ars Technica covered this amusing and terrifying experiment.
The takeaway from this story isn't to expect ransoms from all of your connected devices. In fact, you may never see any direct effects from the most common uses hackers have for IoT devices. Visible risks, such as leaked security camera footage, strangers viewing your baby monitor, or a criminal taking remote control of your Jeep, provide a tangible scare-factor. And if you can open your smart home's garage from across the country, so can savvy criminals. But the most common and pervasive attacks involve using your IoT device as part of a much larger botnet.
A botnet consists of internet-connected devices that have been breached and are controlled by a third party through malware. Botnets accomplish cybercrime through sheer numbers, with each device adding power and another threat vector.
In 2016, the Mirai botnet was discovered. This botnet primarily targeted consumer IoT devices such as IP cameras, making it one of the first and most noticeable IoT attacks.
The botnet was used primarily for Distributed Denial of Service (DDoS) attacks—essentially overloading a targeted network with traffic and shutting it down to legitimate traffic. Targets of the Mirai botnet included computer security journalist Brian Krebs (krebsonsecurity.com) and the servers for the popular game Minecraft. Mirai successors are still active today. In addition to DDoS attacks, botnets can be used for stealing data, sending spam, and generally providing increased access to the cybercriminal. Roughly 98% of all IoT traffic is unencrypted, exposing confidential data on the network. Despite the proven risks, consumers love connectivity and ease-of-use far more than they are concerned about security vulnerabilities. Since a large number of IoT device users are unable or unwilling to add additional security for themselves, the onus lies with companies, and furthermore, with regulatory agencies to ensure standard protections. Given the freedom to choose to offer consumer protections or continue on the path of unchecked data collection and cheap security options, most companies have shown little to no interest in investing in security improvements. With the United States’ House passing IoT regulation, the experience of purchasing and using IoT devices may soon change. In the meantime, there are a variety of solutions available for consumers and IT partners alike. 
Practical Tips for Purchasing and Setting Up Internet-Connected Devices

The Planning Stage

The Purchasing Stage

The Protection Stage
- Patch devices and run updates regularly
- Avoid exposing IoT devices to unsecured internet connections
- Segment internet networks, and keep IoT devices separate from users and private data
- Consider segmenting IoT devices using VLANs
- Turn off any ancillary services not required for core functionality of IoT devices
- Consider turning off reporting and automatic sending of data if possible
- Change factory-set credentials or remove remote access capabilities completely
- Log and monitor all devices on your network
- Physically secure IoT devices against in-person tampering
- Use multi-factor authentication (MFA) to ensure that you are the only one accessing back-end controls

In 2017, Mark Anderson wrote for Clutch.co about the potential impact of IoT devices on small businesses. The benefits available as well as the risks are all still in play today.

The Proactive Stage
- Trend Micro, “The IoT Attack Surface: Threats and Security Solutions.”
- National Institute of Standards and Technology (NIST), “NIST Releases Draft Security Feature Recommendations for IoT Devices.”
- Palo Alto Networks, “2020 Unit 42 IoT Threat Report.”
- Nozomi Networks, “OT/IoT Security Report.”
<urn:uuid:0db723b9-eccf-43b3-bb81-7cd5114413a7>
CC-MAIN-2022-40
https://andersontech.com/learn/how-to-make-smart-investments-in-the-internet-of-things/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335365.63/warc/CC-MAIN-20220929194230-20220929224230-00138.warc.gz
en
0.9299
1,600
3.109375
3
Although SNMP agents provide essential information for effective network monitoring and troubleshooting, SNMP alone does not provide all the information you need to stay on top of your network. For comprehensive analysis of many issues, a network analyser with packet capture capabilities is required as well. This white paper describes how SNMP works, the advantages of SNMP monitoring, and how SNMP continues to remain a critical part of a complete network analysis solution.

SNMP (Simple Network Management Protocol) is the common language of network monitoring. It is integrated into most network infrastructure devices today, and many network management tools include the ability to pull and receive SNMP information. SNMP extends network visibility into network-attached devices by providing data collection services useful to any administrator. These devices include switches and routers as well as servers and printers. The following information is designed to give the reader a general understanding of what SNMP is, the benefits of SNMP and the proper usage of SNMP as part of a complete network monitoring and management solution.

What is SNMP?

The Simple Network Management Protocol (SNMP) is a standard application layer protocol (defined by RFC 1157) that allows a management station (the software that collects SNMP information) to poll agents running on network devices for specific pieces of information. What the agents report depends on the device. For example, if the agent is running on a server, it might report the server’s processor utilisation and memory usage. If the agent is running on a router, it could report statistics such as interface utilisation, priority queue levels, congestion notifications, environmental factors (e.g. fans are running, heat is acceptable), and interface status.

All SNMP-compliant devices include a specific text file called a Management Information Base (MIB).
A MIB is a collection of hierarchically organised information that defines what specific data can be collected from that particular device. SNMP is the protocol used to access the information on the device the MIB describes. MIB compilers convert these text-based MIB modules into a format usable by SNMP management stations. With this information, the SNMP management station queries the device using different commands to obtain device-specific information.

There are three principal commands that an SNMP management station uses to obtain information from an SNMP agent:
1. The get command collects statistics on SNMP devices.
2. The set command changes the values of variables stored within the device.
3. The trap command reports on unusual events that occur on the SNMP device.

The SNMP management console reviews and analyses the different variables maintained by that device to report on device uptime, bandwidth utilisation, and other network details.

Why use SNMP?

SNMP delivers management information in a common, non-proprietary manner, making it easy for an administrator to manage devices from different vendors using the same tools and interface. Its power is in the fact that it is a standard: one SNMP-compliant management station can communicate with agents from multiple vendors, and do so simultaneously. Illustration 1 shows a sample SNMP management station screen displaying key network statistics.

Another advantage of SNMP is in the type of data that can be acquired. For example, when using a protocol analyser to monitor network traffic from a switch’s SPAN or mirror port, physical layer errors are invisible. This is because switches do not forward error packets to either the original destination port or to the analysis port. However, the switch maintains a count of the discarded error frames, and this counter can be retrieved via an SNMP query.

Where should you use SNMP?

SNMP can be used in any environment where constant monitoring of key devices is required.
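The three principal commands described earlier can be illustrated with a toy in-memory agent. This is a sketch of the get/set/trap semantics only, not a real SNMP implementation; the OID strings and values are invented for the example:

```python
# Toy illustration of SNMP-style get/set/trap semantics.
# A real management station would speak the RFC 1157 wire protocol
# to a device agent rather than call Python methods.

class ToyAgent:
    def __init__(self, mib):
        self.mib = dict(mib)   # OID -> value, like a tiny MIB
        self.trap_log = []     # unusual events reported by the agent

    def get(self, oid):
        """'get' collects a statistic from the device."""
        return self.mib[oid]

    def set(self, oid, value):
        """'set' changes the value of a variable stored within the device."""
        self.mib[oid] = value

    def trap(self, message):
        """'trap' reports an unusual event (agent-initiated)."""
        self.trap_log.append(message)

# A fragment of MIB-like data (OID numbers shown for flavour only)
agent = ToyAgent({
    "1.3.6.1.2.1.1.5.0": "core-router",  # sysName
    "1.3.6.1.2.1.2.2.1.14.1": 0,         # ifInErrors for interface 1
})

agent.set("1.3.6.1.2.1.2.2.1.14.1", 42)   # error counter climbed
agent.trap("linkDown on interface 1")

print(agent.get("1.3.6.1.2.1.1.5.0"))
print(agent.get("1.3.6.1.2.1.2.2.1.14.1"))
print(agent.trap_log)
```

In a real deployment the management station polls such values over UDP port 161 and receives traps on port 162.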
Many SNMP management stations offer long-term reporting capabilities, allowing an administrator to watch network trends develop over time and to take appropriate action before problems can seriously affect users. Illustration 2 shows a sample report illustrating maximum, minimum and average router utilisation.

What is missing from SNMP?

While SNMP provides excellent statistics on the macro level, it does not provide the level of detail that is often required to completely resolve many network issues. For example, while SNMP may show high utilisation on the router’s Internet interface, it may not show what kinds of traffic are using up the bandwidth or who is responsible for the traffic. This leaves the administrator knowing what the problem is (high bandwidth consumption to the Internet), but not knowing the cause, and therefore lacking the ability to quickly resolve the issue. Illustration 3 shows how a network analyser’s Top Talkers view with detailed analysis capabilities can assist in in-depth problem solving scenarios. By reviewing the network’s Top Talkers (who is causing the traffic), the network administrator can isolate the cause of the excessive utilisation and take steps to resolve the issue. This deeper level of detail is not found inside an SNMP management console. However, a network analyser with SNMP management capability can offer the full view of the fundamental network issue.

SNMP – A Component of Total Network Management

Make no mistake: SNMP monitoring should be a part of any network management solution. But effective administration of enterprise networks requires more than SNMP management. Only a comprehensive network analyser can deliver both in-depth analysis along with the ability to manage and view statistics from SNMP compliant devices. When selecting a network analyser, choose a solution that provides full network coverage for multi-vendor hardware networks, including a console for SNMP devices anywhere on your LAN or WAN.
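The Top Talkers view described above is, at bottom, a per-host byte aggregation over captured traffic. A minimal sketch (the packet records are invented; a real analyser would decode live frames):

```python
from collections import Counter

# Hypothetical (source_ip, byte_count) records, as a capture tool
# might summarise them.
packets = [
    ("10.0.0.5", 1500), ("10.0.0.9", 400), ("10.0.0.5", 1500),
    ("10.0.0.7", 60),   ("10.0.0.5", 900), ("10.0.0.9", 1200),
]

bytes_by_host = Counter()
for src, size in packets:
    bytes_by_host[src] += size

# The hosts responsible for the most traffic, largest first
for host, total in bytes_by_host.most_common(3):
    print(f"{host}: {total} bytes")
```

The administrator reads the top of this list to find who is consuming the Internet link, which is exactly the detail a pure SNMP counter cannot supply.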
Also, look for a solution that includes a network mapping program that can help you visualise the network by continually monitoring and displaying device and route statuses. In addition, the network analyser should report information about services running on the primary devices. This information is important to an administrator of a single site, and invaluable to an administrator who is responsible for multiple sites. Often, the network mapping program is integrated with the SNMP management station, allowing the two systems to share information. This is accomplished by using the network mapping tool as a first step, SNMP as a high-level drill down, and finally a network analyser for deeper level statistics and information.

A comprehensive network analyser also includes a packet decoding and analysis tool. Providing the additional depth that SNMP management lacks, a network analyser allows you to look beyond simple statistics into the actual frames being transmitted across the network. While network analysers vary greatly in their feature sets, some of the primary functions you should look for in addition to packet capture and decode are some form of Expert analysis for advanced problem identification and resolution, long-term reporting capabilities, and triggered notifications. These features can provide ongoing insight into the day-to-day operations of the network, at a level beyond the scope of SNMP. Figure 1 is a checklist designed for any network administrator to review when choosing a comprehensive network management solution.

SNMP management provides valuable insight to any network administrator who requires complete visibility into the network, and it acts as a primary component of a complete management solution. However, SNMP was never intended as a comprehensive network monitoring solution. It therefore must be complemented by a complete suite of network monitoring and management tools.
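A triggered notification of the kind mentioned above reduces, at its simplest, to a threshold check over polled values. A hedged sketch, with invented sample data and an invented 80% threshold:

```python
# Sketch of a triggered notification: alert when polled interface
# utilisation (percent) crosses a threshold. Sample values invented.
THRESHOLD = 80.0

def check_samples(samples, threshold=THRESHOLD):
    """Return alert messages for every sample above the threshold."""
    alerts = []
    for minute, util in enumerate(samples):
        if util > threshold:
            alerts.append(f"minute {minute}: utilisation {util}% exceeds {threshold}%")
    return alerts

utilisation = [12.5, 47.0, 81.3, 95.2, 40.1]
for alert in check_samples(utilisation):
    print(alert)
```

Real products add hysteresis and notification channels (email, SNMP traps) on top of this basic comparison, but the trigger itself is no more than the loop shown.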
You should not have to choose whether you want to review network traffic or network devices. For complete visibility, choose a solution that provides both. When shopping for the right network analyser for your network, consider a comprehensive solution for complete coverage.
<urn:uuid:7cf77af5-120d-4795-9a65-bc7c565952ba>
CC-MAIN-2022-40
https://it-observer.com/snmp-monitoring-one-critical-component-network-management.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335365.63/warc/CC-MAIN-20220929194230-20220929224230-00138.warc.gz
en
0.895603
1,527
2.65625
3
2020 will witness entirely new ways of teaching and learning, and advances in education tech promise to accelerate both. From faster 5G internet connections to quicker integration of complex subject matter through AR (augmented reality) and AI (artificial intelligence), technology trends are taking academics to new physical – and virtual – realms. Here’s a look at how ten technologies are reimagining how we learn in the new millennium.

1. Artificial Intelligence

Once reserved for “Jetsons” episodes, AI is quickly becoming commonplace in the classroom. In fact, some experts predict educational AI will increase 47.5% by 2021. Consider it the teacher’s aid to end all other teacher aids. AI amplifies the role of the educator with occupational perks like personalized tutoring sessions and targeted practice exercises, real-time tailored lesson plans based on overall student performance, and even instant plagiarism review. Thanks to AI education technology, teachers spend less time on minutiae like grading, leaving room for a more meaningful approach to whole student teaching through social emotional learning and soft skill cultivation.

2. Augmented Reality

Hitting your “flow state.” Being “in the zone.” Creating an “optimal experience.” All are different names for the same immersive, transcendent space that happens when the human brain is fully engaged in seamless productivity. To the delight of educators everywhere, Augmented Reality (AR) is captivating young minds in the educational environment. It’s facilitating the synthesis of complex material while offering students a taste of total absorption. Forget 2D images of flatly-colored diagrams in thick textbooks. With AR, students can have a sensory experience with virtually any subject.
Be it a journey through the streets of the Roman Empire during its heyday, a jaunt through the Milky Way Galaxy, a plunge into the molecular realm of an electron, or the visualization of an involved algorithm, students can experience lessons like never before right from their cell phones.

3. Cloud Technology

Cloud technology has been making its way around the public sector for years. Schools are harnessing its power at impressive rates, with the aim of taking advantage of a variety of efficient, effective processes. According to one source, 96% of leading research institutions use Amazon Web Services (AWS), a popular cloud service among academics. There are several reasons it’s becoming an institutional go-to, including its ability to help:
- Identify at-risk students, by pinpointing those who may be falling behind based on their grades.
- Reduce expenditures, by scaling back data needs during slow times like summer break and reducing on-site data center maintenance costs.
- Secure student data, by protecting it from cyber attacks and breaches.
- Save time, by allowing teachers to access resources from the cloud and upload completed assignments for instant grading.

4. Coding

Coding may soon be a dual language most everyone speaks. In fact, many students are fluent in it by the time they enter the school system. And school staff would be wise to follow suit. Even basic coding skills allow instructors to customize applications used in the tech-savvy classroom. They can also help better understand the data and research collected from educational technologies. That way, they’re able to make immediate changes to the material and meet the student body where they are. And let’s not overlook the importance of disseminating basic coding lessons.
It’s a skill whose demand is only going to increase, continually co-opting every possible field of study in its wake.

5. 5G

5G stands for the fifth generation network, and it is set to take over completely by next year. It purports to make our internet connections and download speeds 1000% faster than their 4G counterparts. It’ll also allow more devices to connect without impacting performance. This is fantastic news for schools relying on these high-powered networks to access huge swaths of streaming content, as well as the AI and AR lessons that reliably capture the interest of video-loving Gen Z. It also has the opportunity to provide additional support to special needs learners who benefit from robotic physical assistance applications.

6. ADA Accessibility

Accessibility is a huge topic of conversation in the school environment, and this goes double for websites. According to Abilitynet.org, fewer than 10% of websites today are accessible – even though almost a quarter of the American population lives with a disability. Digitally, educational institutions have a responsibility to ensure they level the playing field for students, staff and visitors of all ability levels. This means designating an office that evaluates the systems and software used for everything from content management to payroll.

7. Appealing to Gen Z with Gamification

According to a 2018 Pearson study, 59% of Gen Z feels that technology will drastically impact the next generations of learners. When it comes to engagement, one model seems to incentivize participation more than almost any other: gamification. Before you write it off, realize that seemingly frivolous rewards like points, stickers or badges actually lead to preferred outcomes like improved overall performance and better retention rates.
It seems with a modicum of competition, students are more motivated and regard learning as fun – an element to which our brains respond.

8. Student Analytics

The data-driven classroom is an important part of today’s education system. Advances in student analytics give teachers invaluable insights about the performance of their class at a macro and micro level. The automation of assignments, and the grading of them, frees teachers’ valuable time for more productive pursuits like planning complete course modules and providing in-depth guidance in areas where an individual or the entire student body may be struggling.

9. Cyber Security

Cyber attackers are taking aim at schools. Why? They’re filled with sensitive personal data like social security numbers, addresses and birth dates. Since the cloud allows for multi-user access, the shared storage method is used to conveniently house student records in schools across the country. However, these systems must also be backed by strict, foolproof security measures to protect the data that exists and the students who supply it.

10. Personalized learning/blended learning

According to the Public Schools Review, the national average public school student:teacher ratio is approximately 16:1 for the 2019-20 school year. However, in many parts of the country that figure is easily double. Point being, it’s almost impossible for a single teacher to customize a unique set of curriculum for each of their many students throughout the year. This would mean taking into account the individual aptitudes and interests of each and every learner. However, with the help of AI applications, personalized learning and blended learning environments can engage different students with different types of content at any time – in or outside the classroom.

Education, like most other industries, is being transformed and supported by today’s ever-changing technology.
Though the adoption of some of the technologies on this list may initially be seen as disruptive, the benefits of Ed Tech are poised to significantly impact education in 2020 and beyond.
<urn:uuid:4498080f-fe87-4d0b-aee7-bdeb3b7c9f24>
CC-MAIN-2022-40
https://www.docutrend.com/blog/top-10-what-tech-is-impacting-education-in-2020/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337537.25/warc/CC-MAIN-20221005042446-20221005072446-00138.warc.gz
en
0.944732
1,537
3.203125
3
This is the last article in the Wireless series. Just to remind you, the first article introduced the reader to the Wireless world and discussed Wireless devices and protocols. The second article went deeper into Wireless networks, provided general info on WLAN and discussed IEEE standards for them. This article deals with WLAN security, explains the most common attack techniques and introduces some useful tools.

A few words on Wireless network topology. Each Wireless network has two major components: stations (STA) and access points (AP). A Wireless network operates in one of two modes: ad-hoc (peer-to-peer) or infrastructure mode. In the ad-hoc mode each client (STA) communicates directly with other clients within the network. In the infrastructure mode each client (STA) sends its communication requests to a central station, which is the Access Point (AP). The access point acts as an Ethernet bridge.

A client and an access point must establish a relationship prior to exchanging data. Once established, the client-access point relationship could be in any of the following three states:
1. Unauthenticated and unassociated
2. Authenticated and unassociated
3. Authenticated and associated

The exchange of “real” data is only possible in the third state. Until then the parties communicate using management frames. The access point transmits beacon management frames at fixed intervals. The client receives this frame and starts authentication by sending an authentication frame. After successful authentication the client sends an association frame and the access point responds with an association response frame.

Wireless Networks Security Mechanisms

The 802.11 standard for wireless networks provides several mechanisms for achieving a secure network environment. This section explains five widely used mechanisms.

# Wired Equivalent Privacy

Wired Equivalent Privacy, or WEP, was first designed by the authors of the 802.11 standard.
WEP was designed not to provide a secure network protocol similar to IPSec, but rather to provide an equivalent level of privacy. WEP aims to provide security by encrypting data over radio waves. WEP is used to prevent unauthorized access to the wireless network. WEP is disabled by default. If it is turned on, any outgoing packet is encrypted and packed. The WEP protocol relies on a secret key that is shared in a basic BSS (Basic Service Set). This key is used to encrypt data packets before they are transmitted, and an integrity check is run on them. WEP uses the RC4 algorithm, which is a stream cipher. A stream cipher expands a short key into an infinite pseudo-random key stream.

WEP Encipherment Algorithm
* The plaintext message is run through an integrity algorithm to produce an integrity check value, also known as the ICV. The 802.11 standard specifies the use of CRC-32.
* The integrity check value is appended to the end of the plaintext message.
* A 24-bit initialization vector (IV) is generated and the secret key is concatenated to it. The result is used as the seed value for the WEP pseudo-random number generator (PRNG).
* The PRNG outputs a key sequence.
* The data is encrypted by XORing it with the key sequence generated.
* The IV is appended in the clear to the protected frame (with the ciphertext) and transmitted.

The algorithm involved in deciphering can be easily guessed from the above algorithm. The IV is used to elongate the life of the secret key. WEP seeds the RC4 key stream with a 64-bit key: the 24-bit IV concatenated with the 40-bit secret key. The resulting key stream is XORed with the data/ICV combination.

# WEP 2

The IEEE proposed changes to the WEP protocol in 2001, after many flaws had been discovered in the original one. The new version, WEP2, increased the IV space from 24 bits to 128 bits and provides Kerberos V support, though problems haven’t disappeared (discussed later). Complete support for the entire WEP2 has yet to be achieved.
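The encipherment steps above can be sketched in Python. This is an educational toy (RC4 plus a CRC-32 ICV over an invented key and IV), not an interoperable WEP implementation, but it follows the same structure: concatenate IV and secret key, generate a keystream, XOR, and send the IV in the clear:

```python
import zlib

def rc4(key: bytes, length: int) -> bytes:
    """RC4 key scheduling (KSA) followed by keystream generation (PRGA)."""
    S = list(range(256))
    j = 0
    for i in range(256):                              # KSA
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    out, i, j = [], 0, 0
    for _ in range(length):                           # PRGA
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(S[(S[i] + S[j]) % 256])
    return bytes(out)

def wep_encrypt(secret_key: bytes, iv: bytes, plaintext: bytes) -> bytes:
    """24-bit IV || 40-bit secret key seeds RC4; the ICV is CRC-32."""
    icv = zlib.crc32(plaintext).to_bytes(4, "little")   # integrity check value
    data = plaintext + icv
    keystream = rc4(iv + secret_key, len(data))
    cipher = bytes(d ^ k for d, k in zip(data, keystream))
    return iv + cipher       # the IV travels in the clear with the frame

def wep_decrypt(secret_key: bytes, frame: bytes) -> bytes:
    iv, cipher = frame[:3], frame[3:]
    keystream = rc4(iv + secret_key, len(cipher))
    data = bytes(c ^ k for c, k in zip(cipher, keystream))
    plaintext, icv = data[:-4], data[-4:]
    assert zlib.crc32(plaintext).to_bytes(4, "little") == icv   # integrity check
    return plaintext

key = b"\x01\x02\x03\x04\x05"     # invented 40-bit secret key
frame = wep_encrypt(key, b"\xaa\xbb\xcc", b"hello WLAN")
print(wep_decrypt(key, frame))    # b'hello WLAN'
```

Note that the IV rides unencrypted at the front of each frame, and that the 24-bit IV space is tiny; both properties are exploited by the attacks discussed later in the article.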
# Open System Authentication

Each Wireless network has two authentication systems. Open system authentication is the first one and the default authentication protocol for 802.11. The name implies that this system authenticates anyone who requests authentication (like a root account with a null password). WEP does not help here, since experiments have shown that the authentication management frames are sent in the clear even if WEP is enabled.

# Access Control List

This security feature is not defined in the 802.11 standard, but it is used by vendors to provide better security in addition to the standard security methods. The Access Control List is based on the client’s wireless Ethernet MAC address (unique for each NIC). The access point can limit the clients using the network by using the ACL. If a client’s MAC address is listed, then he is permitted access to the network; if not, then access to the network is denied.

# Closed Network Access Control

This feature allows an administrator to use either an open network or a closed network. Open network means that anyone is permitted to join the network, while in a closed network, only clients that know the network name, or SSID, can join. The network name acts as a shared key.

Wireless Networks Attacks

Most of you will probably find this section more interesting, since it explains common attack techniques that are used to compromise wireless networks, steal bandwidth, or just have fun. If you have a wireless network nearby or you live in a place where wireless technology is widely used, any of the attack techniques described below will have 98% success. Attackers target Wireless networks since about 95% of all networks are completely unprotected. The current standard (802.11b) grants bandwidth of up to 11 Mbps. If the attacked wireless network uses default settings, there will be no cap set on bandwidth, which means the attacker can have complete access to the capacity.
You can find a very convincing example at Neworder – http://neworder.box.sk/newsread.php?newsid=3899.

# Access Point Spoofing & MAC Sniffing

Access control lists provide a reasonable level of security when a strong form of identity is used. Unfortunately, this is not the case with MAC addresses. MAC addresses are easily sniffed by an attacker since they must appear in the clear even when WEP is enabled. Furthermore, wireless cards permit the changing of their MAC address via software. An attacker can use those “advantages” in order to masquerade as a valid MAC address by programming the wireless card, get into the wireless network and use the wireless pipes. Spoofing a MAC address is very easy. Using packet-capturing software, an attacker can determine a valid MAC address from a single packet. If the wireless card firmware allows changing the MAC address, then he is done. Any attacker with wireless equipment within range of a wireless network is able to perform this spoofing attack.

To perform a rogue access point attack, an attacker must set up an access point (rogue) near the target wireless network or in a place where a victim may believe that wireless Internet is available. If the rogue’s signal is stronger than the signal of the real access point, then the victim’s computer will connect to the attacker’s access point. Once the victim has established a connection, the attacker can steal his password and network access, compromise his computer, etc. This attack is used mainly for password acquisition.

# WEP Attacks

In this attack, the attacker knows the plaintext message and has a copy of the ciphertext. The missing piece is the key. In order to get the key, an attacker would send a target system a small piece of data, and then capture the encrypted data that is sent on to the destination. Once the attacker has captured the data, he has the IV. Now, he can simply run a dictionary attack to find the key.
Another plaintext attack reveals the key stream using a simple XOR. If an attacker has the ciphertext and the plaintext, he can XOR them together and recover the key stream. The attacker can then use the key stream with the right IV to inject packets into the wireless network without authenticating to the access point.

Cipher Stream Reuse

This problem allows an attacker to recover the key stream from a WEP packet (encrypted packet). The WEP encipherment algorithm allocates only a small space to the initialization vector; using this flaw, an attacker can collect key streams by capturing packets with various IVs and XORing each ciphertext with its known plaintext (note that the attacker must have both ciphertext and plaintext). Later, when authentication data travels through the network, the attacker is able to intercept it, and using the key stream(s) the attacker can recover it to plaintext.

This research project was released a year ago (August 2001) by Scott Fluhrer of Cisco Systems and Itsik Mantin and Adi Shamir of the Computer Science Department of the Weizmann Institute in Israel. The project deals with weaknesses in the Key Scheduling Algorithm (KSA) of RC4. The group discovered two weaknesses in the KSA. The attack technique, which is described in their research paper, cracks keys of both WEP (24-bit IV) and WEP2 (128-bit IV). Adam Stubblefield of Rice University and John Ioannidis and Aviel Rubin of AT&T Labs later verified this attack technique. Once the attack was verified, two new tools became available to the public (AirSnort and WEPCrack). Their source code, though, was never released to the public.

# Man-in-the-middle attacks

Most of the attacks in this category are based on ARP poisoning, or cache poisoning. Basically, ARP spoofing is a method of exploiting the interaction of the IP and Ethernet protocols. Since this article is not about ARP protocols or ARP attacks, I’ll describe the attacks and their purpose only briefly.
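The keystream-reuse weakness described above fits in a few lines of Python. The keystream and messages here are invented for the example; in a real attack the keystream would come from RC4 under a repeated IV:

```python
import secrets

# Stand-in for an RC4 keystream produced under one particular IV
keystream = secrets.token_bytes(32)

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

# Two frames encrypted under the SAME IV, hence the same keystream
known_plain = b"known probe text"                 # attacker knows this one
c1 = xor(known_plain, keystream)
c2 = xor(b"secret password!", keystream)          # attacker wants this one

# Recover the keystream from the known pair, then decrypt the other frame
recovered = xor(c1, known_plain)
print(xor(c2, recovered))    # b'secret password!'
```

This is why the tiny 24-bit IV space matters: on a busy network, IVs repeat quickly, and every repeat hands the attacker another frame encrypted under a keystream he may already hold.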
The attacker combines an access point with a virtual private network server of the same type as the one on the target network. When a user tries to connect to the real server, the spoofed server sends a reply back, leading the user to connect to the fake server. This type of attack is too complex to explain in this article, though you can find good articles about ARP poisoning at www.ebcvg.com.

# Low-Hanging Fruit

A wireless attacker would probably start with this attack, since most wireless networks are completely unprotected (they use open system authentication) and, moreover, WEP is not enabled by default. All an attacker needs to attack such a system is a wireless card and a scanner (see the end of this article). The attacker scans for open access points that allow anyone to connect, and then connects to one. Attackers use this for free Internet access, for launching a blind attack (attacking a third party), etc.

Securing Wireless Networks

Wireless networking has become very popular among companies, because it allows employees to access the wired network (WLAN) from any place, thus granting roaming ability. New technologies usually fail to provide a decent security level – Wireless is not an exception. This section describes the most common ways to improve the security of a Wireless network.

# MAC Address Filtering

This method uses a list of MAC addresses of client wireless network interface cards that are allowed to associate with the access point. If there are several access points, the list should be available on all access points which the client can be associated with. Administrators should take care that the list is kept up-to-date. Though this method is vulnerable (see above), it is widely used to secure wireless networks.

# WEP

As was stated above, WEP provides a certain level of data encryption for communication between clients and access points. Still, WEP should be enabled, because there is no need to make it easier for an attacker to compromise the network.
Once again, this method is vulnerable to different kinds of attacks.

# SSID (Network ID)

The first attempt to secure wireless networks was the use of the Network ID (SSID). When a wireless client wants to associate with an access point, the SSID is transmitted during the process. The SSID is a seven-digit alphanumeric ID that is hard coded into the access point and the client device. Using the SSID, only those clients who hold the correct Network ID are allowed to associate with the access point. With WEP enabled, the SSID is transmitted in encrypted form, but if an attacker has physical access to the device, he/she can determine the SSID, since it is stored in clear text. Once the SSID is compromised by an attacker, the Wireless network administrator must assign a new SSID manually.

# Firewall

Using a firewall to secure a wireless network is probably the only security feature that will prevent unauthorized access. As mentioned below, access to the network should be done via IPSec, secure shell or VPN. Thus, the firewall should be configured to allow only IPSec or secure shell traffic.

An illustration of access to a wired network from a wireless network:
1. The wireless client authenticates and associates with the wireless access point. For better security, the access point should be configured to filter MAC addresses.
2. The access point sends a request to the DHCP server. The server assigns the client a network address.
3. Once the network address is assigned, the wireless client is on the wireless network. In order to access the wired network, it either sets up an IPSec VPN tunnel or uses secure shell. It is important to configure the firewall to accept only secure connections; all other connections should be denied.

# Access Points

Access points can accommodate MAC filtering and should be configured to do so, though MAC addresses can be spoofed (see above).
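The default-deny policy described above (permit only secure shell and IPSec traffic, drop everything else) can be sketched as a toy first-match rule filter. The port numbers are the standard ones (22 for SSH, 500 for IKE, which sets up IPSec tunnels); the rest is illustrative, not a real firewall configuration:

```python
# Toy first-match packet filter implementing a default-deny policy:
# allow only SSH and IPSec key exchange, deny everything else.
ALLOW_RULES = [
    ("tcp", 22),    # secure shell
    ("udp", 500),   # IKE, used to establish IPSec VPN tunnels
]

def filter_packet(proto: str, dst_port: int) -> str:
    for rule_proto, rule_port in ALLOW_RULES:
        if (proto, dst_port) == (rule_proto, rule_port):
            return "ACCEPT"
    return "DENY"   # all other connections are denied by default

print(filter_packet("tcp", 22))    # ACCEPT (SSH)
print(filter_packet("udp", 500))   # ACCEPT (IPSec/IKE)
print(filter_packet("tcp", 80))    # DENY   (plain HTTP)
```

A production IPSec policy would also need to pass the ESP protocol itself (IP protocol 50), but the essential idea is the same: a short allow list and an implicit deny for everything unlisted.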
Administrators should take care that the AP is located in a secured place, so unauthorized physical access will be permitted. Configuring AP is done via telnet, web browser or SNMP. It is recommended to allow configuration only via telnet session, and block all other ways. # Design Considerations Before taking any implementation to secure a wireless network, it is important to properly design the network. A properly designed network can eliminate some risks associated with wireless networks. Some tips for proper design: 1. Protect wireless networks with VPN or access control list. 2. Access points should not be connected to the internal wired network even if WEP is enabled. 3. Access points should be never placed behind a firewall. 4. Wireless clients are allowed to access the network though a secure shell, IP-Sec or virtual private network. These provide user authorization, authentication and encryption – secure your network.
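The MAC address filtering described above amounts to a simple allow-list lookup in the access point. A minimal sketch of the idea in Python (the addresses and normalization rules here are illustrative, not taken from any real AP firmware):

```python
# Illustrative sketch of MAC-address filtering as described above.
# The allow-list entries are made up for the example.

ALLOWED_MACS = {
    "00:1a:2b:3c:4d:5e",
    "00:1a:2b:3c:4d:5f",
}

def normalize_mac(mac: str) -> str:
    """Normalize a MAC address to lowercase, colon-separated form."""
    digits = mac.lower().replace("-", ":").replace(".", "")
    if ":" not in digits:  # e.g. "001a2b3c4d5e"
        digits = ":".join(digits[i:i + 2] for i in range(0, 12, 2))
    return digits

def may_associate(client_mac: str) -> bool:
    """Return True if the client MAC is on the allow-list."""
    return normalize_mac(client_mac) in ALLOWED_MACS
```

As the article stresses, this check is weak on its own: an attacker who sniffs a legitimate client's MAC address can simply spoof it and pass the filter.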
<urn:uuid:c22171b8-9f03-4eba-8e45-18717afc0372>
CC-MAIN-2022-40
https://it-observer.com/wireless-security-hacking.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337836.93/warc/CC-MAIN-20221006124156-20221006154156-00138.warc.gz
en
0.911396
3,096
3.890625
4
Intrusion Detection Systems [IDS] and Intrusion Prevention Systems [IPS] are two of the most important tools in any cybersecurity strategy. However, they aren't always used properly or fully understood by companies. It's important to understand the differences between these two tools so you can make the right decisions for your company.

To understand the differences, it first helps to know what each one does. An Intrusion Detection System [IDS] is a network security system that monitors the traffic flowing into or out of a system and alerts administrators to any unusual activity. An Intrusion Prevention System [IPS] is, in effect, a specialised IDS that not only detects attacks but also attempts to block or mitigate them before they cause damage.

The IDS analyses network traffic and compares it to a database of known malicious activity. When the IDS finds something that matches its database, it sends alerts to security personnel, who can then take steps to contain or stop the attack. The IPS works similarly, but instead of just sending an alert that there may be an intruder, it actually blocks the intruder by dropping any traffic matching its signature database.

The key difference, then, is that an IDS is a passive detection system, while an IPS is an active prevention system. An IDS analyses network traffic to identify suspicious activities such as port scanning, denial-of-service attacks, or worm propagation. It monitors the traffic flow from one point in the network to another by inspecting each packet on your network, and it can detect unauthorised activity within its own network boundaries using attack signatures or alert rules. An IPS, by contrast, sits inline, much like a firewall, between hosts on your internal network and outside networks such as Internet Service Providers [ISPs]. When it detects suspicious activity on your internal hosts, it automatically blocks it before it can affect other systems or networks connected to yours.

You've seen how IDS and IPS differ from a technical standpoint. But what does this mean for your organisation? If you're concerned about the security of your network, the key takeaway is this: the IDS is passive and only detects intrusions as or after they occur, while the IPS actively blocks them before they do damage. If you want a system that simply detects and reports intrusions, install an IDS. If you want to prevent intrusions, install an IPS instead of, or better, along with your IDS.

In general, IDS and IPS both play important roles in any company's cybersecurity strategy, and they are complementary: you can use them together to improve your overall security posture. The best way for companies to protect themselves against intrusion attacks? Make sure you have both kinds of security system in place!
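The behavioural difference between the two systems boils down to what happens on a signature match: a passive IDS raises an alert and lets the packet through, while an inline IPS drops it. A toy illustration in Python (the signatures and packet format are invented for the example; real systems such as Snort or Suricata use far richer rule languages):

```python
# Toy model of the IDS/IPS distinction described above.
# Signatures and packets are invented for illustration only.

SIGNATURES = [b"/etc/passwd", b"' OR 1=1", b"\x90\x90\x90\x90"]

def matches_signature(payload: bytes) -> bool:
    """Naive substring match against a list of known-bad patterns."""
    return any(sig in payload for sig in SIGNATURES)

def ids_inspect(payload: bytes, alerts: list) -> bytes:
    """Passive IDS: always forwards the packet, but records an alert."""
    if matches_signature(payload):
        alerts.append(payload)
    return payload  # traffic is never blocked

def ips_inspect(payload: bytes, alerts: list):
    """Inline IPS: drops the packet (returns None) on a match."""
    if matches_signature(payload):
        alerts.append(payload)
        return None  # blocked before it reaches the host
    return payload
```

The same detection logic drives both; only the action taken on a match differs, which is why the two are so often deployed together.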
<urn:uuid:b6fd8a82-8cb1-46ea-bf18-fcff775ca298>
CC-MAIN-2022-40
https://www.neumetric.com/intrusion-detection-system-ids-vs-intrusion-prevention-system-ips/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337836.93/warc/CC-MAIN-20221006124156-20221006154156-00138.warc.gz
en
0.938
786
3.109375
3
Spear phishing attacks are executed through electronic or email communications. A targeted phishing attack may impact an individual, a corporation, or a business, depending upon the objectives and intentions of its perpetrators. Cybercriminals may launch spear-phishing attacks for the following reasons:
- To steal sensitive data such as credit card information and account credentials for financial gain
- To install malware on a targeted device for malicious purposes
- To target an organization to secure trade secrets and confidential data, which are later resold to competitors
- To acquire military information

A staggering 95% of fraudulent activity conducted against enterprises with the objective of gaining sensitive data is carried out via spear phishing. In the last two years, email communication scams have caused companies to suffer financial losses of more than two billion dollars, according to the FBI. This illustrates the colossal scale of spear-phishing attacks taking place globally. For more information about phishing, please refer to our guide on the topic:

Further reading: Anti-Phishing Guide

How Does Spear Phishing Work?

Spear-phishing email attacks are more sophisticated than generic phishing attacks because they are customized for specific victims. Cybercriminals hunt through the Internet to find their targets and record personal information about them, such as their email addresses, hobbies, and recent purchases, by probing their social media accounts. Based on this data, they carefully draft spear-phishing emails, assuming the identity of someone the victim can trust. The messages create a sense of urgency and compel victims to share personal information such as passwords and credentials. Typical spear-phishing emails include requests to click on links that direct recipients to websites where they are asked to provide their access codes, PINs and account passwords, or to download malware.
After gathering this information through targeted phishing, criminals use the data to enter victims' bank accounts or even to create fake online identities. Perpetrators of these scams disguise themselves as a friend of the victim or a reliable entity, which makes it difficult to distinguish between legitimate and fraudulent messages without proper spear-phishing training. For more information about other phishing types and techniques, please refer to our corresponding guide:

Further reading: Methods and Types of Phishing Attacks

How to Prevent Spear Phishing Attacks?

Are you wondering how to prevent spear-phishing attacks and protect your users and their private information? Fortunately, there are a number of tried-and-tested measures that you can deploy to combat this menace and stop spear-phishing attacks.

#1 Filter Your Email and Implement Anti-Phishing Protection

Besides traditional email security solutions such as anti-spam and antivirus filters, additional anti-phishing software should be implemented (spear-phishing emails usually contain no malware and are almost never spam, which is why they often bypass traditional security mechanisms with ease). There are several useful anti-phishing techniques you can make use of, including checking for domain spoofing, checking for instances of impersonation, and flagging questionable content in the email. From an enterprise perspective, several reputable organizations such as PhishLabs, IronScales, and PhishMe are actively working to protect corporations from becoming victims of these scams.

#2 Keep Your Systems Up to Date With the Latest Security Patches

While viruses might be delivered via email, they can also spread across your network through gaps in security caused by outdated software. This is precisely why it is fundamental for individual users and organizations to update their security software regularly, building a wall against possible spear-phishing attacks.
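The "flagging questionable content" idea from technique #1 can be sketched as a crude scoring heuristic. The keywords, weights and threshold below are invented for illustration; real anti-phishing products use far richer models:

```python
# Illustrative heuristic for flagging a suspicious email, in the spirit
# of the filtering techniques above. Keywords and weights are invented.
import re

URGENCY_WORDS = {"urgent", "immediately", "verify", "suspended", "act now"}

def link_domains(body: str) -> set:
    """Extract hostnames from http(s) links in the message body."""
    return {m.lower() for m in re.findall(r"https?://([^/\s]+)", body)}

def suspicion_score(sender: str, body: str) -> int:
    """Higher score = more suspicious. Threshold is up to the caller."""
    score = 0
    lowered = body.lower()
    score += sum(1 for w in URGENCY_WORDS if w in lowered)
    sender_domain = sender.split("@")[-1].lower()
    # Links pointing somewhere other than the sender's domain are a red flag.
    score += sum(2 for d in link_domains(body) if not d.endswith(sender_domain))
    return score
```

For example, an "urgent account verification" mail whose link points to a domain unrelated to the claimed sender scores high, while an ordinary message linking to the sender's own domain scores zero.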
#3 Encrypt Any Sensitive Company Information You Have

Data encryption should be the foundation of your security strategy and is a must-have tool in your arsenal. Encrypting sensitive information essentially makes it impossible for cybercriminals to access the data, shutting down, or at least weakening, their attempts to attack the system.

#4 Conduct Multi-Factor Authentication

This data protection method only unlocks sensitive information upon completion of an authentication process with two or more steps. It applies additional security layers, locking confidential information with more than just a password.

Further reading: Multi-Factor Authentication (MFA) as a Must-Have for MSPs

#5 Use DMARC Technology

DMARC stands for Domain-based Message Authentication, Reporting & Conformance. It builds on the SPF and DKIM standards: a domain owner publishes a policy telling receiving mail servers how to handle messages that claim to come from the domain but fail authentication and alignment checks, and receivers send reports back to the domain owner. Messages that fail the checks can be quarantined or rejected, and the security admin notified.

#6 Run Frequent Backups

In the event of a successful attack, you need to get users back to work quickly by giving them access to the latest versions of uninfected files. Having a cloud-based backup solution is critical to keeping users productive during a spear-phishing attack.

#7 Conduct Email Security Training for Employees

Security awareness sessions, including spear-phishing training, are vital to equip employees with the knowledge to identify and divert incoming attacks, particularly at the enterprise level.

#8 Be Wary of Suspicious Emails

Spear-phishing emails are becoming increasingly sophisticated. If you receive an email that seems to be from someone you know but whose intent or content is suspicious, check, as a best practice, whether the person actually sent the message to you.
For more phishing prevention best practices, please refer to our corresponding guide:

Further reading: Guide on How to Prevent Phishing

Win the Battle Against Spear Phishing

The abundance of personal information and data on the Internet has become a goldmine for cybercriminals looking to dupe unknowing victims. By staying vigilant and applying tested tips to dodge spear-phishing attacks, you can protect your users from falling into this trap.
<urn:uuid:a64d443a-eda7-4f50-8585-a7c225c70bdc>
CC-MAIN-2022-40
https://www.msp360.com/resources/blog/spear-phishing-prevention/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030331677.90/warc/CC-MAIN-20220924151538-20220924181538-00339.warc.gz
en
0.936023
1,192
2.859375
3
A fundamental of backup is 3-2-1, often referred to as "the 3-2-1 rule". But what is the 3-2-1 rule? Is it still of value to all organisations, especially in an era defined by increasing use of the cloud for backup and disaster recovery?

This article looks at the 3-2-1 rule for backup, defines what it meant as originally intended and asks whether it has been superseded by recent developments. The conclusion we'll come to is that the principles it embodies are good ones, and if the rule isn't directly applicable to current scenarios, it does provide some essential guidelines about how we should protect data in the 2020s.

Defining the 3-2-1 rule

The term 3-2-1 was coined by US photographer Peter Krogh while writing a book about digital asset management in the early noughties.

3: The rule said there should be three copies of data. One is the original or production copy; then there should be two more copies, making three.

2: The two backup copies should be on different media. The theory here was that should one backup be corrupted or destroyed, another would be available. The threats could be IT-based, which dictated that data be held in a discrete system or on discrete media, or physical.

1: The final "one" referred to the rule that one of the two backup copies should be taken off-site, so that anything that affected the first copy would not (hopefully) affect it.

Shortcomings of the 3-2-1 rule

This set of rules is fairly limited if taken as originally intended. The idea of three copies is fine; it seems to fit the bill of a minimum viable number to ensure recovery in case of disaster. But the requirement that the two backup copies be on different media is full of potential limitations and pitfalls today. The idea was that the first of the two is for fairly rapid recovery, so it would be accessible from the main production system. The second, so says the rule, should be on different media.
Back when photographer Krogh coined the idea, the intention was to ensure a gap – logical, if not physical – between copies so that data corruption or tangible damage affecting one would not affect the other. That seems like a lot of trouble to go through for an organisation that might need rapid access to backups for recovery, test and dev, and analytics. Different file systems and protocols may also create more layers of complexity and expense in compliance terms, where stored data needs to be treated similarly across all retained instances.

1 becomes 2

Mostly, though, the idea of different media as a necessity looks pretty redundant in the light of the development of the cloud, which potentially collapses point three – the "1" – into point two. In other words, the ability to move data off-site to the cloud is available cheaply and with sufficient bandwidth in ways that were not really realistic when 3-2-1 was devised.

Tape still has its place, but overwhelmingly – due to slow access times – that is in archiving. It is potentially a good insurance against ransomware, too, with its in-built "air gap" to core systems. But yes, access is slow, so its use cases are limited.

So, in place of what was once tapes in the car boot/trunk, we now have the cloud. It is quite clearly off-site, so fulfils point three. And although a cloud tier connected to an organisation's datacentre is not necessarily on a different type of media or storage mode (often object), it can fulfil the same purpose as the original point two: to play host to a copy that is incorruptible or undamageable, should the first be so affected.

But there is a big can in there, to coin a phrase. Cloud storage and cloud backups that sync with on-premise systems can be affected by ransomware and other nasties. Storing backup in the cloud is a good idea, for the physical distance it places between it and on-site copies.
But to ensure a logical gap, the backup must be done right, with the correct security and access rules, immutability of data, and point-in-time restore. All of which makes the original "2" look redundant, and possibly something that is only practical for individuals and organisations operating at small scale and with low recovery time objective (RTO) requirements.

Pulling out the principles of 3-2-1

The advent of intelligent and rapid ways of making secondary copies of an organisation's data to other systems on-site, off-site and in the cloud means that much of what was literally intended by 3-2-1 backup is redundant. Instead, we can perhaps draw out the principles within 3-2-1 and make use of them in the era of cloud, ransomware, and so on.

Firstly, multiple copies are essential. Obviously, there is the production copy. This may be copied via various means (snapshots, replication, continuous data protection and/or various suppliers' disaster recovery failover products) to a discrete system that can be activated in case of serious outage at the first. This could also be in the cloud. But, in addition to any rapidly restorable failover copy, there should also be true backups. Snapshots and the like can provide quick access to files and past system states, but they are more costly to store, and therefore don't usually go back as far, and if they have been compromised, they will be useless. Backups provide copies that are retained for longer and are taken at less frequent intervals, say once a day, so there will potentially be clean copies available from further back in time.

Secondly, off-site copies are also essential. The principles of disaster recovery dictate that secondary copies that you may need to rely on should be as isolated as possible from anything that could catastrophically affect the primary site. Secondary sites in the same organisation and the cloud fulfil this need.
But the old requirement in 3-2-1 for data to be on different media is not really practical. As we’ve seen, a second site or the cloud can do what this rule was intended for, but only where security and access are up to the job. So, what are we left with from 3-2-1? The principles seem to be that: - There is a primary copy. - There should be a secondary copy, which can be snapshots or a failover system, but there should also be a true backup. - A secondary copy should be off-site (or in another cloud location?). This can be the backup, or the failover or snapshots. Primary-secondary-off? Or 1-2-&-off? Not necessarily very snappy – but the principles are there. Read more on backup - Create your data backup strategy: A comprehensive guide. This data backup guide will help you if you’re starting the planning process, looking for a refresh or seeking new options. Backup plans are critical in today’s environment. - The importance of data backup policies and what to include. It’s important to document your data backup and recovery policy. Take the first step and download our template and use this structure for developing other IT policies.
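The distilled principles above lend themselves to a simple audit check over an inventory of copies. A sketch (the inventory format is invented for the example):

```python
# Sketch of a 3-2-1-style compliance check over an inventory of copies.
# The inventory record format is made up for illustration.

def check_321(copies: list) -> dict:
    """Each copy is a dict like {"medium": "disk", "offsite": False}.
    Returns which of the three principles the inventory satisfies."""
    return {
        "three_copies": len(copies) >= 3,
        "two_media": len({c["medium"] for c in copies}) >= 2,
        "one_offsite": any(c["offsite"] for c in copies),
    }

inventory = [
    {"medium": "disk", "offsite": False},   # production copy
    {"medium": "disk", "offsite": False},   # on-site snapshot/failover
    {"medium": "cloud", "offsite": True},   # cloud backup
]
```

Here the cloud copy satisfies both the "different medium" and the "off-site" principles at once, which is exactly the collapse of "2" into "1" the article describes.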
<urn:uuid:d2be2914-0709-4a3f-9511-49ebed265b4b>
CC-MAIN-2022-40
https://www.computerweekly.com/feature/The-3-2-1-backup-rule-Has-cloud-made-it-obsolete
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334620.49/warc/CC-MAIN-20220925225000-20220926015000-00339.warc.gz
en
0.952022
1,561
2.609375
3
As technology develops, so too do concerns about the privacy of our data. Thankfully, though, there are ways we can protect it. One of these ways is through encryption. This is something many companies already use (e.g. Amazon when you're using your credit card) and it involves scrambling information so no one can decrypt it without authorization. You can also encrypt files on your computer, secure your emails, and make sure your cloud storage is safe. Whether you're new to encryption or you wish to delve deeper into the topic, below you'll find some handy resources that will tell you more about this method of protecting data.

Guides About Encryption

To learn more about encryption as a beginner or expert, these guides will provide you with all the information you need to know:

LifeHacker.com – Aimed at beginners, this guide will talk you through what encryption is, what the benefits of encrypting your files are, how to encrypt your files and how to encrypt your entire hard drive.

Ebuyer.com – A detailed resource on USB encryption, including what the advantages are, why businesses should encrypt their data, what levels of protection there are and how you should choose your level of security.

BusinessNewsDaily.com – A guide to computer encryption for small businesses, this resource explains why encryption is so important, what types are available and what built-in or third-party programs are best.

Microsoft.com – Here you can learn about cryptography from one of the world's leading software companies. In this guide, Microsoft talks you through encryption, digital certificates, secret keys and public/private keys.

ico.org – This UK-based website from the Information Commissioner's Office provides a basic overview of encryption and what it means for organizations in the UK.

Guides About Cryptology

Cryptology is the art of solving and writing codes, which is a fundamental process in encryption. If you're interested in getting to grips with the technical aspects of cryptology or you're wanting to become a cryptologist, the resources below provide you with some useful terminology, tips and advice:

LearnCryptography.com – On this website you'll find a great overview of encryption, including cryptanalysis and cryptocurrency (Bitcoin). The website also delves into mathematics, steganography and general computer security.

Comparitech.com – This helpful guide covers some useful cryptography terminology and also points you in the right direction for further resources, organizations, books, and papers. A great guide for those looking to further their cryptology learning.

OWASP.org – Offering a "Guide to Cryptology", this resource covers a variety of topics including cryptographic functions, cryptographic algorithms, algorithm selection and key storage. With sections that cover how to determine if you're vulnerable and how to protect yourself, this guide is great for making sure you're using cryptology safely.

NIST.gov – The National Institute of Standards and Technology has produced a Cryptographic Toolkit. Here you'll find a collection of guidance and standards which will help you to protect your operations, communications and data. This has been produced to help U.S. government agencies but will also benefit other organizations.

Choosing Tools for Encryption

A number of tools can help you to protect your data through encryption. Detailed below are some helpful resources that will offer you advice on what tools are available and what benefits they will provide you with.

HeimdalSecurity – This article offers nine free encryption software tools that you can start using straight away. These include ones that help you to encrypt your online traffic, files and your online accounts.

SDD.EFF.org – Within this resource you'll find some helpful advice when it comes to choosing your encryption tools. It covers everything from how transparent the software is to what to do if the software creators become compromised.

LetsEncrypt.org – This tool helps you to create an HTTPS server that gains a browser-trusted certificate automatically and without the need for human intervention. The tool is free and is brought to you by the Internet Security Research Group (ISRG).

GnuPG.org – As another free tool, the GNU Privacy Guard helps you to sign and encrypt your communication and data. It can be easily integrated with other applications and provides you with a flexible key management system.

ResettheNet.org – As the ultimate privacy pack, this resource points you in the right direction when it comes to protecting the data on your phone or your computer. It offers advice on which tools are best, covering various systems including Windows and Mac.

ICFJ.org – Produced by the International Center for Journalists, this guide suggests six encryption tools that journalists should use. These are all aimed at protecting emails, and include Peerio, Hushmail and OpenPGP. You'll also find handy links about how you can use these tools and integrate them with your system.

Comparitech – This guide offers a list of free software used to encrypt individual files and folders before uploading them to unencrypted cloud storage providers like Google Drive, Microsoft OneDrive, and Dropbox.

News and Opinion

If you're starting to get fanatical about encryption, the resources below show you where to find other like-minded individuals. Hear from some industry-leading experts and discover organizations that are devoted to improving public safety.

Cipher – This is the newsletter produced by the Technical Committee on Security & Privacy, part of the IEEE. You can also find information on the latest research events and calls for papers.

Epic.org – The Electronic Privacy Information Center publishes a wide range of articles and news-related guides in a bid to protect privacy. They cover a variety of hot-topic issues, including encryption.

Bristol Cryptography Blog – This is the University of Bristol's official cryptography blog, which allows you to follow their latest research developments and discussions on related topics. Ideal if you're a crypto student or cryptographer.

Philip Zimmermann – Zimmermann created an email encryption software package, Pretty Good Privacy (PGP). On his website, you can find out more about him as well as further readings on PGP.

Bunnie's Blog – If you want to get inside the mind of one of the world's most famous hardware hackers, now's your chance. Bunnie's blog is run by the first person to hack an Xbox and brings you a variety of interesting security-related topics.

Crypto-Gram – This free monthly e-newsletter is produced by Bruce Schneier, a leading expert on encryption. Discover the latest news as well as Schneier's thoughts and advice.

Further Resources

Below are some further resources which will come in handy if you're looking to enhance your encryption knowledge. This includes the best books on the topic and a forum where you can have all your encryption-related questions answered.

Cryptography Engineering – Produced by Tadayoshi Kohno, Bruce Schneier and Niels Ferguson, this book is an updated version of Practical Cryptography, an international bestseller.

Crypto StackExchange – If you've got any questions you want answering, this website is the perfect place to go. With a community of crypto-fanatics, this Q&A site covers all the latest hot topics.

Modern Cryptography: Theory and Practice – This book by Wenbo Mao provides you with a great introduction to cryptography and goes into much more detail than many of the online resources.

Handbook of Applied Cryptography – Available as a PDF, this book provides you with 794 pages of detailed information on encryption. It covers basic terminology and concepts, public-key cryptography, hash functions and stream ciphers, along with a plethora of other useful topics.

IACR – This non-profit scientific organization, the International Association for Cryptologic Research (IACR), has been established to further research in cryptology and related topics. On the website you'll find the latest meetings, news updates and events.
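To make the "scrambling" idea from the introduction concrete, here is a toy symmetric-encryption sketch using only Python's standard library. It is a one-time pad for illustration only: real systems should use a vetted cipher such as AES via an audited library (several are listed above), never hand-rolled code.

```python
# Toy one-time-pad illustration of symmetric encryption.
# For real data, use an audited library (e.g. AES-GCM), not this.
import secrets

def encrypt(plaintext: bytes) -> tuple:
    """Return (key, ciphertext); the random key is as long as the message."""
    key = secrets.token_bytes(len(plaintext))
    ciphertext = bytes(p ^ k for p, k in zip(plaintext, key))
    return key, ciphertext

def decrypt(key: bytes, ciphertext: bytes) -> bytes:
    """XOR with the same key recovers the plaintext."""
    return bytes(c ^ k for c, k in zip(ciphertext, key))
```

The XOR structure shows the core property of symmetric encryption: without the key, the ciphertext is indistinguishable from random bytes; with it, decryption is exact.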
<urn:uuid:cf2545c5-3b09-436a-9cdb-6abead84d8d8>
CC-MAIN-2022-40
https://www.comparitech.com/vpn/encryption-resources-tools-guides/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334992.20/warc/CC-MAIN-20220927064738-20220927094738-00339.warc.gz
en
0.912039
1,686
3.28125
3
When talking about latency, we are referring to end-to-end latency. We define end-to-end latency as the delay from when an image is captured by a camera until it is visible on a video display. It is divided into three major steps that together determine the total system latency.

1.2.1 Camera latency

Factors: the stream (resolution, image settings, audio, compression), the capture frequency (sensor), multiple streams, image processing, and so on.

1.2.2 Network latency

Factors: the network infrastructure, the amount of data, and the transmission protocol (UDP/TCP).

Network infrastructure and management

In most cases, a limited network is the largest contributor to jerky, choppy or lagging video. If the bandwidth is limited, the device will have to adapt the quality of the stream (reduce the bitrate) to match the bandwidth available on the network infrastructure. It will do so by lowering the image quality or the frame rate, which can result in a choppy video stream.

- Using the Axis Site Designer tool, you can estimate the needed bandwidth depending on the Axis camera model.
- A good network infrastructure that is well managed (QoS, enough bandwidth, well-planned network hops) will contribute greatly to a smoother video stream.
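A rough back-of-the-envelope bitrate estimate illustrates why bandwidth dominates stream quality. The formula and the compression ratio below are made-up ballpark figures for illustration, not the calculation used by Site Designer or any vendor tool:

```python
# Rough, illustrative stream-bitrate estimate. The compression factor is
# a made-up ballpark, not any vendor's actual formula.

def estimated_bitrate_mbps(width, height, fps, bits_per_pixel=12,
                           compression_ratio=100):
    """Raw sensor data rate divided by an assumed codec compression ratio."""
    raw_bps = width * height * fps * bits_per_pixel
    return raw_bps / compression_ratio / 1_000_000

# Under these assumptions, a 1080p stream at 30 fps with roughly 100:1
# compression needs on the order of 7-8 Mbit/s of sustained bandwidth.
```

If the network cannot sustain that rate, the camera must cut resolution, frame rate or quality, which is exactly the choppy-video trade-off described above.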
<urn:uuid:8854f4cc-5619-442c-a092-7690ef3e3636>
CC-MAIN-2022-40
https://www.axis.com/en-in/support/troubleshooting/streaming
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335573.50/warc/CC-MAIN-20221001070422-20221001100422-00339.warc.gz
en
0.880574
263
2.65625
3
Before 2020, outbreak analytics was on the fringes of data science. It was a niche not often called upon, but when COVID-19 began its rapid global spread, the public, policymakers, and scientists alike looked to outbreak analytics to better understand the scope of the virus.

At its core, epidemiology and outbreak analytics focus on utilizing all available data to build models and enable evidence-based decision-making. It is an interdisciplinary field whose ultimate goal is to provide answers to public health crises in real time. This is achieved through in-depth analyses of raw data to build epidemiological models, which can then inform public health officials and policymakers.

The field of outbreak analytics isn't as old as one might think. Prior to COVID-19, there had only been a few notable instances of in-depth epidemiological study of this kind (Middle East Respiratory Syndrome coronavirus (MERS-CoV), Zika, and the West African Ebola virus disease (EVD)), and none on the scale of COVID-19 studies in terms of urgency and importance in guiding global policy.

The first and most important step in data analysis is exploration, wherein epidemiologists work on visualizing data and generating summary statistics. The first graphic created is the epicurve, which shows case incidence over a given time interval. Next, epidemiologists create maps. These maps are used to visualize the "ecological niche" of infectious diseases and to strategize interventions based on disease distribution.

Many conversations among outbreak analysts surround data capture and which tools can be used to make the process easier, quicker, and more accurate. In recent years, cloud computing, mobile data collection, and automated data analysis and reporting have advanced data collection capacity. Dr. Lauren Ancel Meyers, a mathematical biologist at the University of Texas at Austin, says "a lot of new thinking, new methods" has come about due to COVID-19.
“I would venture that we’ve probably progressed as much in the last 10 months as we have in the prior six years.” Researchers predict that as technology makes real-time sequencing a standard, genetic analysis will likely emerge as an important tool in studying outbreak analytics. This line of inquiry would likely offer insight into pathogenesis, risk stratification, and response to vaccination. Understanding how different populations are affected by applying these understanding to diverse, under-studied global populations is vital to stopping outbreaks on a global scale. Outbreak analysts construct epidemiological models to understand a virus further as it develops to reduce harm in the now and prepare for a range of possible futures. However, the insights we can glean from outbreak analytics are still severely limited. The COVID-19 pandemic has made the limits of epidemiology abundantly clear. In studying COVID-19, epidemiologists have run up against several roadblocks. Some are due to inherent challenges in data collection, while others are due to limited resources. Data collection is made even more challenging in areas where resources and funding are limited. In these situations, the data is there–it just can’t be easily accessed. Surveillance and accurate reporting are vital in this process but have been severely lacking on both fronts. Epidemiologists have cited forecasting methods as a major dilemma. It has thus far been difficult to methodically gather spatial information about population flows and integrate them into existing transmission models. It is also a challenge to combine different types of data into transmission trees. Many in the field advocate for transparency and availability through freely available, open-source software to ease issues of accessibility. Other limitations have arisen due to government intervention. 
One data scientist studying outbreak analytics, Rebekah Jones, found herself in hot water when she attempted to spread information about COVID-19’s presence in her home state of Florida. Jones claims that she was unlawfully targeted by the FBI and fired from her job for refusing to manipulate the true number of detected cases in Florida. This, she claims, was to suppress her from speaking out against how Florida Governor Ron DeSantis was handling the state’s COVID-19 outbreak.

Epidemiological models are built on data collected during the initial data collection phase. Common approaches include stochastic and deterministic implementations of “compartmental” models, such as the SIR (Susceptible-Infectious-Recovered) model. Epidemiologists integrate different metrics to create variations of these basic models.

There are numerous models available, each with its own findings, and different countries and regions have been utilizing different models. The ICL model, developed at Imperial College London, and the IHME model, developed at the Institute for Health Metrics and Evaluation, are the two prominent prediction models for the U.S., U.K., and Australia. A new stochastic model has been developed in China, which aimed to “account for transmission dynamics and capture the effects of intervention measures in Mainland China.” Meanwhile, South Africa, the epicenter of the continent’s COVID-19 outbreak, has relied on a SIR model.

The ultimate goal of outbreak analysis is to provide policymakers with a real-time projection of an outbreak’s status. Data is collected, and models are built, so that policymakers can make informed decisions about how to proceed. This is the third phase of outbreak analytics: intervention. Intervention planning begins shortly after case detection and a risk assessment are completed. This is followed by surveillance during the planning stage, which will guide decisions when implemented. Models continue to be built throughout the intervention phase as new information is discovered and real-time surveillance is studied.
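The compartmental SIR model mentioned above can be illustrated with a minimal deterministic sketch. The parameter values here are invented for illustration and are not fitted to COVID-19 or any real outbreak.

```python
# A minimal deterministic SIR ("compartmental") model, integrated with
# simple Euler steps. S, I, R are population fractions, so S+I+R = 1.
# Parameters are hypothetical, not fitted to real data.

def simulate_sir(beta, gamma, s0, i0, r0, days, dt=0.1):
    """dS/dt = -beta*S*I, dI/dt = beta*S*I - gamma*I, dR/dt = gamma*I."""
    s, i, r = s0, i0, r0
    history = [(s, i, r)]
    for _ in range(int(days / dt)):
        new_infections = beta * s * i * dt
        new_recoveries = gamma * i * dt
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
        history.append((s, i, r))
    return history

# Example run: basic reproduction number R0 = beta/gamma = 2.5,
# starting with 0.1% of the population infected.
trajectory = simulate_sir(beta=0.5, gamma=0.2, s0=0.999, i0=0.001, r0=0.0, days=120)
peak_infected = max(i for _, i, _ in trajectory)
```

Even this toy version reproduces the qualitative behavior modelers care about: an epidemic curve that rises, peaks, and declines as the susceptible pool is depleted.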
What is data integration?

Data integration is the process by which data from various sources is combined into a unified view of data. Organizations around the world use such unified views of data for business intelligence, reporting, and analytical purposes. Data integration primarily consists of three key steps:

1. Accessing data from various heterogeneous sources, which may reside on-premises or in the cloud, using various access protocols.
2. Integrating the data and applying transformations such as data mapping, validation, data normalization, data quality checks, and many other steps, depending on the integration style.
3. Delivering the data to the data consumer, whether an end user or an application, through various protocols for business reporting and analysis.

Primary data integration styles

Data integration has evolved over many years, and three primary styles have emerged: data virtualization; extract, transform, and load (ETL) processes; and enterprise service buses (ESBs).

Data virtualization is the latest style in data integration. It is a real-time data integration style that creates a virtual, integrated data layer, which provides an abstraction above the physical data sources and delivers the combined information to consuming applications. ETL is a bulk-data (batch processing) style that moves physical copies of the data to a central repository for the purposes of applying transformations before the data can be consumed by target applications. ESB is a message-based, near-real-time data integration style in which enterprise applications are integrated through a bus-like architecture.

Through 2020, 35% of enterprises will implement some form of data virtualization as one enterprise production option for data integration.

When to use which data integration style?
Data virtualization is an excellent choice for a data integration solution when a combination of structured, semi-structured, and unstructured data from legacy as well as modern data sources needs to be combined and delivered to end users. A data virtualization solution is also critical when data needs to be accessed and delivered in real time, as no other data integration style can do that. Data virtualization is well suited for both analytical and operational use cases.

ETL processes work well for bulk copying large data sets, transforming them, and delivering them to the target. ETL is designed and optimized to handle datasets with millions (or even billions) of rows. ETL processes are best suited for applications that require access to the complete consolidated data set, such as historical trend analysis or data mining operations.

ESB as a data integration style is primarily beneficial when a comparatively lightweight data transfer is needed across a set of enterprise applications comprised of both legacy and modern applications in a service-oriented architecture (SOA). ESBs primarily focus on service-enabling business logic, applications, and processes for transactional use.

Data integration in the modern world

In this rapidly changing IT landscape, it is prudent for companies to choose a future-proof data integration style. ETL, as a legacy technology, has been successful in extremely high-volume data integration scenarios for analytical and operational use cases, but it is inefficient, or almost unusable, when it comes to modern, unstructured data sources or real-time data integration needs. ESBs, on the other hand, are useful for data integration when all applications are SOA-ready, when companies want to create an application-agnostic communication layer using message-based communication, and when stakeholders want to move away from point-to-point integration. 
But using ESBs for data services often reduces performance, increases initial and maintenance costs, and reduces the breadth of accessible data sources. Data virtualization strikes a fine balance through data abstraction and support for a broad range of data sources, including support for SOA architectures. It also enables the application of universal data governance and security, and offers high ROI with lower operational costs. Data virtualization promises not to completely replace ETL, ESB, or MDM systems, but to extend their functionality to help build out companies' enterprise data architecture 2.0.
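To make the ETL style concrete, here is a minimal batch pipeline sketch. The schema, field names, and sample data are invented for illustration; real ETL tools operate at far larger scale and with far richer transformation logic.

```python
# Minimal ETL sketch: extract rows from a CSV-like source, apply two of
# the transformations named above (a data-quality check and
# normalization), and load the result into a SQLite table.
import csv
import io
import sqlite3

RAW = """customer,amount,currency
alice,10.50,usd
bob,,usd
carol,7.25,EUR
"""

def extract(text):
    """Extract: read rows from the source system (here, an in-memory CSV)."""
    return list(csv.DictReader(io.StringIO(text)))

def transform(rows):
    """Transform: validate, clean, and normalize each record."""
    cleaned = []
    for row in rows:
        if not row["amount"]:                      # data-quality check: drop incomplete rows
            continue
        cleaned.append({
            "customer": row["customer"].title(),   # normalization
            "amount": float(row["amount"]),        # type validation
            "currency": row["currency"].upper(),   # normalization
        })
    return cleaned

def load(rows, conn):
    """Load: write the cleaned records into the central repository."""
    conn.execute("CREATE TABLE sales (customer TEXT, amount REAL, currency TEXT)")
    conn.executemany("INSERT INTO sales VALUES (:customer, :amount, :currency)", rows)

conn = sqlite3.connect(":memory:")
load(transform(extract(RAW)), conn)
total = conn.execute("SELECT COUNT(*), SUM(amount) FROM sales").fetchone()
```

The same three-step shape (extract, transform, load) holds whether the source is a CSV file or a fleet of enterprise databases; only the connectors and the transformation catalog grow.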
According to IBM, the average time to identify a breach in 2019 was 206 days. Network perimeter protection has certainly increased in sophistication, making it more difficult to breach the perimeter, but cyber criminals have also become more sophisticated in the methods they use to break into networks and then cover their tracks to mask their activities. Once they manage to get past the perimeter, do you have the visibility to detect them and see what the bad actors are doing? In this guide, find out how next-gen Intrusion Detection Systems (IDS) will help protect you against threats and behaviors that occur once a cyber criminal is able to breach the perimeter. You can take full control of your security from the inside out.
A Java applet is a small application that is written in the Java programming language, or another programming language that compiles to Java bytecode, and delivered to users in the form of Java bytecode. The user launched the Java applet from a web page, and the applet was then executed within a Java virtual machine (JVM) in a process separate from the web browser itself. A Java applet could appear in a frame of the web page, a new application window, Sun's AppletViewer, or a stand-alone tool for testing applets.

In the smart card world, applets are sometimes called cardlets, and each applet usually corresponds to a smart card application: a small program that can be downloaded to and executed on a Java Card. The term applet is thus used both for programs running in web browsers and for programs running on Java Cards.
Cyber attackers exploit commonly used business applications to bypass security controls, research from enterprise security firm Palo Alto Networks has revealed. Traditional exploit techniques used in innovative ways can mask dangerous threat activity, according to the firm's latest annual Application Usage and Threat Report, which analyses the link between the two.

"Today's advanced cyber threats use applications as their infiltration vector, exhibit application-like evasion tactics, and act as, or use common network applications for communications and exfiltration," the report said. This means the threat is not primarily from things like social media sites, but from core business applications, said Alex Raistrick, regional vice-president, Western Europe, at Palo Alto Networks. "Attackers are using the very applications that companies require to do business," he told Computer Weekly.

The report findings are based on analysis of traffic data collected from 5,500 network assessments and billions of threat logs over a 12-month period.

Common sharing applications such as email, social media and video remain favoured vehicles for delivering attacks. Researchers found that 19% of threats observed were code execution exploits that were delivered across common sharing applications. Although only 5% of threat activity was seen within these applications, the report said attacks delivered in this way are often the start of multi-phased attacks rather than the focus of threat activity and could be linked to 32% of all attacks. 
Researchers found that a small number of applications exhibited nearly all of the observed threat activity. Networking and utility apps accounted for 11% of all apps observed but were linked to 62% of threat activity, and business apps accounted for 8% of all apps observed but were linked to 27% of threat activity.

According to the report, 99% of all malware logs were linked to the User Datagram Protocol (UDP), an alternative to the Transmission Control Protocol (TCP), the majority of which were generated by a single threat. "These applications were found on nearly every network we analysed and it's evident they have now become a favourite vehicle through which attackers can mask their activities," the report said.

Just over a third of applications observed can use SSL encryption, but many network administrators are unaware of what applications on their networks use unpatched versions of OpenSSL, which can leave them exposed to vulnerabilities such as Heartbleed.

"Our research shows an inextricable link between commonly used enterprise applications and cyber threats," said Matt Keil, senior research analyst, Palo Alto Networks.

Most significant network breaches start with an application such as email delivering an exploit, researchers found. "Then, once on the network, attackers use other applications or services to continue their malicious activity – in essence, hiding in plain sight," said Keil. "Knowing how cyber criminals exploit applications will help enterprises make more informed decisions when it comes to protecting their organisations from attacks," he said.

The report recommends that information security teams deploy a balanced safe enablement policy for common sharing applications alongside security awareness training for users. 
"Because Palo Alto technology is designed to identify applications, not just port numbers and protocols, it enables businesses to tie them to users and enable a 360-degree view of activity," said Raistrick. "This approach also enables companies to stop unwanted applications and safely enable the ones they need by ensuring they are not masking malicious traffic," he said.

Researchers said security teams should also ensure they can control unknown traffic, which, although it averages only about 10% of bandwidth, carries a high risk. Controlling unknown UDP/TCP traffic will quickly eliminate a significant volume of malware, they said.

The report also recommends that security teams identify and selectively decrypt applications that use SSL. "This is why it is important to understand the data and see exactly what is moving through networks rather than relying merely on port number 443 to identify SSL traffic and assume that it is safe," said Raistrick. Researchers said selective decryption, in conjunction with enablement policies, can help businesses uncover and eliminate potential hiding places for cyber threats.
Cloud computing is shared pools of configurable computer system resources and higher-level services that can be rapidly provisioned with minimal management effort, often over the Internet. Cloud computing relies on sharing of resources to achieve coherence and economies of scale, similar to a public utility. Often referred to simply as "the cloud," it is the delivery of on-demand computing resources — everything from applications to data centers — over the internet on a pay-for-use basis.

Elastic resources — scale up or down quickly and easily to meet demand.

| Course | Track | Category | Duration |
| --- | --- | --- | --- |
| Architecting on AWS | Architecting on AWS | Cloud Computing | 3 Days |
| AWS Cloud Practitioner Essentials | AWS Cloud Practitioner Essentials | Cloud Computing | 1 Day |
| AWS Technical Essentials | AWS Technical Essentials | Cloud Computing | 1 Day |
| Prisma Access SASE Security: Design and Operation (EDU-318) | Prisma Access SASE Security: Design and Operation (EDU-318) | Cloud Computing | 2 Days |
| Understanding Cisco Cloud Fundamentals | Understanding Cisco Cloud Fundamentals | Cloud Computing | 5 Days |
| CompTIA Cloud+ | Cloud Essentials | Cloud Computing | 12 Days |
Editor's Note: This post was originally published December 2014 and has recently been updated and revised for accuracy and comprehensiveness.

Computer hacking forensic investigation is the process of detecting hacking attacks and properly extracting evidence to report the crime and conduct audits to prevent future attacks. In this way, computer forensics is much like other forms of forensics in that you search for evidence that can't be seen by the untrained eye, looking for evidence buried deep within computer software and hardware. Job responsibilities will often include using computer software to analyze and reconstruct computer files or financial records pertinent to an investigation. If you offer services directly related to a criminal case, you will likely be asked to prepare a report, so you need to document every action.

Ultimately, computer forensics is simply the application of computer investigation and analysis techniques in the interests of determining potential evidence. CHFI investigators can draw on an array of methods for discovering data that resides in a computer system, or recovering deleted, encrypted, or damaged file information, a process known as computer data recovery.

Computer crime is the ever-present malediction that security professionals combat daily. Computer investigation techniques are being used by police, government, and corporate entities globally, and many of them turn to EC-Council for the Computer Hacking Forensic Investigator (CHFI) certification program. The tools and techniques covered in EC-Council's CHFI program will prepare individuals to conduct computer investigations using groundbreaking digital forensics technologies. 
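As one small, concrete example of the evidence-handling discipline described above (illustrative only, and not drawn from the CHFI curriculum itself), forensic workflows routinely hash acquired data so that any later tampering is detectable and chain of custody can be demonstrated:

```python
# Sketch of a basic evidence-integrity step: hash acquired data at
# acquisition time, then re-hash and compare later. A mismatch means
# the evidence changed. The data here is a placeholder byte string.
import hashlib

def sha256_of(data: bytes, chunk_size: int = 65536) -> str:
    """Hash in chunks, since real disk images are too large for one read."""
    digest = hashlib.sha256()
    for start in range(0, len(data), chunk_size):
        digest.update(data[start:start + chunk_size])
    return digest.hexdigest()

evidence = b"disk image bytes would go here"
baseline = sha256_of(evidence)      # recorded at acquisition time

# Later verification: identical data reproduces the baseline hash,
# while even a one-byte change does not.
assert sha256_of(evidence) == baseline
assert sha256_of(evidence + b"!") != baseline
```

In practice, investigators record such hashes alongside the evidence log so that every action on the data can be documented and verified.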
Situations that call for a CHFI are numerous:

- Breach of NDA
- Hacking/data theft
- Possession of illegal pornography
- Industrial espionage
- E-mail fraud
- Corporate bankruptcy
- Web page/profile defacement

Become a Computer Hacking Forensic Investigator

The CHFI certification is used to validate the candidate's skills to identify an intruder's footprints and to properly gather the necessary evidence to prosecute in a court of law. Computer forensics graduates have been in high demand over the past few years, with Glassdoor reporting an average salary of over $96,750 as of 2019. Typical candidates include:

- Police and other law enforcement personnel
- Defense and military personnel
- Corporate IT professionals
- Systems administrators
- Legal professionals
- Banking and insurance professionals
- Government agencies
- IT managers

Training and Exam Information

The CHFI certification is awarded after successfully passing an exam. The recommended training is EC-Council Computer Hacking Forensics Investigator (CHFI). It is also strongly recommended that you attend the Certified Ethical Hacker (CEH) class before enrolling in the CHFI course.

At New Horizons, we're talking about information security every day—and not just with a variety of clients, but with leading vendors—about industry trends and real-life challenges. And because of our close partnership with these vendors, New Horizons is positioned to help businesses like yours leverage our knowledge experts to discuss strategies, implementation and troubleshooting.
The runaway success of Internet of Things (IoT) devices presents fascinating ways to implement them within both customer-facing businesses and manufacturing plants. However, there's a problem with the associated product boom: a shortage of electronic components that's led to an increase in counterfeiters creating fake parts.

Tagging and Tracing Systems Increase Visibility

Counterfeit components in IoT devices raise the chance that a product won't perform as expected, or that it's riddled with security flaws such as accidental (or even intentionally placed) backdoors. Solutions are being explored. A firm called HID Global is cutting down on component counterfeiting with a near field communication (NFC) tagging system, but a more industry-wide approach is needed. The tags get placed on the device components during production and have advanced encryption elements that are reportedly impossible to copy. Then, a user can read them with a compatible smartphone and a complementary app by going through a single-step authentication process.

Some companies are also turning to blockchain technology to stop counterfeit parts from getting into IoT devices. Approaches exist that use NFC tags to feed data onto a blockchain ledger system. Advocates for this approach think that it could cut down on the amount of electronic waste that eventually gets repurposed for counterfeit products as well.

The Gray Market Could Spur Counterfeiting Practices

The "gray market" describes the practice of trading products through legal channels but in ways unintended by the manufacturer. 
If a company opts to outsource some of its manufacturing requirements for IoT devices, one of the consequences is that surplus products could be sold on the gray market for drastically discounted prices. A counterfeiter could then acquire the gadgets, disassemble them and learn which components they should copy.

The gray market issue often arises if a contractor produces extra IoT devices and secretly sells the leftovers. However, one thing device manufacturers can do is issue a set number of licenses for their IoT products that matches how many an outsourced company is contractually obligated to produce. When an IoT device checks for a license while booting up, it could cease to function if the license isn't legitimate.

It's crucial for IoT implementers to vet their parts suppliers thoroughly to avoid gray market issues. Sometimes, there's a push to source components as quickly as possible to meet market demands. In the United Kingdom, for example, electronic component sales climbed by 17 percent in 2017. Although the drive for efficiency is understandable in a segment that's growing as fast as IoT, businesses shouldn't risk sacrificing the authenticity of their products by overlooking the suitability of supply chain members. The reputational damage that results could be severe.

SiliconExpert Technologies, a supplier of electronic component management tools, offers a data analysis platform that calculates the likelihood of a part being a copycat. It looks at market data, including shortages and price hikes, to make a determination. This proactive option could be ideal if people who work with IoT devices want to curb gray market activity and parts sold on illegal channels. Again, however, we need an industry-wide effort (not just a few companies) to solve this problem.

The IoT Boom Brings Various Implications With It

Analysts at the Pew Research Center conducted an in-depth study about how fast people have started using IoT devices. 
Some of the individuals who contributed their opinions pointed out how many of those who use IoT devices are so connected that they can't imagine not using those gadgets, especially if they're from a younger generation. Some also chimed in to say that even though IoT devices have security risks that might make individuals or businesses hesitate to use them, it's difficult for people to disconnect completely from IoT because it's already so deeply integrated into everyday life.

However, one of the primary themes of the Pew research above was that increased regulation would make IoT devices safer to use, especially if governments start to punish bad actors. Since the infrastructure for IoT devices is still so new, it could take a while before such regulations get ironed out. Once they do, counterfeit parts may be subject to increased security measures, and manufacturers may have to verify and confirm that all of their components are genuine.

Copycatting Extends Beyond IoT

Copycat parts undoubtedly pose security threats to IoT devices (and therefore IoT systems), which often have security flaws from the start even without counterfeit parts. Besides the problem of inauthentic parts making it into the supply chain, it's possible to interfere with IoT security by physically opening a device that doesn't have a tamper-proof exterior. It's then possible, for example, that sensors could report false data.

Going back to copycatting, IoT implementers need to recognize that imitation products show up around the globe. According to the International Chamber of Commerce, copycatting and piracy will take $4.2 trillion from the worldwide economy by 2022. Research indicates profit margins and financial stakes are two of the biggest factors that predict whether a product will have copycat competitors. Besides, consumers that are particularly budget-conscious often prefer the lower prices of copycat items. 
They may decide that it isn't worth waiting for the brand-name product or shelling out the money for it. Researching the costs of the authentic products and setting an IoT budget accordingly should help implementers avoid that situation.

Copycat Parts Won't Go Away, but It's Becoming Easier to Spot Them

It's crucial for IoT implementers to remember that they can't assume the devices they want to buy are free from counterfeit parts. Manufacturers may be completely unaware that their devices even contain those fake IoT device components. Moreover, as people become increasingly fascinated by what IoT systems can do, they may not be as careful as they should be about avoiding IoT knockoffs, thereby increasing the demand for fakes.

Fortunately, for the individuals and companies that want to make sure they acquire and distribute products that are free from counterfeit parts, there are methods in place and other techniques in the works. Nonetheless, the industry needs to take the issue of fake IoT device components more seriously. We need more transparency, accountability, and better authentication protocols across the entire IoT development stack.
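The boot-time license check described earlier could be sketched roughly as follows. The names, key handling, and protocol here are hypothetical; a production design would use asymmetric signatures (so devices never hold the signing secret) and hardware-backed key storage.

```python
# Hypothetical sketch of a boot-time license check: the vendor signs
# (device_id, license_id) with a secret key, and the firmware verifies
# the tag before continuing to boot. Illustration only -- not a real
# vendor protocol.
import hashlib
import hmac

VENDOR_KEY = b"vendor-secret-key"   # placeholder secret for the sketch

def issue_license(device_id: str, license_id: str) -> str:
    """Vendor side: produce a tag binding this license to this device."""
    message = f"{device_id}:{license_id}".encode()
    return hmac.new(VENDOR_KEY, message, hashlib.sha256).hexdigest()

def boot_check(device_id: str, license_id: str, tag: str) -> bool:
    """Device side: refuse to boot unless the tag verifies."""
    expected = issue_license(device_id, license_id)
    # compare_digest avoids leaking information through timing
    return hmac.compare_digest(expected, tag)

tag = issue_license("device-042", "LIC-1001")
assert boot_check("device-042", "LIC-1001", tag)        # genuine unit boots
assert not boot_check("device-999", "LIC-1001", tag)    # copied license fails
```

Because the tag is bound to the device ID, a gray-market unit cloned with another device's license fails the check, which is exactly the containment property the article describes.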
It’s not enough for a network just to get packets from here to there. Networks, in some sense, bear some responsibility for the epidemic of cybercrime: networks have enabled instant global connectivity, but they have also allowed instant global threats to come into your home or workplace. Networks need to be designed so that they enhance overall security, rather than contribute to the problem.

It is simply a fact of life that operating systems and applications will always have flaws that can be exploited by malicious actors. Software is getting more complex, straining the abilities of us humans to comprehend it. The rate of new software vulnerabilities discovered each month is not going down, it is increasing. If you’re hoping for the day when operating systems and applications are bug-free, forget it. We live in a world of vulnerable software and increasing connectivity, and your systems will always be exposed to potential attackers.

The fact is, your networks will get attacked, and your users will be compromised. So we, as network engineers, need to do more. I don’t mean add more firewalls or IDSs or other “security” devices, although they’re certainly helpful. We need to design networks that actively defend against network attacks.

It’s been said many times before, but this design principle bears repeating: networks need to be designed with security in mind, and not as just an afterthought. Most networks are designed for performance, redundancy and administrative ease. Security is one of the last qualities to be considered, and it is almost always applied piecemeal – without a consistent plan.

You have all heard of and probably have even used the phrase “hard and crunchy on the outside, soft and chewy on the inside,” to describe the security posture of many networks. 
The candy metaphor may be a bit overused, but it is still accurate because it describes how network security is often applied: to the edges of the network only, leaving the internal network components subject to attack (soft and chewy). Remember, if an attacker is able to compromise one workstation on the internal network, he essentially becomes an “internal” hacker, and all your edge defenses are for naught. To build an effective secure network, security has to be included in all points of the network, by enforcing security policy throughout the network, not just at the edges.

How does a secure network defend against attacks? Richard Bejtlich has some useful blogs on this subject. Principally, in two ways:

- By limiting what can be attacked, improving the odds of detecting attacks, and facilitating the containment and eradication of compromises.
- By providing information that can indicate that an attack took place (or is taking place), and by providing evidence of network activities and events (or, just as importantly, their absence). In other words, by providing useful forensic information so that attacks can be detected.

I should emphasize that a secure network will not stop all attacks from succeeding. Given enough time and motivation, an attacker will eventually compromise your systems. However, a secure network design assumes that attacks will take place — even that some will (initially) succeed — and plans accordingly.

The first step in building a secure network is what I call “compartmentalization.” It refers to the idea of logically separating the network into sections, or “compartments,” where access policy can be enforced and attempted violations of that policy can be detected. A good analogy for this is a fire door. You probably have fire doors spread throughout your place of work. Consider what a fire door does. A fire door does not prevent fires. But when a fire occurs, a fire door slows the fire’s spread. 
It gives you time to detect the fire, and it gives you time to respond. Similarly, a compartmentalized network does not prevent attacks, but it slows down attackers, making it harder to compromise more systems, and also makes it easier to detect the attacks taking place.

The first step in compartmentalizing a network is understanding the components and the subsystems, how they interact with other devices on the network and what kinds of traffic they use (and should not use). Here are some examples of components:

- General user workstations.
- General purpose servers (file/print, database, web, etc.).
- Servers with restricted access, such as financial systems, HR systems, etc.
- IP telephony servers.
- IP-based security systems (cameras, badge readers, recording equipment, motion sensors, etc.).
- Wireless guest networks.

This is not an exhaustive list – you may have additional categories. The point is to understand the different types of devices and their differing security needs – each of the categories above will have different access policies.

The next step is to separate the devices in each of these categories from devices in other categories. By “separate,” I mean create layer 3 boundaries between, say, printers and workstations. The easiest way to do this is to create VLANs for each category. Place your workstations in one VLAN, printers in another, IP phones in a third, and so on. In a large network you will have many VLANs for each category. As an example, each floor of a building may have a VLAN for user workstations, a VLAN for IP telephones, a VLAN for printers, etc.

This strategy greatly increases the number of VLANs and therefore, the number of IP subnets in your network. So, we should talk briefly about your IP addressing plan because it also plays an important part in your network security. A well thought out addressing plan can contribute to security by facilitating the creation of ACLs to enforce security policies. 
It does this by grouping categories of devices into summarizable address blocks. For example, if you allocate a contiguous block of subnets to workstations (say, 172.24.0.0/24 through 172.24.15.0/24), you can summarize all of those subnets as 172.24.0.0/20, and use that summary to make creating and applying ACLs much easier.

Without summarization, it becomes difficult to create ACLs to control policy. They become long and difficult to manage. They may even affect network performance. Long ACLs may not be able to be processed in switching hardware and may instead require use of the CPU (often called process switching), which drastically reduces network performance.

So, for each of your device categories, allocate a block of addresses that you can summarize. In the example above, we’ve allocated 172.24.0.0/20 to workstations. You might allocate 172.24.16.0/20 to IP phones, 172.24.32.0/20 to printers, etc. Each subnet within the /20 block can be used in a different closet or floor, yet you can refer to them all in an ACL by using the summary address.

In my next post, I will show how to develop an effective access policy for different groups of devices and how that contributes to a more secure network.
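The summarization arithmetic is easy to sanity-check with Python’s standard ipaddress module. This sketch assumes the workstation block is the sixteen /24 subnets from 172.24.0.0/24 through 172.24.15.0/24, matching the /20 summary discussed above:

```python
import ipaddress

# Sixteen contiguous /24 workstation subnets: 172.24.0.0/24 ... 172.24.15.0/24
subnets = [ipaddress.ip_network(f"172.24.{i}.0/24") for i in range(16)]

# collapse_addresses merges contiguous blocks into the smallest set of summaries
summary = list(ipaddress.collapse_addresses(subnets))
print(summary)  # [IPv4Network('172.24.0.0/20')]

# A single ACL entry written against the summary covers every workstation subnet
assert all(net.subnet_of(summary[0]) for net in subnets)
```

One /20 entry in an ACL therefore stands in for sixteen /24 entries, which is exactly why a summarizable addressing plan keeps ACLs short.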
Facebook’s AI research department has announced Habitat 2.0, a next-generation simulation platform that lets AI researchers teach machines to navigate through photo-realistic 3D virtual environments and interact with objects just as they would in an actual kitchen, dining room, or other commonly used space.

The open-source Habitat simulator was first launched in 2019, giving AI researchers a better way to teach industrial robots how to interact safely and efficiently within the environment in which they’re designed to operate. Facebook’s AI team said at the time that it built the simulator because it’s far easier and more efficient than creating a physical environment in the real world to train robots.

Habitat 2.0 builds on the original open-source release of AI Habitat with even faster speeds as well as interactivity, so AI agents can easily perform the equivalent of many years of real-world actions, such as picking items up, opening and closing drawers and doors, and much more. “We believe Habitat 2.0 is the fastest publicly available simulator of its kind available to AI researchers,” researchers wrote in a blog.

Habitat 2.0 also includes a new fully interactive 3D data set of indoor spaces and new benchmarks for training virtual robots in these complex physics-enabled scenarios. With this new data set and platform, AI researchers can go beyond just building virtual agents in static 3D environments and move closer to creating robots that can easily and reliably perform useful tasks like stocking the fridge, loading the dishwasher, or fetching objects on command and returning them to their usual place. Alongside Habitat 2.0, Facebook is releasing a dataset of 3D indoor scans co-created with Matterport.
The Habitat-Matterport 3D Research Dataset (HM3D) is a collection of 1,000 Habitat-compatible scans made up of “accurately scaled” residential spaces such as apartments, multifamily housing, and single-family homes, as well as commercial spaces including office buildings and retail stores.

“Until now, this rich spatial data has been glaringly absent in the field, so HM3D has the potential to change the landscape of embodied AI and 3D computer vision,” said Dhruv Batra, Research Scientist at Facebook AI Research. “Our hope is that the 3D dataset brings researchers closer to building intelligent machines, to do for embodied AI what pioneers before us did for 2D computer vision and other areas of AI.”

“We are excited to collaborate with Facebook as we provide the academic and research communities access to this unique spatial dataset that is sure to impact how we work and live,” said Conway Chen, Vice President of Business Development and Alliances at Matterport. “With more than five million spaces captured with the Matterport platform, we are the only company that can offer a diverse library of high-resolution, data-rich digital twins of various styles, sizes, and complexities from across the world. HM3D can also be used more broadly by academia, and we can’t wait to see what innovations emerge.”

In the future, Habitat will seek to model living spaces in more places around the world, enabling more varied training that takes into account cultural- and region-specific layouts of furniture, types of furniture, and types of objects. “Our experiments suggest that complex, multi-step tasks such as setting the table or taking out the trash are significantly challenging.
Although we were able to train individual skills (pick, place, navigate, open drawer, etc.) with large-scale model-free reinforcement learning to reasonable degrees of success, training a single agent that is able to accomplish all such skills and chain them without cascading errors remains an open challenge. We believe that HAB presents a research agenda for interactive embodied AI for years to come.”
(By Geist, a division of Vertiv and provider of intelligent power and management solutions for data centers.)

Data center airflow management operates on a simple premise: IT equipment should only ever take in cool air, and CRAC return plenums should only ever take in warm air. Under no circumstances should cold air and return air mix. Yet many data centers struggle to maintain this dynamic, and at a high cost to their operations. Data center cooling is infamously expensive, accounting for 40% of annual data center spend by some estimates. Inefficient airflow exacerbates this problem by causing hot spots that are all too often addressed by adding cooling capacity. We’re here to say, “No more!” Airflow management doesn’t have to be complicated if you adopt containment methods that are uniquely suited to expelling exhaust and keeping the cool aisle cool.

What Is Data Center Airflow Management?

Data center airflow management controls temperatures in and around IT gear to maintain and increase efficiency. Poor airflow can prevent cool air from reaching overheated components or cause warm air to remain trapped in one area. Without proper air circulation, your IT components and heating, ventilation and air conditioning (HVAC) systems will work harder than necessary to maintain ideal conditions. This problem can cause costs to skyrocket and stunt productivity. Data center airflow management addresses common problems by implementing solutions that control room temperatures, reduce fan speeds and create ideal air circulation channels. The primary areas of focus for data center airflow management are the floor, racks, rows and the room itself:

- Floors: Floors present unique challenges for controlling a data center’s temperature. Standard floors often inhibit airflow, while raised floors can leak air in unwanted locations.
- Racks: The racks holding your servers and other equipment need room to circulate air, but allowing too much space can create pockets that trap air before it can escape.
- Rows: Situating multiple cabinets into rows is ideal for maximizing spatial efficiency, but for optimum temperature control, it’s necessary to separate aisles by hot and cold air.
- Room: Actively controlling the temperature in a data center can maximize the effects of airflow management techniques.

Common Airflow Solutions

Managing data center airflow involves regulating the circulation of both cold and warm air. Airflow management’s primary goal is to contain cold and warm air to desired locations while providing channels for cool air to reach overheating equipment and for warm air to disperse. Floors, cabinets and temperature-separated aisles are all central battlegrounds for both cold and hot air regulation. Here are a few standard tools used to solve airflow problems:

- Brush grommets: Circulating cool air over hot equipment is effective, but a lot of air can escape through the crevices that allow cables into the room. Brush grommets enable cables to pass through while sealing cold air where it needs to be.
- Curtains: Sealing off hot and cold cabinet aisles will maximize the effects of any airflow regulation technique. Plastic curtains, blankets or other heavy drapery items are easy to install and produce noticeable results.
- Rack chimneys: Hot air rises, and without an avenue of escape, it will sit on top of IT equipment and complicate cooling efforts. Chimney structures provide an escape route for rising warm air, funneling it into HVAC ducts and out of the building.
- End of row doors and aisle ceilings: Structures like doors and ceilings enclose hot and cold aisles to contain air in one space, allowing cool air to efficiently ease overheated equipment and hot air to circulate into HVAC ducts in a controlled manner.
- Filler panels: When empty spaces along an aisle of cabinets go unfilled, the gaps cause inner-rack recirculation. This process allows temperatures to fluctuate, reducing the effects of containment and circulation efforts. Filler or blanking panels prevent hot and cold air from transferring to unused spaces, making it easier to manage airflow.

Help your return air rise

When cool air passes through the IT load, a heat exchange occurs. Cool air becomes hot air, which is expelled into the hot aisle behind cabinets. Here, it will ostensibly rise into return plenums to be treated once again by CRACs. Then, it can be cycled back into the cold aisle through perforated tiles and drawn in by server fans, and so it goes. However, there are two problems with this theory:

- It only works if an adequate supply of cool air passes through the IT load, as opposed to passing around it or never reaching it at all.
- It operates under the assumption that warm air in the hot aisle will automatically rise to the return plenum. That only works if zero pressure is maintained as treated air passes through the IT load, which isn’t possible if hot air seeps into the cold aisle because it isn’t properly contained, or because there’s a shortage of cool air being taken in by server fans.

Often, traditional hot-aisle/cold-aisle setups will experience hot spots resulting from this combination of problems: bypass airflow and recirculation air, respectively. This is especially true for high-density racks that generate more heat, particularly for servers positioned higher up and therefore farther from the perforated tiles on the ground. Given these circumstances, how do you simultaneously ensure that servers get enough cool air and hot air is expelled? Abandon your hot-aisle aspirations and use active containment chambers instead.
The most effective way to balance airflow in high-density racks susceptible to hot spots is to pipe return air directly into the ceiling plenum through a containment chamber placed atop each rack. Small fans that respond to pressure changes are embedded within those chambers. For example, if there’s a shortage of cool air reaching the IT load at any given moment, those internal fans increase RPM to maintain optimal air intake and outtake pressure. It sounds complicated, but the premise is simple. Cool air goes into the server fans, and hot air comes out the other side and is drawn in through the containment chamber. Then, it returns to the ceiling to become cold air again.

Install Fans in Front of Your ToR Switches

It’s quite possible that certain parts of your data center or server room have low-density racks that don’t necessarily warrant an active containment setup. In these cases, passive containment should suffice, with one notable exception. Many top-of-rack network switches are configured backward so connectors face the maintenance aisle. This arrangement can cause airflow reversal: back-to-front as opposed to front-to-back. The problem is compounded when ToR switches are farthest from the cool-air source in a raised-floor data center setup. Under these circumstances, it might not be worth it to implement active containment for every cabinet. The more cost-effective option is to install fans in front of the switch that draw in cool air and route it into the equipment’s air intake. This setup ensures a steady supply of treated air passes through equipment, regardless of its orientation on the rack or its distance from the cool-air source. Who said airflow had to be complicated?

Alex von Hassler’s long-term focus is the continued testing, learning, and deployment of modern IT solutions.
During his years as a DataSpan team member, his responsibilities grew from managing Salesforce CRM to improving system security, creating marketing initiatives, as well as providing continued support to the highly motivated and experienced team in an ever-changing industry. As DataSpan evolves to provide the best-fitting IT solutions to its customers, Alex von Hassler continues to hone his skills in the world of web-based ERP systems, security, and best customer engagement practices. Empowering such a dynamic team with the right tools provides him with enormous gratification.
One of the most effective and dangerous ways to run malicious software on a victim’s computer is by exploiting vulnerabilities in popular applications or within the operating system. The greatest threats are the so-called zero-day vulnerabilities: flaws in software for which the manufacturer has not yet released an update to fix the problem. To infect a system through a vulnerability, criminals often resort to mass communications via email and social networks. This type of message usually contains a link to an infected web page or a specially prepared document, the opening of which launches malicious code. In most cases, the attackers use popular software for MS Windows as their doorway, since it provides them with the greatest number of potential victims. Handling the threat requires complex security measures at a highly technological level. That is why Kaspersky Lab has developed a new technology called Automatic Exploit Prevention, designed to combat the most challenging type of threat: the exploitation of vulnerabilities.

Evolution of malware distribution methods

The Automatic Exploit Prevention (AEP) technology included in Kaspersky Small Office Security is designed to protect against malicious software that exploits vulnerabilities in programs and the operating system. This technology protects your workstation from malware that attackers place on various popular web resources. Analysis of the behavior of existing exploits, together with information about the applications most exposed to malicious attacks, gives KSOS special control over such applications. As soon as one of the at-risk programs tries to run suspicious code, the procedure is aborted and a check begins. Running executable code may be quite legitimate; for example, a program can request updates from its developer. To distinguish normal activities from infection attempts, the new Kaspersky Lab technology uses information about the most typical behavior of known exploits.
The characteristic behavior of such malicious programs helps to prevent infection even in the case of an exploit for a previously unknown zero-day vulnerability. Exploits quite often download files prior to directly infecting the system, so Automatic Exploit Prevention monitors programs accessing the network and analyzes the files they retrieve. In addition to AEP technology to combat threats from infected web resources, KSOS possesses tools to deal with web-infected workstations. One of these methods is a continuously updated database of trusted domains: web resources which, over long-term Kaspersky Lab observation, have shown no cases of infection or malware distribution. This list includes not just the sites of legitimate software manufacturers, but the sites of distributors, too – file collections in which no cases of malicious software have been recorded. If one of these sites is found to be spreading malicious code, it is immediately removed from the database of trusted domains, which can significantly reduce the probability of malware infecting users’ computers. Combined with traditional methods of protection, like the trusted-domain filtering used alongside AEP, the KSN cloud has even greater potential to repel attacks.
SQL Slammer: In 2003, a worm known as SQL Slammer exploited a buffer overflow vulnerability in Microsoft SQL Server. The worm spread like wildfire, doubling the number of infected machines roughly every 8.5 seconds and causing losses of mobile phone coverage and internet outages across the world.

The Morris Worm: A buffer overflow attack that occurred in 1988 and resulted in the compromise of more than 60,000 machines. Its author was also the first person convicted under the Computer Fraud and Abuse Act.

Avoid Using C and C++ Languages: C and C++ allow direct access to memory and perform no built-in bounds checking, which makes them vulnerable to buffer overflow attacks. Prefer programming languages such as Python, Java, and COBOL, which don’t allow direct access to memory.

Buffer Overflow Protection: Compile executable programs with protections (such as stack canaries) that detect buffer overflows on stack-allocated variables.

Static Code Analysis: Use static application analysis tools such as Kiuwan to scan your code for buffer overflow vulnerabilities.

Bounds Checking: Avoid standard library functions that are not bounds-checked, such as strcpy, scanf, and gets. Bounds checking in abstract data type libraries can also reduce the occurrence of buffer overflows.

Executable Space Protection: Mark memory regions as non-executable. Doing so prevents the execution of machine code in those regions.

Use Modern Operating Systems: Modern operating systems have runtime protections that help mitigate buffer overflow attacks, such as address space layout randomization (ASLR), which randomly rearranges the address space locations of a process’s main data areas so attackers cannot predict where important executable code lives, and executable-space marking, which flags each memory area as “executable” or “non-executable” and protects non-executable areas from exploits.

Threat actors exploit buffer overflows by overwriting the memory of the application, disrupting the normal functioning of the program.
The most famous buffer overflow attacks are SQL Slammer and the Morris Worm. Buffer overflow attacks can be prevented by using modern operating systems, executable space protection, bounds checking, static code analysis, and avoiding the C and C++ languages.
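The bounds-checking advice above can be illustrated without touching C at all. This conceptual sketch simulates a fixed-size buffer in Python: the unchecked copy silently writes past the buffer’s declared size (the analogue of strcpy or gets clobbering adjacent memory), while the checked copy rejects oversized input the way bounds-checked APIs do. The class and its method names are illustrative, not from any real library.

```python
class FixedBuffer:
    """Toy stand-in for a C char buffer with a fixed declared size."""

    def __init__(self, size: int):
        self.size = size
        self.data = bytearray(size)

    def unchecked_copy(self, src: bytes) -> None:
        # No length check: the bytearray silently grows past self.size,
        # the analogue of overwriting adjacent memory in C.
        self.data[: len(src)] = src

    def checked_copy(self, src: bytes) -> None:
        # Bounds checking: refuse input that cannot fit.
        if len(src) > self.size:
            raise ValueError(f"source ({len(src)} bytes) exceeds buffer ({self.size} bytes)")
        self.data[: len(src)] = src


buf = FixedBuffer(8)
buf.unchecked_copy(b"A" * 64)
print(len(buf.data))  # 64: the "buffer" now extends well past its declared 8 bytes

safe = FixedBuffer(8)
safe.checked_copy(b"hello")  # fits, accepted
try:
    safe.checked_copy(b"A" * 64)
except ValueError as exc:
    print("rejected:", exc)
```

The one-line length check is the entire difference between the two methods, which is why bounds-checked library functions are such a cheap and effective mitigation.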
Unlike traditional malware, which relies on a file being written to a disk, fileless malware is intended to be memory resident only, ideally leaving no trace after its execution. The malicious payload exists in the computer’s memory, which means nothing is ever written directly to the hard drive. For an attacker, fileless malware has two major advantages:

- There is no file for traditional anti-virus software to detect.
- There is nothing on the hard drive for forensics to discover.

As a rule, if malware authors can't avoid detection by security vendors, they at least want to delay it for as long as possible, which makes fileless malware a step forward in the arms race between malware and security products.

Is fileless malware new?

Fileless malware attacks have been around for 20 years at least. The first malware to be classified as fileless was the Code Red Worm, which ran rampant in 2001, attacking computers running Microsoft's Internet Information Services (IIS). But in the last few years fileless attacks have become more prevalent. Four years ago, the Ponemon Institute's "The State of Endpoint Security Risk Report" reported that 77 percent of successful attacks in 2017 were fileless, and that fileless attacks were ten times more likely to succeed. We noted the trend ourselves, with an overview of fileless attacks in 2018.

How is fileless malware delivered?

In the case of the Code Red Worm, the malware exploited a buffer overflow vulnerability that allowed it to write itself directly into memory. Modern ransomware attacks sometimes rely on PowerShell commands that execute code stored on public websites like Pastebin or GitHub. Fileless malware attacks have also been seen hiding their code inside existing benign files or invisible registry keys. Some use the so-called CactusTorch framework in a malicious document. And sometimes the malicious code does exist on a hard disk, just not on the one that belongs to the affected computer.
For example, "USB thief" resides on infected USB devices installed as a plugin in popular portable software. It gathers information on the targeted system and writes that information to the USB device.

How to create fileless malware

Our esteemed colleague Vasilios Hioureas has written a walk-through demonstrating some of his own fileless malware attacks. His write-up also nicely demonstrates what modern anti-malware solutions need to do to protect their users against fileless malware attacks, showing that modern-day solutions must contain technology to dynamically detect malicious activity on the system rather than simply detecting malicious files. Old-school signature-based detection is useless when dealing with fileless malware.

What can fileless malware do?

In essence, fileless malware can do anything that "regular" malware can do, but for practical reasons you will often see that there is a limited amount of malicious, fileless code. For more complex programs like ransomware, the fileless malware might act as a dropper, which means the first stage downloads and executes the bigger program, which is the actual payload. And, of course, fileless malware can use native, legitimate tools built into a system during a cyberattack. The most common use cases for fileless malware are:

- Initial access. The first step of a cyberattack is to gain a foothold on a system. This can be stealing credentials or exploiting a vulnerability in an access point.
- Harvest credentials. Fileless malware is sometimes used to hunt for credentials, so an attacker can use alternative entry points or elevate their privileges.
- Persistence. To ensure they have permanent access to a compromised system, an attacker might use fileless malware to create a backdoor.
- Data exfiltration. An attacker might use fileless malware to hunt for useful information, such as a victim's network configuration.
- Dropper and/or payload.
A dropper downloads and starts other malware (the payload) on a compromised system. The payload may come as a file, or it can be read from a remote server and loaded into memory directly.

Fileless malware detection

So, how can we find these fileless critters? Behavioral analysis and centralized management are key techniques for detecting and stopping fileless malware attacks. Knowing how to identify attacks and having an overview of the attack surface, however, is easier said than done. What you need is anti-malware software that uses behavioral analysis, ideally supported by an artificial intelligence (AI) component. And for a large attack surface you will need something like a Security Information and Event Management (SIEM) system to tie all the alerts and detections together. In short, detecting malware is no longer a matter of detecting malicious files, but more and more a matter of detecting malicious behavior. Stay safe, everyone!
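As a toy illustration of behavior-based (rather than file-based) detection, the sketch below scores a process command line against a few red flags commonly associated with fileless PowerShell abuse, such as encoded commands and in-memory download-and-execute. The patterns, names, and example commands are illustrative only; real products weigh far richer runtime signals.

```python
import base64
import re

# Illustrative red flags only; real engines combine many more behavioral signals.
SUSPICIOUS_PATTERNS = [
    r"-enc(odedcommand)?\b",        # base64-encoded payload passed on the command line
    r"downloadstring\(",            # classic in-memory download-and-execute
    r"\biex\b|invoke-expression",   # executing a string as code
    r"-nop\b|-noprofile\b",         # skipping profile scripts to stay quiet
    r"-windowstyle\s+hidden",       # hiding the console window
]

def score_command_line(cmd: str) -> int:
    """Count how many suspicious patterns a command line matches."""
    cmd = cmd.lower()
    return sum(bool(re.search(p, cmd)) for p in SUSPICIOUS_PATTERNS)

# PowerShell's -EncodedCommand expects base64 of UTF-16LE text.
payload = base64.b64encode(
    'IEX (New-Object Net.WebClient).DownloadString("http://example.test/x")'.encode("utf-16-le")
).decode()

benign = "powershell.exe -File nightly_backup.ps1"
shady = f"powershell.exe -NoProfile -WindowStyle Hidden -EncodedCommand {payload}"

print(score_command_line(benign))  # 0
print(score_command_line(shady))   # at least 3
```

Note that the shady command line contains no malicious file on disk at all, which is exactly why command-line and behavioral telemetry, not file scanning, is what catches it.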
When scientists need to understand how individual human genomes vary, they turn to a single, central genetic sequence: the reference genome. That genome serves as a kind of standardized measurement, a yardstick, against which all other human variation can be measured. But here’s the surprise: about 70 percent of that reference genome comes from a single man in Buffalo, New York, whose DNA was sequenced during the 1990 to 2003 Human Genome Project, the first attempt to record the full genome of a person. That raises obvious questions: Are differences from the reference genome actually abnormal? The man behind the reference genome, known as RP11, is likely of mixed African and European ancestry, but how much information can one genome give about variation among 7 billion of us?

Geneticists have toyed with a variety of fixes for the problem. Sometimes, genetic medicine practitioners use population-specific reference genomes that may be more representative of someone with sub-Saharan African or East Asian ancestry. Others have proposed creating a “consensus reference,” which would be a Frankenstein-style assembly of the most common genetic variants, all stitched together. There could even be a reference genome based on that of humanity’s most recent common ancestor. But all of these share a central limitation: reference genomes rely on the assumption that there is a baseline human genetic blueprint, and that genetic diversity must be understood as differences from that baseline.

This week, research in Science lays out a new tool for investigating the human “pangenome.” The pangenome lets geneticists map variations in an enormous number of genomes at once, which researchers say could capture complex variations and better tailor genetic medicine to people who aren’t European.
“What would be better would instead be, let’s compare to a whole diverse collection of a sampling of what we think humanity looks like,” says Benedict Paten, a computational biologist at the University of California Santa Cruz, and the senior author on the research. Instead of one single genome, says Paten, “we map out a network of possibilities.”

Imagine two people with slightly different sequences: AGTCA and ATTGA. In the pangenomic view, variations are represented as a series of branches on a tree: A leads to T or G, which leads back to T, which leads to C or G, which leads to A. Where two genomes are identical, they follow the same path. Where the genomes differ, the paths split off. Many people with similar genomes would be a bit like a bundle of strings, following the same pathway through a network of possible sequences. That makes it much easier to see variations in context, rather than as deviations from a norm. “Traditionally, when we have a reference, we talk about edits,” says Paten. “So we say, position one million and blah, there was a flip from an A to G.” In a pangenome, “instead of being described as edits, they’re just a sequence. They’re just a point in that network.”

Most immediately, that can help researchers understand deep patterns in our genes. The simplest changes, swaps of a single letter or short insertions and deletions, are easy to identify using a reference genome. But there are more complicated patterns, which scientists call structural variants. A whole stretch of DNA might be reversed or repeated, or cut out and plopped down elsewhere. And even the best reference genome is a bad tool for understanding the full complement of structural variation.
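The branching structure described above can be made concrete in a few lines of code. This sketch builds a toy graph for the article’s two sequences, AGTCA and ATTGA; it handles only same-length substitutions, whereas real pangenome graphs also represent insertions, deletions, and rearrangements:

```python
def build_graph(seqs):
    """One node per position; a node lists the bases seen there across genomes."""
    length = len(seqs[0])
    assert all(len(s) == length for s in seqs), "sketch assumes equal-length sequences"
    return [sorted({s[i] for s in seqs}) for i in range(length)]

def threads_through(graph, seq):
    """A sequence is 'in' the pangenome if it can follow some path of branches."""
    return len(seq) == len(graph) and all(base in node for base, node in zip(seq, graph))

graph = build_graph(["AGTCA", "ATTGA"])
print(graph)  # [['A'], ['G', 'T'], ['T'], ['C', 'G'], ['A']]

# Both source genomes thread through the network, and so does a recombinant
# mix of their branches; a sequence with an unseen base does not.
assert threads_through(graph, "AGTCA") and threads_through(graph, "ATTGA")
assert threads_through(graph, "AGTGA")
assert not threads_through(graph, "CGTCA")
```

Where the two inputs agree, a node holds one base; where they differ, it holds both, which is exactly the “bundle of strings” picture: each genome is one path through the shared network.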
Because genomic patterns vary considerably by ancestry, the reference genome is especially bad at explaining variation in undersampled communities, from Tuscans to Yoruba; it may simply not have an analogue for a common feature of genomes in those communities. (It’s important to remember that ancestry doesn’t usually map onto cultural definitions of race, and that differences between populations are superficial or minor next to overwhelming commonalities.) “If you’re looking at structural variants,” says Stephanie Fullerton, a bioethicist at the University of Washington who studies genetic medicine, scientists ask whether the variant is very rare, one that “could be breaking something super important? Or is this just something floating around in the human genome that’s effectively neutral?”

Because the majority of genomic research has looked at people of European ancestry, researchers often don’t understand what population-specific variants mean for the health of non-Europeans. Ambroise Wonkam, a human geneticist at the University of Cape Town, wrote in Nature earlier this year that in people of African descent, biased research means that “the risk of cardiomyopathies [a heart disease] or schizophrenia can be unreliable or even misleading using tools that work well in Europeans.” And, he pointed out, fewer than 2 percent of human genome sequences come from people in sub-Saharan Africa.

In the new paper, the researchers put the tool into action on a variety of genomic databases from across the planet. They were able to pick out one structural variant, a deletion of a gene called RAMACL, that showed up in half of people of African descent, 4 percent of Americans with mixed ancestry, and only 1 percent in other groups.
That suggests the variant is a perfectly normal part of human diversity, when it otherwise might have been flagged as rare, and potentially harmful. “This has been a problem up and down,” says Paten, “where people have studied one subpopulation and found a variant that looks interesting, and might be associated with something, but they haven’t had the context of how common that variant is in other populations.”

Fullerton agrees. “But does that help us help individual patients from underrepresented groups?” she asks. “That’s a far bigger question.” On the one hand, it could give patients clarity on whether a feature of their genome is something to worry about, and give doctors tools for understanding the links between genes and illness. “If you’ve ever had any health problems and had a doctor tell you, we don’t know what that means, it’s very frustrating, right?” she says. As genetic counseling, to guide management of breast cancer risk or inform complicated diagnoses, becomes more common, patients who aren’t represented by the reference genome could be overlooked. “So it could help with that information problem. But at the end of the day, knowing that this [gene] is causing disease doesn’t get you to, this is what we do about it. Particularly if you’re talking about patients who are lower socioeconomic status, or don’t have social capital to navigate the healthcare system, getting it answered is important, but it’s the very first step of a very long odyssey.”

And without more sequences from people who are underrepresented, particularly in the global south and Indigenous communities, there won’t be the underlying data to understand the link between disease and genetics.
How to collect and share these sequences is a whole different set of questions: the history of genetics is full of ethical failures by academic researchers. Wonkam, the South African researcher, is calling for a project to sequence 2 million genomes in Africa, and to give the owners of those genomes power over how they will be used. The pangenome provides a framework for understanding human diversity, but people will have to decide how to fill it in.
<urn:uuid:7e3618fd-5fd6-4d52-837e-9146a7d50024>
CC-MAIN-2022-40
https://dimkts.com/the-benchmark-for-human-diversity-is-based-on-one-mans-genome-a-new-tool-could-change-that/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335304.71/warc/CC-MAIN-20220929034214-20220929064214-00139.warc.gz
en
0.943008
1,675
2.921875
3
Health care is focused on using innovation and connectivity in an age of digitization to advance the field. Healthcare providers can use smartphones, cyber networks, mobile devices, and interconnections to enhance patient care and quality of care. As consumer requirements grow, health companies and technology giants have the opportunity to develop mHealth applications. Through mobile health apps, players in the healthcare sector are looking to integrate electronic health records (EHRs) and other wearable technologies. What is mHealth? Mobile health (mHealth) is the observation and sharing of health information through mobile devices, such as wearables and fitness-monitoring applications. Using mobile devices and wireless technology to monitor and care for symptoms allows physicians to make diagnoses more quickly and with fewer problems. As tech giants such as Apple and Google continue to move into healthcare, mHealth will probably become increasingly popular. By comparison, mHealth is a category of telehealth, which primarily refers to the use of mobile technologies to inform customers about healthcare. Mobile devices are used to monitor patients' exercise, heart rate, and medication adherence. Why has the healthcare industry gone digital? In addition to improving health for patients, the advantages of mHealth also improve doctors' lives. Instead of going to the hospital, physicians in rural areas can use telehealth applications. By streamlining data entry, smartphone apps can improve doctors' performance. They can be used to support decision-making or to improve communication among specialists. The typical personal interaction between patient and practitioner is evolving. New mHealth applications create a different approach to healthcare. A wide array of innovative mHealth applications can be brought to market, from a remote body-temperature sensor for a child's fever to a digital pill that monitors medication adherence.
The Importance of Mobile Health in Today's Healthcare. Increasing awareness among people: Growth has encouraged more people to seek fast medical assistance from the healthcare industry. Furthermore, people are now more conscious of their health, which makes it more important to have a practical way to track it. With this increased awareness, mobile healthcare apps are an ever-present option. Personalized treatment methods: Consumers today are searching for personalized treatment options that focus on their individual needs and deliver results accordingly. A healthcare professional, for example, might need a log of physical exercise, various measurements, and sleep quality, captured in some cases by devices fitted with features such as a heart-rate monitor or blood-pressure controls. Improved patient experience due to rising internet access: mHealth applications benefit from widening internet access and give patients fast access to basic healthcare services. Patients are closely linked to physicians, who can monitor their health status regularly using these mobile healthcare tools. When patients are treated properly, their trust in the health system is strengthened and patient-doctor relationships are reinforced. Some of the top trends making waves in the mHealth industry: Mobile devices and apps are becoming an integral part of telemedicine. Telemedicine crosses geographic boundaries and connects providers to patients, as well as providers to other providers, to expand access to underserved areas. Many leading telemedicine firms provide telemedicine applications as a way of linking and communicating with physicians remotely and on the go. Nearly 84% of young adults aged 18 to 34 have indicated they would like to be contacted via mobile devices. As more clinics practice telemedicine, the technology continues to gain momentum as the sector works out payment and reimbursement practices and which telehealth services should be used.
Through mHealth, patients are encouraged to be actively involved in their health. Patient awareness and consumer education are essential for the healthcare industry to ensure good healthcare decisions and enhance public health. With the rise of smartphones, data collection and symptom monitoring that was once carried out in the examination room is now in the hands of patients. Developers are designing software and wearable devices that can wirelessly monitor heart rate, breathing, body temperature, and more to improve consumers' health choices. Consumers track their overall health and wellbeing with over 100,000 mHealth applications on the market, becoming involved and engaged in wellness tracking and management. Use of mHealth and fitness applications grew 87 percent faster than the overall app industry in the past year, and analysts believe this trend will continue. mHealth advances healthcare's triple aim. The healthcare industry is taking big steps toward the triple aim of healthcare (improving the patient experience, improving population health, and reducing per-capita costs), and mHealth is in line with these objectives. mHealth can help reduce healthcare utilization, thereby reducing the associated costs of care. Patients want mHealth, and they want access to their health and medical records everywhere they go. More committed and informed customers will also help reduce healthcare costs, as patients will be less likely to over-use resources. Finally, all the data collected through mHealth devices, and the knowledge derived from it, may be crucial to improving not just individual health but the health of the whole population. The digitization and connectivity of the new world provide a range of opportunities to advance the industry, including the advantages of mHealth. The mHealth market paves the way for improved doctor-patient communication, efficient data management, better patient monitoring, and lower hospital admissions.
As a value proposition, mobile health apps improve outcomes in measurable, repeatable ways by connecting patients to their physicians even after office hours are over. Free Valuable Insights: Global mHealth Market to reach a market size of USD 206.1 billion by 2026. In summary, the mHealth industry will have a huge impact in the coming years. With growing technological innovation, mobile healthcare apps are being used on a wider scale across the globe, and people are becoming more aware of their health and of the benefits mHealth apps can offer. Doctors and patients alike increasingly rely on this innovation for improved outcomes and more patient-centered care.
<urn:uuid:56027516-61b8-4657-8337-f6f614cc4727>
CC-MAIN-2022-40
https://globalriskcommunity.com/profiles/blogs/top-3-trends-driving-innovations-in-the-mhealth-technology?commentId=15253904%3AComment%3A249996
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335469.40/warc/CC-MAIN-20220930113830-20220930143830-00139.warc.gz
en
0.949163
1,222
3.140625
3
Ever since Mark Zuckerberg announced that Facebook was to be renamed Meta, the hype surrounding the metaverse has been considerable. While digital twins have been around for some time and are already a $3.1 billion market, there has also been considerable interest from more knowledge-based suppliers. For instance, Microsoft has targeted its Mesh platform squarely at knowledge work by integrating mixed reality with Teams. Nvidia has also been bullish about the prospects for the metaverse, and argues that the economy of the metaverse will eventually be much larger than the economy of the physical world. Advocates of the metaverse marvel at the potential for mixing the physical and virtual realms, both by accurately representing the physical in the virtual world and by augmenting the physical world with virtual data and items. They believe this will allow people to work, socialize, and even worship in an environment where the boundaries between physical and digital are both blurred and permeable. Despite the consumer-facing version of the metaverse being more of an idea than a real thing thus far, there have already been concerns about the potential for cybercriminals and others with malicious intent to start using it for criminal activities, whether fraud, disinformation, money laundering, or child exploitation. Indeed, as research shows, malicious actors are often nothing if not creative in finding new ways to do harm, and researchers are concerned that this could even extend to terrorist activity. Suffice it to say, there has never been a technology invented to date that did not have the potential for both good and bad, and the metaverse will certainly be no different. That should not prevent us from exploring the possible harm the platform could cause to society, however. For instance, it could act as a fertile environment for recruitment, whether into criminal gangs or terrorist cells.
The web has become the bedrock of recruitment for all forms of modern extremism in recent years, and the metaverse promises to exacerbate the problem by making it easier still to meet up. Indeed, with technologies like deep fake videos rapidly progressing, it’s feasible that Osama bin Laden could be resurrected in digital form to give talks and recruit the next generation of jihadists. For many years, the web has provided an array of social tools to help groups liaise and coordinate activities. Indeed, the use of encrypted chat apps for the purpose of coordinating activities has been well documented, especially when those activities are illegal and participants wish to avoid detection. The metaverse could make things that much more serious given its potential to allow accurate replicas of physical environments to be created. Terrorists wishing to target a particular building, for instance, can create realistic replicas and walk through their planned attack with those involved. This would allow them not only to effectively train for the attack itself but also to plan alternative strategies should their initial plan be disrupted for whatever reason. The use of augmented reality could then be used when conducting the actual attack to provide directions or identify targets. That this coordination could all be done from anywhere in the world is one thing, but the terrorist could also conduct their virtual activities under the guise of an avatar that could take on whatever form they choose, thus helping them to keep their activities hidden. The rise of both virtual and mixed reality spaces also raises the prospect of new kinds of attacks on new targets. For instance, events and buildings in the virtual world could be harmed, such as by defiling religious buildings or disrupting normal activities, such as work or banking. 
Any festivals or events hosted in the virtual world would also represent an obvious target, albeit with clearly fewer implications than attacks on such events in the physical world. While such attacks may cause minimal physical harm, with serious money being bandied about on virtual goods and virtual real estate, they could nonetheless result in high financial costs for the victims. As society begins to get a better idea of just what the metaverse could be capable of, it will also hopefully gain a better appreciation of some of the harm that malicious actors could cause. It's likely that the first organizations to stake their claims to the new space will be corporations, and while there have been various murmurs about companies being better corporate citizens, it will probably require state and law enforcement agencies to truly ensure that the metaverse is a safe space. We're probably a few years away from these threats becoming a serious issue for society, but that means we have a few years to prepare the ground and make sure the platform doesn't become a force for ill in the world.
<urn:uuid:521cb17f-ca01-4ef0-9edc-ae5dcf80f1d6>
CC-MAIN-2022-40
https://cybernews.com/security/what-are-the-cyber-risks-posed-by-the-metaverse/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030336921.76/warc/CC-MAIN-20221001195125-20221001225125-00139.warc.gz
en
0.960972
939
2.53125
3
MITM (Man-in-the-Middle) is a well-known attack on local area networks. Most of us connect to public Wi-Fi without fear, including free Wi-Fi in cafes, railway stations, airports, and other public places. But you may not be aware that a hacker could be sitting on the same network, sniffing data with ease. With a man-in-the-middle attack, a hacker can steal your usernames and passwords, bank details, and login sessions for social media sites including Facebook, Instagram, and Twitter. A stolen login session can then be used to hijack your social media account. What is MITM (Man-in-the-Middle Attack)? A man-in-the-middle attack is one of the most popular and dangerous attacks on a local area network. With its help, a hacker can capture data, including usernames and passwords, traveling over the network. The attacker can not only capture data from the network but also alter it. For example, if you send a letter to your friend, the hacker can capture the letter before it reaches its destination, edit it, and then send your friend the modified letter. One mitigating factor is that this attack can only be performed on a local area network, which means one of the victims must be on the same network as the attacker. You may have heard that using a public Wi-Fi network is not as secure as your home network; the main reason is the man-in-the-middle attack. So if you are using a public Wi-Fi network or any other public network, use one of the best VPN services before accessing any website. There are some free VPNs available on the market, so you can use them if you don't want to spend money on your security, but a free VPN is not as trustworthy as a paid one.
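One common way to mount a LAN man-in-the-middle attack is ARP spoofing, where the attacker's machine answers ARP requests on behalf of other hosts (typically the gateway). A telltale sign is a single MAC address claimed by several IP addresses. Purely as an illustration (the addresses below are made up, and a real check would parse your system's `arp -a` output), here is a minimal sketch of that heuristic:

```python
from collections import defaultdict

def find_suspect_macs(arp_table):
    """Return MAC addresses claimed by more than one IP address.

    arp_table: list of (ip, mac) pairs, e.g. parsed from `arp -a`.
    One MAC answering for several IPs (often the gateway plus victims)
    is a classic indicator of ARP spoofing on the LAN.
    """
    by_mac = defaultdict(set)
    for ip, mac in arp_table:
        by_mac[mac.lower()].add(ip)
    return {mac: sorted(ips) for mac, ips in by_mac.items() if len(ips) > 1}

# Example: the attacker's MAC answers for both the gateway and a peer.
table = [
    ("192.168.1.1",  "aa:bb:cc:dd:ee:ff"),   # gateway
    ("192.168.1.23", "11:22:33:44:55:66"),
    ("192.168.1.50", "aa:bb:cc:dd:ee:ff"),   # duplicate MAC -> suspicious
]
print(find_suspect_macs(table))  # → {'aa:bb:cc:dd:ee:ff': ['192.168.1.1', '192.168.1.50']}
```

A duplicated MAC is not proof of an attack (some failover setups legitimately share MACs), but it is a cheap first check before reaching for a full network analyzer.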
<urn:uuid:3377f6e7-0756-40ac-be63-c69de8d66028>
CC-MAIN-2022-40
https://www.cyberpratibha.com/blog/man-in-the-middle-attack-prevention/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337668.62/warc/CC-MAIN-20221005203530-20221005233530-00139.warc.gz
en
0.928552
381
2.515625
3
What is Data Governance and Why is it Important? Did you know that the world's data volume will grow at a staggering 40% per year? That's according to an Aureus Analytics report that projects growth trends from 2021-2026. As far back as the early 2000s, enterprises recognized data as a strategic asset that guides strategic decision-making, promotes experimentation to learn and improve, and delivers better business results. But after public data breaches jolted well-known brands like Facebook and Yahoo, data security has become a top priority for enterprises. This led to the demand for regulatory data governance. What is Data Governance? Search "definition of data governance" in Google or Bing, and you'll find many explanations that are sometimes confused with data management. According to the Data Governance Institute (DGI), data governance is "a system of decision rights and accountabilities for information-related processes, executed according to agreed-upon models which describe who can take what actions with what information, and when, under what circumstances, using what methods." Gartner's definition is the following: it encompasses a collection of processes, roles, policies, standards, and metrics that guarantee the efficient and effective use of information, allowing an organization to reach its goals. These definitions indicate that robust governance follows internal data standards and policies to ensure data is used with integrity. It stipulates who can take what action, in what situations, upon what data, and with what methods. As new data privacy laws and regulations are passed, it will become critical for organizations to develop, implement, and follow ethically sound data governance frameworks. A concrete data governance framework covers operational roles and responsibilities, as well as tactical and strategic objectives. Who is Responsible for Data Governance?
Now that we've looked at the definition of data governance, let's discuss who's responsible for its implementation. Effective data governance involves the entire enterprise. Large organizations typically designate a data governance team responsible for setting goals and priorities, architecting the governance model, gaining budget approval, and selecting appropriate technologies. Below is a breakdown of the most common team designations. Data owner: This role should be assigned to a senior manager who specifies the organization's requirements for data and data quality. They need to be able to take initiative and make decisions for the entire organization. Their role is business-oriented: data owners are accountable for the state of the data as an asset. Data steward: This is a technical role. Data stewards are also referred to as data architects. They ensure all data standards and policies are adhered to daily. Often they are part of a central management team or IT department, as they need to be subject-matter experts for a data entity and/or a set of data attributes. Data stewards provide standardized data element definitions and formulas as well as profiling of source-system details and data flows between systems. They either take care of the data as an asset or provide consultation on how to do so. Data custodian: Also called data operators, data custodians create and maintain data based on an organization's standards. This includes business and technical onboarding, updates, and maintenance of data assets. Data custodian roles can be bestowed on employees in established business units, or bundled together with dedicated support functions, for example, shared services. Data Governance Committee: Data governance committees approve policies and standards that have to do with the governance of data. A governance committee is also responsible for handling escalated issues and may be divided into sub-committees in a large organization.
For example, you may have sub-committees for customers, vendors, products, and employees. These committees ensure that data requirements, priorities, and issues are aligned between different entities. In addition to sub-committees, most organizations have two boards: one for strategic data management topics and another for tactical data management issues. In an ideal scenario, a data governance team should include a manager, a solutions and data governance architect, a data analyst, a data strategist, and a compliance specialist, who pool their expertise to make informed and compliant decisions on behalf of their organization. The Importance of Data Governance: Data governance provides clarity and safeguards against poor data management and non-compliance. IBM recently reported that in the U.S. alone, businesses lose $3.1 trillion every year due to poor data quality. When data quality is low, it affects every aspect of a business, from marketing insights to financial planning, and hinders the achievement of important KPIs. It's impossible to make accurate decisions or take calculated risks when data quality is poor. Benefits of Data Governance: Despite some initial challenges, data governance allows enterprises to remain agile in saturated markets while staying compliant with ever-changing legislation. A vigorous data governance program keeps your data clean: shared responsibility ensures regular cleansing, updating, and purging of data. Dealing with data is laborious, but the process can be less tedious if your data management team keeps everything up to date and relevant. An effective data policy enables organizations to find and maintain useful information and reduces ROT (redundant, outdated, and trivial information). For example, when dealing with many data entry points, some data will inevitably be duplicated and/or incorrect. Your data policies should enable your team to eliminate these errors to create a single source of truthful, high-quality data.
Better Decision Making and Business Planning: We live in an age where data has become the critical driver of business decisions. Strong data governance allows authorized users to access the same data, erasing the danger of data silos within a company. IT, sales, and marketing teams work together, share data and insights, cross-pollinate knowledge, and save time and resources. Increased data centralization: Along with better decision making comes faster compliance. Businesses can choose between a low-code or no-code approach, depending on their specific needs, both of which achieve the benefit of faster compliance. Data governance software can transform the process of using masking as a data-protection technique, allowing organizations to become compliant much more quickly. As a result, months or years of training are no longer necessary. Implementing a data governance system makes it easier for your organization to be 100% compliant with the latest laws, including the European Union's General Data Protection Regulation (GDPR), the U.S. Health Insurance Portability and Accountability Act (HIPAA), the Payment Card Industry Data Security Standard (PCI-DSS), and more. Of all the motivating benefits, compliance should be at the top of your list. Legislation around data privacy will continue to evolve as technology does. Adopting a comprehensive compliance system ensures adherence to the law and avoids penalties or fines for breaching legislation. Obeying current regulatory standards also protects company data from getting into the wrong hands. Challenges of Data Governance: An average user spends 1.8 hours a day looking for the right data because of insufficient data management, which remains a foundational challenge for enterprise teams. Lack of Leadership: Data governance spans multiple departments within the business and requires clear leadership from the top down. A successful data governance program requires cross-functional collaboration.
Industry trends indicate that Chief Data Officers (CDOs) now possess the same level of prominence as Chief Information Officers (CIOs). If not a CDO, an organization needs someone in senior management whose role is focused on data policy and procedural alignment. They must enforce their authority when advocating for budget and resource allocation and be committed to upholding good data governance. Lack of Team Support: Organizations that struggle to implement strong data governance tend to rely too heavily on data scientists and expect them to shoulder most of the data-related responsibilities. Data governance involves several components that are not within a data scientist's skill set, such as setting up policy procedures. Data governance is best managed by a group of data stakeholders responsible for different parts of operational procedures and for meeting compliance standards. Understanding the Value of Data: Often there is a lack of clarity around ownership, access, management, and usage, which means that data is stored in systems that may not be accurate. This can result in ROT and general mismanagement, which has an adverse compounding effect. Technology investments alone won't improve the quality and value of existing data: data cannot govern itself and must be properly understood for effective utilization. Poor Data Management: Data management is not the same as data governance. The latter establishes policies and procedures around data; the former enacts those policies and procedures to compile and use data for decision-making. Poor data management results in unsecured data, opaque processes, data silos, and a lack of control over processes. Without consolidated policies and processes, organizations face high security risks and non-compliance. Data Governance Best Practices: Since its establishment in 2003, the Data Governance Institute (DGI) has provided a benchmark for data governance best practices.
Its framework is used by hundreds of organizations all over the world. Below are fundamental principles of good data governance: - An organization must define its data governance team with clear job descriptions, responsibilities, and duties. This includes determining who is accountable for cross-functional data-related decisions, processes, and controls. - Data governance programs must define accountabilities in a way that introduces checks and balances between business and technology teams to ensure everyone is working effectively toward a common goal. - Data-related decisions, controls, and processes must be auditable and accompanied by documentation to support compliance requirements. Furthermore, the framework must support the standardization of enterprise data governance. - Everyone in the organization must work with integrity when dealing with each other and with data. They must be honest in discussions and feedback around data-related decisions. - Data stewardship processes require transparency, so all participants and auditors know when and how data-related decisions and controls are introduced into processes. - Lastly, effective data governance programs must support proactive and reactive changes made by management to ensure the proper handling of data processes. Data Governance Tools: As data and applications have become crucial for organizations, the importance of data governance tools to safeguard the integrity of data assets has increased. Most data governance tools can help you achieve: - Empowered decision making - Improved data quality - Streamlined data management - Higher data interoperability - Superior data lineage But picking the right tools for your data governance framework is not so much about the tools as it is about knowing the goals and objectives of your own data governance strategy. Learn how to achieve data governance and protect your business-critical application data with Delphix Data Masking.
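Masking, mentioned above as a data-protection technique, replaces sensitive values with tokens that stay usable for testing and analytics but cannot be reversed. Commercial tools implement this at scale; purely to illustrate the concept, here is a minimal sketch of deterministic hash-based masking (the record fields and salt are invented for the example):

```python
import hashlib

def mask_record(record, pii_fields, salt="demo-salt"):
    """Return a copy of `record` with the named PII fields masked.

    Hash-based masking is deterministic (the same input always yields
    the same token, so joins across tables still line up) but one-way:
    the original value cannot be recovered from the masked data.
    """
    masked = dict(record)
    for field in pii_fields:
        if field in masked:
            digest = hashlib.sha256((salt + str(masked[field])).encode()).hexdigest()
            masked[field] = digest[:12]  # short, deterministic token
    return masked

customer = {"id": 42, "name": "Jane Doe", "email": "jane@example.com", "plan": "pro"}
print(mask_record(customer, ["name", "email"]))
```

Non-PII fields such as `id` and `plan` pass through untouched, so the masked records remain useful for development and reporting.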
After reading this article you will be able to: - Understand what Data Governance is - Identify challenges of Data Governance - Evaluate Data Governance best practices - See how Data Governance tools can help you
<urn:uuid:46a3e716-f062-4e47-b442-a122c9d1498f>
CC-MAIN-2022-40
https://www.delphix.com/glossary/what-is-data-governance
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337668.62/warc/CC-MAIN-20221005203530-20221005233530-00139.warc.gz
en
0.918798
2,293
2.90625
3
California Institute of Technology has opened a new facility to further research on drones, driverless cars, robotics, and machine learning technologies. Caltech said Tuesday it established the Center for Autonomous Systems and Technologies to serve as a hub where university, industry, and government researchers, such as experts from NASA's Jet Propulsion Laboratory, can collaborate on developing autonomous systems. One of the facility's key goals is to develop an unmanned ambulance platform for urban applications that will autonomously transport medical patients. "The CAST team will also work on the next generation of drones and robots to explore the solar system, including submersible vehicles designed to operate in the ice-covered oceans of Europa, a moon of Jupiter," says Woody Fischer, steering committee member for CAST. "The goal is to teach autonomous systems to think independently and react accordingly, preparing them for the rigors of the world outside of the lab," says Mory Gharib, director of the CAST facility. CAST will feature an assembly room with an oval track for robots, an aerospace robotics control laboratory designed to help researchers fly modified hovering spacecraft, and an enclosed aerodrome where researchers can test-fly unmanned aerial systems. The hub will also be a living experiment, designed to learn tendencies within the facility and help run itself using the in-house robots.
<urn:uuid:99b76661-6469-4217-859d-3f7b4a6ae71c>
CC-MAIN-2022-40
https://blog.executivebiz.com/2017/10/caltech-opens-autonomous-tech-research-hub/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337971.74/warc/CC-MAIN-20221007045521-20221007075521-00139.warc.gz
en
0.931654
284
3.3125
3
The practice of Machine Learning (ML) is no longer an exotic concept for businesses. Whether you have a small business or a Fortune 500 enterprise, the chances are that you can benefit from the nuances of AWS Machine Learning. While prominent organizations use Machine Learning differently than their smaller siblings, there is a multitude of ways in which even small businesses can benefit from Machine Learning techniques. What Exactly is Machine Learning? Machine Learning is a type of artificial intelligence which uses programs, algorithms, and data to drive learning and automation. Under normal circumstances, it's something you most likely already encounter on a day-to-day basis. For example, if you use software like Amazon's Alexa, Microsoft's Cortana, Google's Assistant, or Apple's Siri, then you have already had a taste of the power of Machine Learning. On the other hand, Machine Learning and Artificial Intelligence can be used in businesses as well; they aren't just for asking about the latest weather conditions. For instance, many websites make use of chatbots to assist customers. Businesses, irrespective of their size, are using Machine Learning to help customers while driving efficiency and monitoring social media accounts. How can AWS Machine Learning Help Business? Amazon has established itself as a leader in customer service and operations. That execution can be seen in the Amazon Web Services (AWS) Machine Learning tools, which are aimed at data scientists, researchers, developers, and even small businesses eager to use Machine Learning to their advantage. The advantages of Machine Learning are not limited to just the essentials. Solutions such as Amazon Comprehend and AWS DeepLens are some of the top-notch services being provided by Amazon these days.
Through these services, developers gain the ability to use neural networks to get insight into computer vision projects. Developers can also train chatbots that cater to a customer's specific incoming request. Machine Learning and Artificial Intelligence can even be utilized to organize a website's content, as various defined logical algorithms come into play. A small business can also coordinate its website's inventory using artificial intelligence. If you are running a small business and don't wish to dabble in artificial intelligence alone, you can count on the consulting services of companies such as Idexcel. Experienced teams are always available to help businesses of any scale accomplish their goals and expand their cloud repertoire. How does AWS Machine Learning work with Small Businesses? Small businesses often need to use predictive models to enhance their revenue and sales models, and one of the ways to improve these models is through the use of machine learning. Entrepreneurs who run small businesses often don't have sufficient time or resources to sift through massive amounts of data and derive intelligent decisions from it; this is where machine learning techniques come to the rescue. Such business owners can benefit immensely from the use of AWS cloud-based services and AWS Machine Learning. The vast amounts of data collected can be sorted, sifted, and analyzed to efficiently deliver helpful business insight. Through the use of machine learning, small businesses can save on operating costs while making sounder decisions and earning better profits than before. However, it is important to know that small businesses cater to customers at different stages. For this reason, it's imperative to understand how customer behavior can change over time. Through predictive analytics and machine learning, such analysis can become a breeze.
No matter what type of business you have, machine learning can come to your aid at any given point in time. From data collection to data storage and insights, you can have it all; it not only helps enhance your business's image through the use of chatbots but also helps you manage your inventory efficiently. Such is the power Machine Learning gives to its users. For every small business owner out there, there is a unique benefit to be had from AWS Machine Learning techniques. It all depends on how you use the services to meet your company's needs and wants at the end of the day.
<urn:uuid:7e4429c1-19c4-48d7-a376-ec85b9f036be>
CC-MAIN-2022-40
https://www.idexcel.com/blog/how-your-small-business-can-benefit-from-machine-learning/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337971.74/warc/CC-MAIN-20221007045521-20221007075521-00139.warc.gz
en
0.941717
869
2.6875
3
Remote teams and distributed home networks have made it more difficult for IT teams to provide centralized support. Hackers have taken advantage of the increased attack surface to release new types of cybersecurity threats and CIOs are worried about remote employees falling prey to increased phishing and malware attacks. These issues underscore the importance of having a strong security network with strategies for preventing data loss, cyber-attacks, and loss of revenue due to security breaches. What is the purpose of a network security audit? A network security audit is a complete assessment of your company’s security network. It includes a review of company policies, applications, hardware inventory, and practices for security faults and risks. The objective of an effective network security assessment is to: - Define and classify your resources, including core business processes, networks, devices, and data to determine what systems are at risk for a breach. - Identify common entry points and hidden data sources that may allow easy access to sensitive information. Focus should be on end-point devices like smartphones and tablets as well as virtual or physical services that may not be secure. - Estimate costs and determine the impact of a cyber-attack to your critical assets and processes. Consider threats such as exposure of personal information, IP theft, data loss or destruction, and interruption of business processes - Identify critical data, sensitive personal identifiable information (PII), or corporate information that may be vulnerable to unauthorized access or external cyber-attacks. What are the types of network security audits? There are two types of network security audits: vulnerability assessment and penetration testing. 1. Vulnerability assessment A vulnerability assessment is an extensive evaluation of your IT infrastructure to identify, classify, and prioritize security vulnerabilities. 
It assesses whether exposure to a known vulnerability has occurred, the severity of the vulnerability, and outlines the steps necessary to remedy the exposure. A vulnerability assessment can provide a detailed view of the potential security risks to your organization, help you determine the effectiveness of your current security controls, and better protect your business from further cybersecurity threats. 2. Penetration test A penetration test is a simulated cyber or social engineering attack against your network that looks for exploitable security vulnerabilities. Penetration testing can use manual penetration testers or be automated with penetration testing tools. Insights gained from the testing process can help you determine how an attack might overcome your security infrastructure so that you can tighten your current security controls and remedy vulnerabilities found. How to conduct a network security audit Assess the vulnerability of your infrastructure Vulnerabilities within your network can arise from internal sources, like employees with poor security practices or external sources like third-party vendors with incorrect levels of access. A comprehensive vulnerability assessment should include: - A scan of all network ports as well as other network paths where an attacker might gain access e.g. Wi-Fi and IoT, and network services like HTTP and FTP - Network enumeration to find devices that might identify the operating system of remote hosts, - A review of third parties and remote workers and the level of security access they have to your internal company network and sensitive data. - A review of security policies related to BYOD, and email usage - An assessment of the possible risks from natural disasters, critical system failures, and human error should also be considered. Determine information value Highly sensitive data is often subject to regulatory or industry-specific requirements and can incur high penalties if lost or exposed to malicious actors. 
To manage potential security risks, it is critical to understand the value of the information you are protecting. Consider the following when determining the value of your business information: - Are there financial or legal penalties associated with the loss of this data? - Is this information valuable to a competitor? - If lost, how long would it take to recreate the data, and what would be the cost? - How much revenue would be lost if this data was lost or exposed? - Would the loss of this data result in damage to the company’s reputation? Take inventory of your resources An important step in conducting a network security audit is taking inventory of your resources. This will give you a high-level view of your network and its security controls. Evaluate all of your business processes to identify the critical assets that need to be assessed and prioritize to determine which assets should be assessed first. Not all companies have the budget to assess their entire network. Focus on your business-critical resources. Document the results in an assessment report Document the results of the assessment in a report to assist management in decision making related to budgets, policies, and procedures. The assessment report should describe the risks, exploits, and value of each vulnerability and outline its potential impact. It should also detail the likelihood of the vulnerability to occur and include control recommendations for preventing its occurrence. Implement security controls to improve cybersecurity Make a list of any weaknesses found in your security networks and develop additional security controls to address them. Controls can be classified into two groups: preventative and detective controls. The aim of preventative controls is to stop an attack from happening; detective controls are designed to discover when an attack has occurred. 
Security controls can be implemented through hardware or software, two-factor authentication, encryption, network intrusion detection, and other technical controls or by security policies. Monitor continuously for issues/changes It's important to continuously monitor your entire IT architecture for issues on a day-to-day basis. This ensures continued compliance with your implemented controls and ensures that you are immediately aware of any changes in your security status. Continuous security monitoring involves automated monitoring of security controls, vulnerabilities, and cybersecurity threats. It alerts your organization to compromise and security misconfigurations in real time. Security ratings can also be implemented as part of continuous security monitoring. These ratings are created by a trusted independent security rating platform and are an objective indicator of a company's cybersecurity status.
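The port-scanning step listed under the vulnerability assessment above can be sketched with Python's standard library. The host and port list below are placeholder assumptions, and a real audit would use a dedicated scanner such as Nmap; only scan systems you are authorized to test:

```python
import socket

def scan_ports(host, ports, timeout=0.5):
    """Return the subset of `ports` that accept a TCP connection on `host`."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(timeout)
            # connect_ex returns 0 when the connection succeeds
            if sock.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports

if __name__ == "__main__":
    # Placeholder host and ports; only scan hosts you are authorized to audit.
    print(scan_ports("127.0.0.1", [22, 80, 443, 3389]))
```

The open-port list produced here would feed directly into the assessment report described above.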
<urn:uuid:49449eae-2af8-4b78-918d-2a1768fccd0b>
CC-MAIN-2022-40
https://cybersainik.com/how-to-perform-a-network-security-audit/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334912.28/warc/CC-MAIN-20220926144455-20220926174455-00339.warc.gz
en
0.929761
1,218
2.75
3
A giant asteroid that resembles an eerie human skull is set to fly past Earth in 2018. The asteroid, designated 2015 TB145, is more than 600 meters across. The Halloween asteroid last passed Earth on October 31, 2015, although scientists from the University of Hawaii had only discovered it on October 10, 2015. Now the unsettling space rock visits again in November 2018 as it heads toward Earth. However, it will not come anywhere near as close as before for a long time to come, which has intrigued the scientists keeping an eye on the Halloween asteroid. The Halloween asteroid resembles dim carbonaceous meteorites, and NASA captured a number of radar images when the asteroid came within 480,000 km of our planet. Some striking data about its characteristics The asteroid's last encounter with Earth provided some striking data about its characteristics and behavior. The Halloween asteroid flew by Earth at only 1.3 lunar distances, around 490,000 km, and did not pose any risk to planet Earth. Two years ago, a NASA scientist stated that NASA's IRTF (Infrared Telescope Facility) data may show that the object is a dead comet, yet in the Arecibo images it appears to have donned a skull costume for its Halloween flyby. The asteroid's rotation period is 2.94 hours, which is the approximate length of its day. The object measures between 625 m and 700 m, its shape is a slightly flattened ellipsoid, and its rotation axis was roughly perpendicular to Earth at the time of its closest approach. The skull rock is slightly more reflective than charcoal, and the way it absorbs heat is consistent with that of other asteroids. Thomas G. Muller, a Max Planck Institute researcher, said the next notable visit from the Halloween asteroid will come 71 years from now. 
The next, somewhat more exciting encounter will occur around Halloween in the year 2088, when the object approaches Earth at around 20 lunar distances.
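The lunar-distance figures quoted in the article are easy to sanity-check; taking the average Earth-Moon distance as roughly 384,400 km:

```python
LUNAR_DISTANCE_KM = 384_400  # average Earth-Moon distance

def lunar_distances_to_km(ld):
    """Convert a distance expressed in lunar distances (LD) to kilometres."""
    return ld * LUNAR_DISTANCE_KM

# The 2015 flyby at 1.3 LD and the predicted 2088 pass at about 20 LD:
print(f"1.3 LD is about {lunar_distances_to_km(1.3):,.0f} km")
print(f"20 LD is about {lunar_distances_to_km(20):,.0f} km")
```

At 1.3 LD this gives just under 500,000 km, consistent with the article's rounded figure.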
<urn:uuid:bb66a3fe-83e9-414b-a404-3c60dedbccb4>
CC-MAIN-2022-40
https://areflect.com/2017/12/27/skull-molded-space-rock-will-voyage-by-earth-on-2018/?amp
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335326.48/warc/CC-MAIN-20220929065206-20220929095206-00339.warc.gz
en
0.940298
433
3.078125
3
We have been exhausting time and resources pushing aside the benefits of dark data, quickly dismissing the potential it can offer a business or industry. Dark data is a type of unstructured, untagged and untapped data that has not yet been analysed or processed. It is similar to big data, but differs in how it is often neglected by business and IT administrators in terms of its value. With 80 per cent of data considered as dark data, there is undeniably enough information for companies to utilise to their advantage. Therefore, it is now time to bring light to the unknown gold mine of dark data. For most businesses, understanding the vast amount of dark data can be an overwhelming challenge. Generally, businesses will use excuses like legality issues, legacy workflows or architectural costs as to why they have been reluctant to maximise their dark data. They may also fear that accessing dark data can occupy valuable time which could be used for other tasks and unsettle employees with new ways of operating. Of course, this disruption can be kept to a minimum when implemented precisely with the correct tools. The rise of machine learning For the majority of companies today, modifying unstructured data into readable assets involves lengthy processes that are mostly manual. To generate better value, businesses need to automate these processes and minimise resources spent on mundane tasks, and this is where machine learning can help. Machine learning is an application of artificial intelligence (AI) that provides systems with the ability to automatically learn and accomplish the equivalent of continuously running programmes in a fraction of the time. Businesses can utilise machine learning to build models that work in a particular business function or industry. In the case of dark data, the process of learning starts with data observations to look for patterns that will help make better decisions in the future based on previous examples. 
Typically the system alerts business users to exceptions, and remembers when these are addressed so that it can automatically offer a solution the next time a similar event occurs. If users keep accepting the recommended solution, the system will learn accordingly. Structural changes are necessary when implementing machine learning, which costs time and money. But this can be justified in the long term as the business benefits will guarantee a high return on investment. Unleashing the benefits At a first glance, dark data can appear unintelligible but when approached correctly, it can unlock benefits for businesses and boost the bottom line. The key to uncovering dark data's secrets lies in the ability to understand the relationships between seemingly unrelated pieces of information. Machine learning plays a critical role in helping businesses uncover information and reveals a host of patterns or insights that would have otherwise been overlooked.[…] (Image credit: Pitney Bowes Software)
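The learn-from-exceptions behaviour described above, where the system remembers how users resolved past exceptions and offers that fix when a similar event recurs, can be sketched as a simple nearest-neighbour lookup. The keyword encoding and the sample events here are toy assumptions; a production system would use richer features and a proper ML model:

```python
from collections import Counter

def similarity(a, b):
    """Toy similarity: size of the keyword overlap between two events."""
    return sum((Counter(a) & Counter(b)).values())

class ExceptionMemory:
    """Remembers how past exceptions were resolved and suggests that fix again."""

    def __init__(self):
        self.cases = []  # list of (keywords, resolution) pairs

    def record(self, keywords, resolution):
        self.cases.append((keywords, resolution))

    def suggest(self, keywords):
        """Return the resolution of the most similar past exception, if any."""
        if not self.cases:
            return None
        best_keywords, best_resolution = max(
            self.cases, key=lambda case: similarity(case[0], keywords)
        )
        return best_resolution if similarity(best_keywords, keywords) > 0 else None

memory = ExceptionMemory()
memory.record(["invoice", "missing", "po-number"], "route to accounts payable")
memory.record(["duplicate", "customer", "record"], "merge records")
print(memory.suggest(["invoice", "missing", "total"]))
```

Each accepted suggestion would be recorded in turn, which is the "learn accordingly" loop the article describes.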
<urn:uuid:8edfc4e7-769a-492a-9a94-e083cdaa2bca>
CC-MAIN-2022-40
https://swisscognitive.ch/2019/09/14/the-evolution-of-machine-learning-to-command-dark-data/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335491.4/warc/CC-MAIN-20220930145518-20220930175518-00339.warc.gz
en
0.933176
553
2.921875
3
Many of today’s automobiles leave the factory with secret passengers: prototype software features that are disabled but that can be unlocked by clever drivers. In what is believed to be the first comprehensive security analysis of its kind, Damon McCoy, an assistant professor of computer science and engineering at the NYU Tandon School of Engineering, and a group of students at George Mason University found vulnerabilities in MirrorLink, a system of rules that allow vehicles to communicate with smartphones. MirrorLink, created by the Connected Car Consortium, which represents 80 percent of the world’s automakers, is the first and leading industry standard for connecting smartphones to in-vehicle infotainment (IVI) systems. However, some automakers disable it because they chose a different smartphone-to-IVI standard, or because the version of MirrorLink in their vehicles is a prototype that can be activated later. McCoy and his colleagues found that MirrorLink is relatively easy to enable, and when unlocked can allow hackers to use a linked smartphone as a stepping stone to control safety-critical components such as the vehicle’s anti-lock braking system. McCoy explained that “tuners” – people or companies who customize automobiles – might unwittingly enable hackers by unlocking insecure features. “Tuners will root around for these kinds of prototypes, and if these systems are easy to unlock they will do it,” he said. “And there are publically available instructions describing how to unlock MirrorLink. Just one of several instructional videos on YouTube has gotten over 60,000 views.” The researchers used such publically available instructions to unlock MirrorLink on the in-vehicle infotainment system in a 2015 vehicle they purchased from eBay for their experiments. The automaker and supplier declined to release a security patch – reflecting the fact that they never enabled MirrorLink. 
McCoy pointed out that this could leave drivers who enable MirrorLink out on a limb.
<urn:uuid:fcd1fe81-6f90-4469-a350-a2cb325106f8>
CC-MAIN-2022-40
https://www.helpnetsecurity.com/2016/09/01/vulnerabilities-cars-connected-smartphones/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030336978.73/warc/CC-MAIN-20221001230322-20221002020322-00339.warc.gz
en
0.928447
401
2.609375
3
If you’re a computer security professional, the question is always there – lurking in the background: Is your security perimeter safe? Even if you have a strong security stance and you’re using a range of tools to keep out the cyber criminals – this may no longer be enough. As the approaches taken by intruders continue to evolve, many organizations find they require digital forensics (DF) to identify cyber security threats and thwart attacks. But what is DF, and how does it relate to other aspects of cyber security? Digital Forensics, Defined DF is an emerging discipline in cyber security, a proactive process that involves uncovering and interpreting electronic data. Frequently, data that is obtained is used in a court of law, though DF is also utilized in the private sector. DF preserves evidence in its most original form while performing a structured investigation –collecting, identifying, and validating information for the purpose of reconstructing past events. Typically, DF involves four steps: acquisition, recovery (including creating working copies), analysis, and presentation. As defined by NIST’s Guide to Integrating Forensic Techniques into Incident Response, this applies to the following categories of data sources: files, operating systems, applications, and network traffic. Keeping Ahead of the Game DF identifies suspicious activity and determines whether an attack has taken place that bypassed the security products that you installed. Advanced attacks may bypass security products by identifying the rules they run by and figuring out how to sidestep them. Or, an attack may utilize encryption or sandbox evasion to avoid being noticed. In this, DF differs significantly from the world of alert management – where the starting point is generally the review of alerts coming through an EDR (endpoint detection and response) tool, then determining an appropriate response. (For more insight regarding the alert management lifecycle, click here.) 
The Challenge – Obtaining Original, Accurate, Repeatable Data A fundamental goal in the field of DF is ensuring that evidence is forensically sound – something that isn’t as easy as it may seem. Threat intelligence analysts must identify ways of duplicating or preserving evidence while ensuring the process itself has not inherently changed the data. The data must be in its most original form. As pointed out in this piece by Blaine Stephens of interworks, without integrity, evidence loses its value and admissibility in a court of law. Likewise, data needs accurate time stamps. An accurate timeline demonstrates who did what, and when; but in digital data, time stamps may be absent – and where they exist, they can be spoofed. Reconstructing the timeline is complex – in fact, the process is complex enough to require application of machine learning techniques. Repeatability is another challenge. To be able to state conclusively that Action A caused Result B, the concept of repeatability must be introduced – something that is difficult to obtain. And because this is an emerging field, there are no agreed standards. There are very few researchers – and each threat hunter is working in an independent fashion. This last issue is compounded by the sheer volume of vendors, devices, software, and protocols in the industry, which creates a large amount of data that must assessed – and means the investigative process is inherently complicated. Last but not least is the challenge of finding the right people for this work. Bottom line: DF expertise is hard to come by. A general shortage of cyber security talent plagues businesses nationwide – and, according to a recent report by CyberSeek, the problem is intensifying as the demand for cyber security workers is increasing continuously across the United States (and in other regions the situation is often more acute). 
In fact, there were 301,873 cyber security job openings in the private and public sectors between April 2017 and March 2018, including 13,610 public sector openings. As a result, organizations may want to consider turning to MSSPs (Managed Security Service Providers) to provide the necessary expertise rather than looking for a cyber threat intelligence expert to maintain in-house. Working Effectively with DF within Your Organization As the need for DF continues to grow, organizations need to find ways to integrate it into the organizational work process. As pointed out by NIST (see page ES-2 of their report, Guide to Integrating Forensic Techniques into Incident Response), organizations must define policies addressing major forensic considerations, such as contacting law enforcement, performing monitoring, and conducting reviews of forensic procedures. Likewise, organizations should maintain guidelines for forensic tasks that consider both the organization’s policies and other applicable laws and regulations, while supporting appropriate use of forensic tools. On the departmental level, IT and cyber security teams need to be prepped to participate in forensic activities. This is challenging, as IT groups notoriously are overworked and often are reticent about participating in DF-related activities. Finally, digital forensic investigators need the full support of C-level management – particularly, for forensic actions that may have an impact on the organization’s operations, such as affecting mission-critical systems. CyberProof’s Next-Gen SOC – a Use Case The highly experienced cyber threat intelligence investigators & analysts that work in CyberProof’s security automation and orchestration platform provide in-depth analysis of malware and forensics. The defined process for DF followed by CyberProof’s team begins when an alert is triggered in the SOC’s platform and escalated to Tier 3 teams or higher. 
If the alert is identified (by automated or manual investigation), an incident is created and tagged with a severity level – according to Playbook procedures. If the incident requires deep investigation it will quickly be escalated and assigned to Tier 3-4 team members, such as the DF analyst. At this stage, a request for data duplication is triggered. Requests are submitted for data, including .pcap files, FTK images, AccessData images, Autopsy output, etc. The DF analysts go through the different files and data, looking for evidence. When evidence is uncovered, they go through a process of validation. Once the evidence has been validated, the incident is updated in CyberProof's threat intelligence platform. Where required, the incident continues to be investigated by the SOC team members or is closed and a report issued for the client. Each & every step is backed by a "chain of custody" procedure in accordance with the relevant local applicable laws. What's Next? Digital Forensics Best Practices Despite the lack of industry standards, a clear set of DF best practices should be followed. As the field of DF continues to expand and become more critical in cyber security operations, CISOs and CIOs can sidestep trouble and optimize the time invested by threat hunters by adopting these best practices: Time stamps: Synchronize times on all servers to avoid trouble later; with accurate time stamps, you can develop accurate timelines and use material in court. Log retention: Special attention must be paid to define the right level of log retention and other key information to enable digital forensics. One of the major challenges when setting up cyber defense is making sure the data required for forensics is available (and correctly archived). Imaging software: Become more familiar with imaging software so you can make duplications of data at the endpoint and on mobile devices. 
MDM: Investigate options for MDM (mobile device management), which increases security and awareness and helps organizations better handle the challenges of mobile devices. Open-source: Look into open-source forensic products including Autopsy, The Sleuth Kit, and FTK.
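The imaging practices above all hinge on one mechanism: cryptographic hashing, which proves that a working copy is bit-for-bit identical to the original evidence. A minimal sketch using Python's standard library (a real lab would pair this with a write blocker and chain-of-custody records):

```python
import hashlib

def sha256_file(path, chunk_size=1 << 20):
    """Stream a file through SHA-256 so large images need not fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_copy(original_path, copy_path):
    """A working copy is only forensically sound if its hash matches the original."""
    return sha256_file(original_path) == sha256_file(copy_path)
```

Recording the original's hash before any analysis begins, and re-hashing every working copy, is what lets an analyst show in court that the evidence was not altered.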
<urn:uuid:7dc5b0d0-b2df-4c3d-9cd6-463bd9518921>
CC-MAIN-2022-40
https://blog.cyberproof.com/test/a-needle-in-a-haystack-why-your-organization-needs-digital-forensics
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337404.30/warc/CC-MAIN-20221003070342-20221003100342-00339.warc.gz
en
0.93926
1,589
3
3
Every administrator who is not new to the virtualization world has heard about Hyper-V. It is a type 1 (or bare metal) hypervisor from Microsoft that allows the user to create and run virtual machines on a server or computer. For those who don't know about type 1 and type 2 hypervisors, let me briefly explain both: - Type 1—A type 1 hypervisor acts like a low-footprint operating system and runs directly on top of the host computer's hardware. This is why it is also known as a bare metal or native hypervisor. Common examples of type 1 hypervisors are VMware ESXi, Citrix XenServer, and Microsoft Hyper-V. This type of hypervisor is generally used for production-ready virtualization, such as data centers. - Type 2—A type 2 hypervisor runs as a software layer on an operating system, like other computer programs. It is also known as a "hosted" hypervisor. Common examples of type 2 hypervisors are VMware Player, Oracle VirtualBox, and Parallels Desktop. The type 2 hypervisor allows an end user to run virtual machines on a personal computer, so this type of hypervisor is generally used for training, development, and research. Many admins are still confused as to whether Hyper-V is a type 1 or type 2 hypervisor. Since it appears to be running on top of the operating system itself, most people think it is a type 2 hypervisor, but in reality, it's not. Let me tell you why. When you install or enable the Hyper-V role or feature on a system, the original operating system is converted into a virtual machine, and a layer of Hyper-V hypervisor is added under it. This is why your system is restarted when you install the Hyper-V role or feature. After restart, the original Windows operating system starts working as a virtual machine on top of the Hyper-V hypervisor. In a nutshell, Hyper-V acts as a type 1 or bare metal hypervisor under the hood. In this article, I will cover how you can manage Hyper-V completely using PowerShell. 
There are some tools you can use to manage Hyper-V servers using the GUI, but this guide is for the admins who love doing things with PowerShell rather than using the GUI tools. Hyper-V is available for both server and client operating systems. In server operating systems (Windows Server 2016, Server 2019, Server 2022), it is available as a server role, and on Windows client operating systems (Windows 10 or Windows 11), it is available as an optional feature. Microsoft likes to differentiate between them as Hyper-V on Server and Hyper-V on Windows. So, from here on in, we'll use the official naming scheme. To install the Hyper-V role or feature, you need to use different PowerShell commands depending upon the operating system type. Install Hyper-V on a server To install Hyper-V on Server, you can use the following command in an elevated PowerShell session: Install-WindowsFeature -Name Hyper-V -IncludeAllSubFeature -IncludeManagementTools This command installs Hyper-V, including all the features and management tools on the Windows Server operating system. If you are trying to install the role on a remote server, you can specify the server name using the -ComputerName parameter. You can also use the -Restart parameter to restart the server automatically to complete the installation. Install Hyper-V on Windows To install Hyper-V on Windows, you can use the following command in an elevated PowerShell session: Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V-All -All This command enables the specified optional Hyper-V Windows feature, including all dependencies and management tools. It prompts you to restart the system to complete the installation, where you can type "Y" to reboot straight away or "N" to manually reboot later. To suppress this reboot prompt, use the -NoRestart parameter. In my next article I will explain how to create a virtual Hyper-V switch with PowerShell.
<urn:uuid:9e53e661-48bb-4ecf-be7d-4e8bfa38d9f9>
CC-MAIN-2022-40
https://4sysops.com/wiki/install-hyper-v-with-powershell/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337516.13/warc/CC-MAIN-20221004152839-20221004182839-00339.warc.gz
en
0.864541
850
3.015625
3
Ah, security. It is the dulcet tone of a symphony that we play over and over in the IT world. IoT (Internet of Things) and the myriad of connected devices allow us some intriguing security options. For example, in a mesh array of sensors, you could effectively force users to correctly identify themselves more than once. A user would identify herself repeatedly, but you wouldn’t prompt the user more than one time. By using an array of sensors, you could effectively determine the employee’s identity many times before she has even entered your facility. Or, for that matter, before the employee has put her badge in the badge reader to enter the building. Facial recognition using IoT video sensors is a quick and easy solution. I see you. I look you up. I know who you are and let you through to the next array of sensors. But I can also measure your voice as it has a unique timbre, analyze your particular speech patterns, and measure your gait. The very way you walk can also be used to not only determine you are who you say you are, but can also be used to verify you should be where you are. You can deploy these sensors in places people wouldn’t suspect them. Parking lots make for great places to observe someone’s gait. You can do further security checks as they stand and wait to get their bags checked or to get visitor badges. You record and identify their voice. You could even use the unique identifier that is the IMEI of a cellular phone to determine if someone is who they say they are. You can steal someone’s phone but not their voice and gait. Even the most practiced mimic is off just enough that we are able to tell the difference. Previously, I wrote about the internet of illness. The end game in that scenario involves your employer scanning you as you ‘badge’ into your Workplace, or greeting you at the visitor entrance, and then basically telling you to go home. You are sick, the system tells you. Come back when you are well. 
So you take a sick day or work from home, and the infection stops with you. Instead of walking into the office and infecting 200 people, you stay home and infect your cat. The same can be applied to people entering a building. These sensors allow you to have good solid security at your entrances quickly. Over time, organizations can share data about people that visit them. The US Government and other governments could sell security information about the gait, voice, and other attributes that can be quickly captured. Imagine security of the day after tomorrow. You are asked to step into a booth; the booth has a camera and a microphone. You are asked to say three sentences. A single ding and the door opens and you pass through a metal detector. The booth finds no weapons, no threat, and that you are who you say you are. Ah, security. You could, in cases where there are other issues and concerns like bombs and other risks, create artificial barriers to walking up the walkway to your building and add bomb and chemical sensors to the solution as well. Expense, however, is always the issue in terms of security. You can’t spend so much money on security that you bankrupt the company. Likewise, you can’t spend so much time and energy that the organization can’t get any work done because it takes a half hour to get from your car to your desk (even though you only travel a total of 10 feet). IoT devices will add a great deal of information quickly for people that end up staffing visitor desks. It will allow organizations to increase the safety of their workers by reducing the number of ways you can get weapons or dangerous devices into the building. If they somehow get past the entrance, you can lock them into stairwells or shut off elevators and let them know you know they have a gun or knife, and that you have notified the authorities. 
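The repeated, unprompted identity checks described earlier, gait in the parking lot, voice at the desk, face at the door, must ultimately be combined into a single decision. One hedged sketch, assuming the sensors produce independent match probabilities (the readings below are invented):

```python
def fuse_confidences(probabilities):
    """
    Naive-Bayes fusion of independent sensor match probabilities.
    Each input is P(claimed identity is correct) according to one sensor.
    """
    odds = 1.0
    for p in probabilities:
        p = min(max(p, 1e-9), 1 - 1e-9)  # clamp to avoid division by zero
        odds *= p / (1 - p)
    return odds / (1 + odds)

# Invented readings for three checkpoints: gait, voice, and face.
readings = {"gait": 0.80, "voice": 0.90, "face": 0.95}
print(f"combined confidence: {fuse_confidences(readings.values()):.4f}")
```

The point of the fusion is exactly what the article argues: three mediocre sensors, each checked without prompting the user, can together yield a far stronger identity decision than any single badge swipe.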
The application of security for and of people is a great use of IoT sensors and devices, and can be effectively utilized while still managing costs. It will greatly increase the overall security of your building and people. You can have people log into the system over and over again, without them having to sign in more than once!

By Scott Andersen

Scott Andersen is the managing partner and Chief Technology Officer of Creative Technology & Innovation. During his 25+ years in the technology industry, Scott has followed many technology trends down the rabbit hole. From early adopter to last person selecting a technology, Scott has been on all sides. Today he loves spending time on his boat, with his family, and backing many Kickstarter and Indiegogo projects.
Perceptual quality lightens the weight of your images while maintaining the best visual quality. This is done by intelligently calculating and applying the precise degree of compression that balances maximum byte reduction against visual quality. The resulting reduction in quality is imperceptible to the human eye. Previewing the impact of perceptual quality on your images can help you better understand why some images were not converted to the expected format. For example, you may expect the images served to a Google Chrome browser to be in WebP format. However, if conversion to this format would result in a perceptible reduction in visual quality, the image will be compressed and converted to another format that retains the perceptual quality of the original image. See Create perceptual quality previews for guidance on creating these previews.
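The behaviour described can be pictured as a simple decision function: convert to the browser's preferred format only when the predicted perceptual-quality score stays above a floor, and otherwise fall back. The scoring scale, threshold, and fallback format in this sketch are illustrative assumptions, not Akamai's actual algorithm.

```python
# Toy model of perceptual-quality-driven format selection (not the real algorithm).
QUALITY_FLOOR = 0.9  # assumed minimum acceptable perceptual-quality score

def choose_format(browser_preference, predicted_quality):
    """Pick the preferred format only if conversion preserves perceptual quality.

    predicted_quality: estimated perceptual score in [0, 1] after converting
    to the browser's preferred format (e.g. WebP for Chrome).
    """
    if predicted_quality >= QUALITY_FLOOR:
        return browser_preference
    return "jpeg"  # assumed fallback that retains the original's visual quality

print(choose_format("webp", 0.96))  # quality preserved -> "webp"
print(choose_format("webp", 0.70))  # perceptible loss  -> "jpeg"
```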
Deep learning is a subset of machine learning that involves the application of complex, multi-layered artificial neural networks to solve problems. Deep learning techniques are applicable across diverse problems and are used in all three machine learning subcategories discussed before. For example, a deep neural network classifier is a form of supervised learning, a deep neural network autoencoder is a form of unsupervised learning, and a deep neural network Q-function is a form of reinforcement learning. Deep learning takes advantage of yet another step change in compute capabilities. Deep learning models are typically compute-intensive to train and much harder to interpret than conventional approaches.

In a deep neural network, data inputs are fed to an input layer of “neurons,” and the output of the neural network is captured in the output layer. The layers in the middle are hidden “activation” layers that perform various data transformations. The number of required layers generally (but not always) increases with the complexity of the use case. A single node in an artificial neural network takes input signals and produces an output, as shown in Figure 11.

Figure 11: A single node in a deep learning neural network.

A deep learning neural network is a collection of many nodes. The nodes are organized into layers, and the outputs from neurons in one layer become the inputs for the nodes in the next layer.

Figure 12: Single nodes are combined to form the input, output, and hidden layers of a deep learning neural network.

In the network shown above, each layer is fully connected to the previous layer and the following layer. Each layer enables complex mathematical transformations to be represented. Deep neural nets typically have multiple (more than two or three) hidden layers. Deep neural networks initially found broad application in the field of computer vision.
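The node and layer mechanics described above are straightforward to write out: each node computes a weighted sum of its inputs plus a bias and passes the result through a nonlinear activation, and a layer is a stack of such nodes feeding the next layer. A minimal pure-Python sketch with made-up weights:

```python
def relu(z):
    """A common nonlinear activation: pass positives through, clamp negatives to 0."""
    return max(0.0, z)

def node(inputs, weights, bias):
    """A single neuron: weighted sum of inputs plus a bias, through an activation."""
    return relu(sum(w * x for w, x in zip(weights, inputs)) + bias)

def layer(inputs, weight_rows, biases):
    """A fully connected layer: one node per (weight row, bias) pair."""
    return [node(inputs, w, b) for w, b in zip(weight_rows, biases)]

# Tiny network with made-up weights: 3 inputs -> 2 hidden nodes -> 1 output node.
x = [1.0, 2.0, 3.0]                                    # input layer
hidden = layer(x, [[0.1, 0.2, 0.3], [0.4, 0.5, 0.6]], [0.0, 0.0])
output = node(hidden, [1.0, 0.5], 0.5)                 # output layer
print(hidden, output)  # roughly [1.4, 3.2] and 3.5
```

Training consists of adjusting the weight and bias values so that the network's outputs match known examples; the structure above stays the same.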
A specific type of deep neural network – convolutional neural networks (CNNs) – are used broadly today in image and video processing. Convolutional neural networks are not fully connected as in the previous example, but instead apply convolutional functions at each layer and transfer the results to the next layer. They simulate how visual neurons work in animals.
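The core operation of a convolutional layer, sliding a small kernel across the input and taking a weighted sum at each position, can be sketched in a few lines. This is a naive "valid" cross-correlation in the deep learning convention (no kernel flip), ignoring strides, padding, and channels:

```python
def conv2d(image, kernel):
    """Naive 2D 'valid' convolution: slide the kernel over the image and
    take the elementwise product-sum at each position."""
    ih, iw = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(ih - kh + 1):
        row = []
        for j in range(iw - kw + 1):
            row.append(sum(image[i + di][j + dj] * kernel[di][dj]
                           for di in range(kh) for dj in range(kw)))
        out.append(row)
    return out

# A 2x2 kernel of ones over a 3x3 image of ones gives a 2x2 map of 4s.
print(conv2d([[1, 1, 1], [1, 1, 1], [1, 1, 1]], [[1, 1], [1, 1]]))
# -> [[4, 4], [4, 4]]
```

In a real CNN, many such kernels are learned per layer, and their outputs become the "feature maps" fed to the next layer.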
Cybercrime is evolving. Do you remember back when our biggest fear was that our computer would get a virus? Fast forward ten years later, and that’s the least of our concerns. Today, there are countless cybercriminals out there with the power to shut down entire industries. All kinds of important information is stored online, which means cybercriminals can cause serious damage. And that is not even the most terrifying threat – cyberterrorism is. Criminals can now cause massive damage to states, cities, and corporations by stealing valuable data. Here are the top 10 trends in cybercrime:

1. More than 6 billion fake emails are sent every day. Email scams are one of the most frequent cybercrime methods. How many widows from Nigeria have asked you for help so far? That’s what we are talking about. According to EY cyber stats and facts, around 6.5 billion spam emails are sent every day.

2. More than a third of URLs in the world are considered unsafe. The internet is home to billions of websites. It’s no wonder so many of them are used for cybercrime. The Webroot Threat Report says that more than a third of all websites (35%) are unsafe. URLs are divided into high-risk, moderate-risk, and low-risk categories.

3. Phishing is the third most common cause of data breaches. Phishing campaigns are easy to run, explaining their frequency. The Verizon 2018 Data Breach Investigation report explains that phishing is one of the most common attack methods.

4. Nine out of 10 attacks start with a phishing email. These attacks rely on prolonged, personalized communication with a victim so criminals can obtain credentials. These attacks are made to target larger enterprises. This makes it different than standard phishing attacks sent out to hundreds of addresses, claims Cofense.

5. Almost 60% of all computers in China are infected by malware. According to Panda Security, not only is China the main contributor to cybercrime, it is also its biggest victim.
China is the country most infected by malware in the world. In Europe, Turkey holds this title, with more than 40% of all PCs infected.

6. American companies lose $27 million annually from cyber attacks. With the world’s strongest economy, the United States is the ideal target for cyber attacks. The Cost of Cyber Crime study reveals that US companies lose $27 million every year due to cyber attacks.

7. A business is targeted with ransomware every 14 seconds. What’s worse, half of successful ransomware attacks affect over 20 company devices, on average.

8. More than half of fraudulent activity happens on mobile phones. Mobile platforms are becoming the main vehicle for fraudulent activity, statistics show.

9. More than 10,000 Androids are infected every day. New malware pops up every minute. Cybercrime statistics show that more than 11,000 instances of malware appear every day – a 40% increase from 2017.

10. This year, 300 billion passwords will be in use, according to research by Cybersecurity Ventures and Thycotic. Can you imagine protecting 300 billion passwords? There is surely reason to worry when we know that the most common password is 123456. This tells us that great improvements in cyber education are needed on all levels.

One of the biggest problems is the lack of effective methods for verifying your identity online. There will always be risks of fraudulent activity as trends in cybercrime continue to develop. But staying well-informed and educated can help prevent these attacks.
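The "123456" problem is exactly what even the simplest password-hygiene checks catch. A toy checker follows; the blocklist and rules are illustrative, not any published standard:

```python
# Tiny illustrative blocklist; real checkers use lists of millions of leaked passwords.
COMMON = {"123456", "password", "qwerty", "111111"}

def weak(password):
    """Flag passwords that are blocklisted, short, or use one character class."""
    if password.lower() in COMMON or len(password) < 8:
        return True
    kinds = sum([any(c.islower() for c in password),
                 any(c.isupper() for c in password),
                 any(c.isdigit() for c in password)])
    return kinds < 2  # require at least two character classes (illustrative rule)

print(weak("123456"))        # True  (blocklisted and short)
print(weak("correcthorse"))  # True  (only one character class)
print(weak("Tr0ub4dor&3"))   # False
```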
Intelligence at the edge is exciting. The edge allows devices to compute and analyze data closer to the user as opposed to a centralized data center far away, which benefits the end user in many ways. It promises low latency as the brains of the system are close by, rather than thousands of miles away in the cloud; it functions with a local network connection rather than with an internet connection, which may not always be available; and it offers a stronger guarantee of privacy because a user’s information is not transmitted and shared to remote servers.

We will soon be able to process data closer to (or even inside) endpoint devices, so that we can reap the full potential of intelligent analytics and decision-making. But the computing power, storage and memory required to run current AI algorithms at the end point is hampering our ability to optimize processing there. These are serious limitations, especially when the operations are time-critical.

To make intelligence at the edge a reality, being able to understand, represent and handle context is most critical. What does that mean? It means that we give computing systems the tools to identify and learn what is needed, and only what is needed. Why generate and analyze useless or low-priority data? Capture what is needed for the purpose required and move on. Intelligent machines at the edge should be able to “learn” new concepts needed to perform their tasks efficiently, and they should also be able to “systematically forget” the concepts not needed for their tasks. Humans learn contextually. Computer systems at the edge don’t – at least, not quite yet. But when they can, the power of AI and machine learning will be transformational.

The “Edge” of innovation

There are many definitions of context.
Among other things, context can be relational, semantic, functional or positional. For our discussion we will use this definition: “A system is context-aware if it uses context to provide relevant information and/or services to the user, where relevancy depends on the user’s task.”

Here’s a simple example in which object recognition is strongly influenced by contextual information. Recognition makes assumptions regarding object identities based on their size and location in the scene. Consider the region within the red boundary in both images. Taken in isolation, it is nearly impossible to classify the object within that region because of the severe blur in both images. […]
As part of National Cyber Security Awareness Month, or NCSAM, the National Cyber Security Alliance is advising all computer users to “Protect IT” by taking precautions such as updating to the latest security software, Web browser and operating system.

The nonprofit public-private partnership, which works with the Department of Homeland Security as well as private sector sponsors, including Symantec and Microsoft, advised computer users on ways to protect their personal data and information, as well as how to use WiFi safely.

Protect IT is the third pillar of the NCSA’s overarching message around this month’s awareness program, which focuses on key areas related to citizen privacy, consumer devices and e-commerce security. Outreach programs such as this one call upon consumers as well as businesses to take responsibility for protecting electronic data.

“National Cyber Security Awareness Month is an opportunity to advocate for informed policies and business models,” said Jim Purtilo, associate professor in the computer science department at the University of Maryland.

“While it is always in order for citizens to take responsibility for their own safety, that task sure would be easier if businesses and agencies shouldered a fair share of the liability for tech tragedies,” he told TechNewsWorld.

“Today companies have every incentive to gamble with cheap designs and sketchy practices; the market for clever tech applications is great, and the occasional exploit, accident or spill is a small cost of business,” warned Purtilo. “The impact to some consumer might be life altering, but at the end of that day the executive or official who made risky decisions will get to go on with his life. Better cyber designs and practices are known today, and policy reforms would offer greater incentive to invest in them,” he said.

Download and Update

Outdated software continues to be a major issue when it comes to basic cybersecurity today — and ironically one of the easiest things to address.
Consumers and businesses of all sizes too often fail to make regular updates that can plug security holes.

It isn’t just operating systems and antivirus programs that need to be updated. Older browsers, and even older multiplayer games, can also present issues, as each of these can be exploited by tech-savvy hackers. The same is true of virtually all programs on a computer, tablet or phone. In other words, every piece of software that can be upgraded or updated should regularly be patched to address potential weaknesses.

“Third-party code is an area that has received little attention, even though it impacts consumers and the businesses that serve them,” noted Usman Rahim, digital security and operations manager at The Media Trust, a cybersecurity research firm.

“Any business that has a website, an app, or a platform relies on a bevy of known and unknown third parties who have access to valuable user information,” he told TechNewsWorld.

“That access isn’t always authorized by the website or app owner,” Rahim added. “Unless that owner has the right expertise and tools, they won’t have any clue who is running code on their site and what that code does to their users.”

Protect IT – Update the Software

There are things that all users should be doing, and one of the easiest is also one that is often done the least often. That is updating to the latest version of security software.

“Your security software, antivirus and antimalware is only as good as its latest update,” said Ralph Russo, director of the School of Professional Advancement Information Technology Program at Tulane University.

“As malicious software is discovered on an ongoing basis, security software companies update their security definitions daily — or more — to recognize these new threats and counter them,” he told TechNewsWorld.

To take advantage of this, security software needs to be kept current through updates.

“It is equally important to update your computer or device operating system — Windows, Android, iOS, etc.
— and devices including routers, printers and other digital equipment, on an ongoing basis to close vulnerabilities,” Russo added.

“Vulnerabilities are flaws in computer systems and devices that leave them vulnerable to attack,” he noted.

Oftentimes these vulnerabilities can be discovered months or even years after a system — software or hardware — has been in production.

“Software and digital device companies develop fixes to close these vulnerabilities and then release them as software patches and fixes,” explained Russo.

“Downloading and installing these updates means that you are now protected from vulnerabilities that are known by the manufacturer or developers,” he said.

Failing to update the software or hardware can leave the system open to older, even known, attacks. Also, it isn’t just the software, but much of the hardware around the house that poses risks.

“Most people don’t update their home routers’ or Internet of Things devices’ embedded software,” Russo pointed out. “However, any software-controlled device can have a vulnerability, including your home router. Visit your home router manufacturer’s website and check. Newer routers allow you to check and install router updates right from the router homepage.”

Protect IT – Staying Safe on Public WiFi

Today the connected world is very much wireless rather than wired, but public WiFi and mobile networks aren’t always sufficiently secure or hardened. Users need to keep this in mind when checking email at a coffee shop or working in a hotel room. Wireless networks simply do not offer the same level of protection as the more secured office or even home network.

“When using WiFi in public — including coffee shops, airports, hotels — you should use a reliable virtual private network,” said Tulane’s Russo.

VPN software encrypts your transactions and routes them through the VPN servers, and users can connect to a VPN via a reliable app before performing more personal actions that should require a heightened level or layer of security.
“This will result in your actions not being visible on the public WiFi network, because it is encrypted,” Russo told TechNewsWorld.

“However, remember that all your traffic is then going through the VPN service, meaning you should find a VPN solution you trust, or one that has high ratings for policies — no logging — and trustworthiness,” he added. “You are never truly invisible and untraceable on the Internet, but a good VPN can help.”

When on the go, it isn’t just what can be seen online either.

“When using WiFi, the Internet and applications in public, be wary of ‘over the shoulder’ watchers, including cameras trained on your computer or device,” said Russo.

Secure IT – Home/Office WiFi

Many home and office WiFi systems are not secure enough to dispel concerns.

“Home and business WiFi networks should always be encrypted using WPA2 security, as opposed to WEP or WPA, and require a passcode to join,” said Russo.

“Some folks consider hiding their network name (SSID) so people ‘wardriving’ (searching for WiFi networks) won’t see your network name pop up as an option,” he added.

Taking simple steps such as changing the default username and password of the router is advisable, too.

“Failing to do so will mean that anyone who has bought the same model router would be able to log into your router’s network settings and change them to their advantage,” Russo warned.

“When using your secure home network, you should consider adding a guest network to offer Internet on a limited one-time basis by changing login credentials, without impacting your main WiFi credentials,” he suggested.

“People should also create a separate network for your ‘Internet of Things’ devices, like remote garage door openers, TV Firestick/Chromecast, thermostats and security cameras,” said Russo. “This will segregate the IoT devices, and their sometimes-shaky security, from your home computing, which should remain on its own WiFi network.”

Protect IT – Keep Data Safe

It isn’t just personal data that is at risk.
As many healthcare providers, retail companies, and even municipalities have learned all too well, cybercriminals often seek credit card and other personal information and data from customers and clients.

“At the high level, businesses should employ data protection best practices by encrypting data at rest, when it is sitting in databases; data in transit, or moving over a network; and data in use, which is actively being accessed,” said Russo.

In addition, networks should be segregated logically to enforce “need to know” access to guard against an inside threat, and firms should implement a “defense-in-depth” approach to security, which can ensure that hackers that gain initial access to the business network do not also gain access to its most sensitive information.

Companies also should ensure “physical security around technology and systems, as physical access to systems defeats many cybersecurity measures,” added Russo.

“When it comes to developers and network administrators, it’s important to keep security in the front seat,” suggested Tulane’s Fox. “It doesn’t matter if you have a highly available and performant (optimal) solution if it is not secure. Every software solution needs to be designed to be secure by design, private by design, and data localized by design.”

Protect IT – Insider Threats

Of critical importance in any approach to cybersecurity is the human element. In many cases hackers aren’t as tech-savvy as movies and TV shows suggest. Instead it is human error, including the use of weak passwords and other bad practices, that is at fault.

“Insider threats account for the majority of mishaps and breaches,” said The Media Trust’s Rahim.

“Some of these mishaps are unintentional and directly result from employees’ lack of training in cybersecurity basics,” he added.
Many attackers use phishing campaigns to steal credentials and other sensitive information, and if employees are trained to watch out for these attacks, the threat can be neutralized before any data is compromised.

“All employees should receive at least basic cybersecurity training, since insider threats remain the most prevalent yet receive the least executive attention and priority,” said Rahim.

“Safety practices should be things we know about but don’t need to obsess over when they easily fit into our daily lives,” said University of Maryland’s Purtilo. “We know many ways to protect people and systems.”
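Much of that basic training comes down to recognizing suspicious link shapes: raw IP addresses as hosts, "@" tricks that hide the real destination, and long lookalike subdomain chains. A toy heuristic along those lines (the rules are illustrative and no substitute for a real filter):

```python
import re
from urllib.parse import urlparse

def looks_suspicious(url):
    """Very rough red flags taught in security-awareness training (illustrative)."""
    host = urlparse(url).hostname or ""
    if re.fullmatch(r"\d{1,3}(\.\d{1,3}){3}", host):
        return True                      # raw IP address instead of a domain name
    if "@" in url.split("//", 1)[-1]:
        return True                      # user@host trick hides the real host
    if host.count(".") >= 4:
        return True                      # long lookalike subdomain chains
    return False

print(looks_suspicious("http://192.168.10.5/login"))              # True
print(looks_suspicious("http://paypal.com@evil.example/verify"))  # True
print(looks_suspicious("https://www.example.com/account"))        # False
```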
This tutorial is part of our SAP MM course and it provides an overview of the SAP MRP process. MRP stands for Materials Requirement Planning, and it is one of the most important functions of the SAP ERP system. This tutorial will help you understand the SAP MRP process, its outcome, and the levels of planning.

Materials Requirement Planning (MRP)

The objective of MRP is to ensure material availability for the requirements. Requirements could be of two types – internal requirements and external requirements. External requirements originate from customer requirements which are entered into SAP as a sales order (for example). Internal requirements are to manufacture components in the same plant which could be supplied as components to finished goods or to satisfy the customer requirements. SAP MRP checks the stock level of the respective material and generates procurement proposals or planned orders which could be either converted to purchase requisitions or production orders based on the MRP settings in material master records.

Master Data for MRP

The following master data is required to carry out the SAP MRP process:

- Material master
- Bills of material
- Work center (in-house production)
- Routings (in-house production)
- Demand management
- Sales and distribution (if required)

For the SAP MRP process to be carried out, the material master needs to be maintained accordingly. The material master has several views related to MRP, viz. MRP 1 to MRP 4. The MRP 1 view has fields like purchasing group, plant-specific material status, MRP procedure, and lot size data. In the material master MRP 2 view, fields like procurement type, special procurement, scheduling margin key, and planned delivery time are located. There is a separate tab available for net requirements calculation which includes safety stock and minimum safety stock, which help to calculate the required quantity at the right time.
The material master MRP 3 view includes fields like strategy group, which is used to decide between Make-To-Order (MTO) and Make-To-Stock (MTS) scenarios. Also, the Availability check field is available, which is used to maintain the checking rule for material availability and to determine available-to-promise dates and quantities. In the material master MRP 4 view, fields for BOM explosion and dependent requirements are available. There is a separate tab available for Repetitive Manufacturing which helps to maintain the REM profile that is used to enter and record transactions for repetitive manufacturing. If a material is subject to in-house production, then the work scheduling view needs to be maintained. If a material is subject to MRP planning, then all four MRP views need to be maintained in the material master.

SAP MRP Process Flow

The SAP MRP process flow starts with a customer requirement, which originates from the sales department or marketing department. The customer requirements are entered in SAP via sales orders. The customer requirements are entered as demand in the demand management system. The output of demand management is Planned Independent Requirements, which are used in long-term planning. Now, materials requirement planning comes into the picture. The input to MRP comes from sales orders and planned independent requirements, if applicable. When the MRP run is carried out, a planned order or purchase requisition is generated based on the planning run settings. A planned order can be converted into a purchase requisition (PR) or a production order. A purchase requisition is for external procurement, and a production order is for in-house production. SAP will then convert all the dependent requirements of the planned order into reservations in the production order. When a planned order is created for external procurement, it needs to be reviewed by planners, and if required, planners convert the planned order to a purchase requisition.
Otherwise, the purchase requisition is generated automatically and is available for purchasing. In the SAP MRP process, the system calculates the net requirements while considering available warehouse stock and scheduled receipts from purchasing and production. During the MRP process, all levels of the bill of material are planned. The SAP MRP planning run can be executed at plant level or MRP area level, and for a single material or a material group. The planning run can be total planning for a plant, single-item single-level planning, or single-item multi-level planning. The SAP system creates procurement proposals, which could be planned orders, purchase requisitions, or schedule lines, based on the planning run settings. The planning file entry contains details of the materials that are to be included in the MRP run.

The SAP MRP planning run type depends on the processing key on the MRP run screen. There are three types of processing key:

- NETCH – Net change planning in the total horizon.
- NETPL – Net change planning in the planning horizon.
- NEUPL – Regenerative planning.

SAP MRP Planning Run

The figure above shows the planning run at material level. The material and the respective plant are entered. Several fields come under the MRP control parameters:

- Processing key – net change in the planning horizon.
- Create Purchase Requisition – This field has an option for creating purchase requisitions or planned orders.
- Scheduling Agreement (SA) delivery schedule lines – This field has an option for creating schedule lines / no schedule lines. To create a scheduling agreement, there should be settings maintained in a source list.
- Create MRP list – The MRP list is created and displayed when the planning run is executed and saved.
- Planning mode – This field controls whether to run normally, delete and recreate all planning data, or re-explode the BOM and routing if changes have been made to those master data.

Transactions for SAP MRP Planning Run

Transaction code: MD01
SAP Menu -> Logistics -> Production -> MRP -> Planning -> Total Planning -> Online

With this transaction code, we can carry out a planning run at plant level. As it can take a long time for the output to be produced, it could be executed as a background job.

MD02 – This transaction code is used to execute a planning run for a single material, exploding it across multiple BOM levels. In a single-level run, by contrast, MRP is carried out only for the material and the first level of its BOM; the other components are not included in planning.

With transaction code MD04, we can get the latest stock/requirements list for a particular material, plant-wise. Enter the material and plant; this gives you the current stock with requirements or receipts.

Did you like this SAP MM tutorial? Have any questions or comments? We would love to hear your feedback in the comments section below. It’d be a big help for us, and hopefully it’s something we can address in future improvements of our free SAP MM tutorials.
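Stepping back from the transaction codes, the net requirements calculation at the heart of the planning run (gross requirement versus warehouse stock, scheduled receipts, and safety stock) can be sketched as below. The formula is the textbook shape; actual SAP results also depend on the MRP type, lot-sizing procedure, and other material master settings.

```python
def net_requirement(gross_requirement, warehouse_stock, scheduled_receipts,
                    safety_stock=0):
    """Quantity MRP still has to procure (planned order / purchase requisition).

    Available = stock on hand + open receipts - safety stock; the shortfall
    against the gross requirement becomes the procurement proposal quantity.
    """
    available = warehouse_stock + scheduled_receipts - safety_stock
    return max(0, gross_requirement - available)

# Sales order for 500 PC, 120 in stock, 200 already on order, 50 safety stock:
print(net_requirement(500, 120, 200, 50))  # -> 230 (proposal for 230 PC)
# A fully covered requirement creates no proposal:
print(net_requirement(100, 80, 40))        # -> 0
```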
Access to parks and green spaces has been one of the defining factors of an individual’s experience and wellbeing during the lockdown. The value placed on green places has never been higher. And I believe our renewed desire to ‘get out’, to explore, to exercise and to be immersed in nature is here to stay. Everyone deserves to be positively affected by good design of public places, parks and green spaces. So, how can we realise our need for more green space with the ever-increasing pressure on our public places? One way is to better utilise the digital tools we are already engaging with in design, construction and infrastructure planning. We are in a unique position to innovatively use BIM and digital tools to increase access to and improve the quality of our green spaces.

BIM and 3D visuals

BIM by its nature encourages us to design, work and collaborate in 3D. BIM readily allows us to create models that are more accessible to a range of stakeholders, including the general public and end users of many of the environments we create. BIM utilises data within the design process, from ecological surveys to topographical information, to provide an accurate interpretation of the space. This data can be creatively used to develop inspiring landscapes for our local communities to enjoy.

It is difficult to bring 2D plans to life, but by using 3D visualisations and augmented reality we can inform and engage a wider audience at public consultations. This gives people a more realistic understanding of how an urban environment will be implemented and the opportunity to provide more meaningful feedback on how that space could be improved. During lockdowns, this has become even more important as digital models and ways of engagement have been essential to allow projects to move forward without face-to-face consultations.
BIM has created a more efficient way to communicate designs but, more importantly, to bring the design process to life and to promote more open and engaging conversations. To realise the full power of BIM and 3D, we need to look at the integration of the visualisation and modelling tools we use across disciplines. Too often, an engineer will use one platform on a single project, an architect another and a landscaper yet another. And while a Revit model may be effective for a single building, it doesn’t account for the space around the building to the extent it could. There are powerful tools like NScape that allow you to pull together technical models that are also accessible to stakeholder and community groups. Our aim should be to move towards a common data platform between disciplines that is accessible to all who are working on, or impacted by, a project. Following a cohesive BIM strategy can help deliver an integrated, multi-discipline design that considers all factors, particularly the experience of the end user.

Data-based decision making

We can also be much better at using data to inform the location, type and function of green space, as well as its management and maintenance. The London Planning Datahub that Atkins has created with the GLA is a great example of this – it allows for close to real-time planning information to be made accessible to all 36 Greater London planning authorities. This type of system could help us model a more equal provision of infrastructure across a city. For example, identifying locations where pocket parks are most needed and viable and then replicating their design to deliver them faster to disadvantaged communities. Our urban environments are built up of different layers of information, from pedestrian movement analysis and climate readings to historical records.
There is a narrative running through this: segments of information and data that can be utilised to optimise the design process, construction, maintenance and the user experience of our urban environments and green places. BIM provides solutions for integrating new features into existing assets. Take a typical city centre street: the pavements cover a complex mix of utilities, from electricity and water supplies to fibre-optic broadband and gas. Add the need for different traffic modes, drainage, street furniture and street trees, along with the differing demands of users, and there is a truly complex system. Through digital twins, we can provide a virtual representation of our cities to establish how to interface with each of these assets. Such a model would identify the optimum locations for street greening – where there is space for new trees and where they have the opportunity to thrive. This can be correlated with data on the societal demand for additional space and inform positive change across our towns and cities. VU City have started to map this with 3D models for planning and design. Taken further, the functions of our roads, crossings and movement can be integrated with our green infrastructure, and city-wide strategies developed.

An economic case

Underlying all of this is a strong business case for green space. The Natural Capital study for the GLA showed that for every £1 spent on parks there are £27 of benefits, covering everything from property prices to water management and drainage, biodiversity and wildlife, improvements to air quality and benefits to our health and wellbeing.
The future of digital greening sits beyond the traditional benefits of driving efficiency and deliverability – it provides an opportunity to deliver greener and more sustainable landscapes in the heart of our cities. The strong economics behind green space, not to mention the added social value, mean our investment is more than justified, as is our incentive to bring in new technology.
<urn:uuid:185f8ba9-062c-46a6-a609-0bc4f343458d>
CC-MAIN-2022-40
https://construction.cioreview.com/cxoinsight/digital-greening-nid-32936-cid-25.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335124.77/warc/CC-MAIN-20220928051515-20220928081515-00739.warc.gz
en
0.938282
1,095
2.765625
3
A mobile access control system is an access control system that uses smartphones, tablets, and smartwatches as credentials to allow access to a restricted area. It is a great alternative to traditional physical access card systems that use keycards, fobs, and badges to gain access. Access control systems have been around for a very long time, and with the advent of IoT and smartphones, access control technology has evolved a lot. According to a recent Gartner study, 20% of organizations now use smartphones in place of traditional physical access cards. In this post, we will look at how a mobile access control system works, the advantages of mobile credentials, and what to keep in mind while choosing a mobile access control system.

How does a mobile access control system work?

Like any physical access control system, a mobile access control system has five main components: access point, personal credentials, reader, control panel, and access control server. The major difference between a physical access control system and a mobile access control system is that a mobile access control system uses mobile credentials. Mobile access control is powered by Bluetooth (BLE) and NFC technology: a secure connection is established between the reader and the smartphone. As almost everyone has a smartphone, this system can be easily adopted in most organizations.

What are the benefits of mobile credentials?

Mobile credentials offer a range of advantages over traditional physical access credentials that use keycards, fobs, badges, PINs, or biometrics. Have a look at the three main benefits of mobile credentials.

Mobile credentials are more secure than keycards, fobs, and badges, for several reasons. The first is that mobile phones cannot be cloned; they are among the most secure devices on the market. If you lose a keycard, it can be cloned easily, and someone unauthorized can gain access to the facility.
Whereas in the case of smartphone credentials, a user needs to unlock their phone first. In today's digitally advanced world, almost every smartphone comes with a passcode, fingerprint, or Face ID lock, making it extremely difficult to breach security when a mobile access control system is used. Second, in highly restricted areas you can add two-factor authentication, requiring a user to unlock their phone via thumbprint or Face ID before their mobile credentials can be used. Third, all communication between the mobile access control system and the mobile credentials is end-to-end encrypted.

Mobile credentials are also more convenient than physical credentials. Your employees need not go through the hassle of carrying keycards, badges, or fobs. Forgetting or losing a keycard is quite common, but it is far less likely that someone will lose their phone. The employer, in turn, no longer has to manage an inventory of keycards. Bluetooth Low Energy does not require manual pairing, and the connection between the reader and the phone can be established over longer distances, as BLE range can be meters instead of inches. Mobile access control systems also come with a range of features: some readers on the market allow users with a mobile credential to open a door hands-free with a motion-activated Wave to Unlock feature, while other systems support unlocking by tapping the entry in the app, via an Apple Watch or tablet app, or by simply touching the reader, so you do not even have to take your phone out. Also, mobile access control is based on cloud storage and IoT: all access permissions can be regulated in the cloud by specialized security agents, and administrators can easily manage access from their own devices.
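The end-to-end encrypted exchange between reader and credential mentioned above is, at its core, a challenge-response protocol. The sketch below illustrates the idea under a simplifying assumption of a pre-shared per-credential key; real systems provision keys through the vendor's cloud service and layer this over the BLE/NFC transport, and all names here are illustrative rather than taken from any vendor SDK.

```python
import hashlib
import hmac
import secrets

# Illustrative only: a real deployment provisions per-credential keys
# via the vendor's cloud service, never a hard-coded constant.
CREDENTIAL_KEY = secrets.token_bytes(32)

def reader_issue_challenge() -> bytes:
    """Reader sends a fresh random nonce to the phone over BLE/NFC."""
    return secrets.token_bytes(16)

def phone_sign_challenge(key: bytes, challenge: bytes) -> bytes:
    """Phone answers with an HMAC of the challenge under its credential key.
    (The phone would first require unlock via passcode or biometric.)"""
    return hmac.new(key, challenge, hashlib.sha256).digest()

def reader_verify(key: bytes, challenge: bytes, response: bytes) -> bool:
    """Reader recomputes the HMAC and compares in constant time."""
    expected = hmac.new(key, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

challenge = reader_issue_challenge()
response = phone_sign_challenge(CREDENTIAL_KEY, challenge)
door_opens = reader_verify(CREDENTIAL_KEY, challenge, response)
```

Because a fresh nonce is issued for every unlock, a captured response cannot be replayed later, which is one reason mobile credentials resist the cloning attacks that plague keycards.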
A mobile access control system is also a cost-effective solution. There is no need for employers to invest in purchasing, issuing, and replacing keycards, fobs, and badges. Almost everyone working in an IT organization has a smartphone; all you need is software to connect those smartphones to the mobile access control system, since the users have already invested in the hardware. Furthermore, a mobile access control system differs from a physical access control system in that it uses cloud storage instead of an on-site server. All credentials are managed in the cloud, so there is no need for a dedicated physical server on site, which eliminates the cost of licensing, maintaining, and servicing on-premise servers.

What to keep in mind while choosing a mobile access control system?

There are plenty of things to keep in mind while choosing a mobile access control system. Here are the top three that will help you make the right choice:

The first and most important thing you need to ensure is that your mobile access control system is secure. Look for a system that comes with an ISO 27001-certified system architecture. Most good mobile access control systems come with 256-bit encryption on the BLE or NFC connection.

Features of the system

Mobile access control systems come with a range of features, and it is up to you to decide which you need. For example, do you need a camera on the reader? A mic? Cloud-based access control? The price of the system varies based on the features you want. Make sure the mobile access solution has its own app, an SDK to integrate mobile access features into your existing app, and support for the advanced features you care about.

The cost of ownership

This is a very important factor when choosing a mobile access control system, and many factors can influence the cost.
Look for a system that has a lower cost of ownership in the long term and can easily scale to meet the growing needs of your organization.
<urn:uuid:7f1f9707-4e86-47ae-b567-73f8247cb0e1>
CC-MAIN-2022-40
https://mgiaccess.com/mobile-access-control-what-are-mobile-credentials/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335355.2/warc/CC-MAIN-20220929131813-20220929161813-00739.warc.gz
en
0.941341
1,208
2.78125
3
Why are Healthcare Organizations Targeted for Cyber Attacks?

It may be common knowledge that healthcare organizations are often targeted by hackers, but people may not always know why. The short answer is that the information stolen from healthcare organizations can be sold by cybercriminals at a higher dollar value than most other forms of data.

Targeting of Healthcare Organizations

The real key to why healthcare organizations are targeted more than other types of companies is that the information they possess is much more valuable on the black market. Protected Health Information, or PHI, is the information found in a person's health record that could be used to identify them. This could be anything from their full name, address, social security number, or medical record number to any other form of PHI. Healthcare records containing PHI are among the most valuable and sought-after types of information that hackers look for, which leads to a higher selling price on the dark web. There are three key reasons that medical records and PHI are so valuable to cybercriminals: the higher selling price, the long shelf life, and the multiple uses for the same data.

Why are Healthcare Records so Valuable?

According to the 2018 Trustwave Global Security Report, a person's healthcare record can be sold for $250.15, whereas a single social security number would fetch only $0.53 in comparison. Healthcare records are known to have a long shelf life when sold because, unlike with a stolen credit card, those affected are unlikely to realize quickly that the theft has happened, so the information can be spread and sold further before detection. Beyond the high selling price on the dark web, health records can also be used for multiple purposes: purchasers can fill prescriptions, receive treatment, or even make false medical claims using a stolen record.
Each of these actions could have significant, costly effects on the patient whose information is taken, but also on the healthcare industry more broadly. We can now see the ways in which PHI is valuable on the dark web, but we must also keep in mind that a patient's information is even more valuable to the patient and to the provider they have trusted to protect it. This should remind healthcare providers that their susceptibility to cyber attacks makes their dedication to cybersecurity and HIPAA compliance all the more important.

Cybersecurity in Healthcare

Increasing Number of Attacks

As the years have passed and more and more of the healthcare industry's operations have moved to a digital format, the number of breached healthcare records has trended up right alongside. Each year Verizon releases data breach reports that tell the story of that year's breaches. Between the 2016 and 2019 reports, the number of data incidents and breaches increased by 200%. The 2020 report shows that these numbers have continued to grow, revealing a 71% increase in the number of breaches this year. With the challenges of COVID-19 and a work-from-home environment, organizations need to be more aware than ever that the PHI they are responsible for must be completely secure and protected.

Data Security and COVID-19

Especially in the middle of a nationwide healthcare crisis, healthcare organizations are at an even higher risk than usual. While healthcare providers are working overtime to take care of COVID-19 patients, some of their attention may be taken away from PHI security. Since the beginning of the pandemic, the FBI has reported about 2,000-3,000 more cybersecurity complaints each day, up from the typical 1,000 a day. Much of this increase can be explained by hackers' desire to obtain COVID-19-related information and their exploitation of these new vulnerabilities to do so.
It is fairly typical that any monumental event within a country spurs a spike in cyber attacks; it has happened with other events in the past, and it is happening again with COVID-19. This crisis in particular has moved most of the workforce to remote work, which has presented a whole new set of challenges to staying HIPAA compliant in a work-from-home environment.

The Future of Cybersecurity

Year over year, we have seen the number of cybersecurity attacks on healthcare organizations continue to increase, and there is no sign that this increase will slow down. Hackers have not only increased their attacks but also regularly reinvented the methods by which they infiltrate organizations, as we can see in the growing prevalence of phishing emails. It is vital that healthcare providers have the necessary policies, training, and technology in place to comply with related data security laws while ensuring that all of this important information is well protected.

How to Protect Yourself from These Attacks

Now that healthcare organizations understand the reasoning behind the target on their back, it is important for these providers to also know how they can prevent an attack. The key for health organizations is their understanding of and compliance with HIPAA, the Health Insurance Portability and Accountability Act of 1996. HIPAA is the legislation that lays out the physical, technical, and administrative safeguards that organizations must follow to keep PHI safe from attacks. These safeguards, which you can read about here, mandate that organizations take steps like implementing workforce training and management, limiting access to facilities or devices that contain PHI, and requiring all data to be carefully encrypted. Due to the high value of PHI, healthcare organizations should regularly assess the steps they are taking to ensure its security.
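As a hedged illustration of one technical safeguard in this spirit (a sketch, not a HIPAA compliance recipe, and not a technique named by this article): direct identifiers such as medical record numbers can be pseudonymized with a keyed hash before data is shared for analytics, so downstream datasets stay linkable without carrying the raw identifier. The key handling and function names below are assumptions for the sketch.

```python
import hashlib
import hmac

# Assumption for this sketch: in practice the key lives in a secrets
# manager, never alongside the data it protects.
PSEUDONYM_KEY = b"load-me-from-a-secrets-manager"

def pseudonymize(identifier: str) -> str:
    """Replace a direct identifier (e.g. a medical record number) with a
    stable keyed hash: records remain linkable across datasets, but the
    raw value is not exposed."""
    digest = hmac.new(PSEUDONYM_KEY, identifier.encode("utf-8"), hashlib.sha256)
    return digest.hexdigest()[:16]

record = {"mrn": "MRN-0012345", "diagnosis": "J45.909"}
safe_record = {**record, "mrn": pseudonymize(record["mrn"])}
```

Pseudonymization like this complements, rather than replaces, the encryption-at-rest and access controls the safeguards above require.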
Although a breach of information is not entirely avoidable, it is important that a healthcare provider takes all the steps possible to lessen the risk and comply with all aspects of HIPAA. For more information on how to understand the risks associated with storing, sharing and maintaining PHI and what steps need to be taken to reach HIPAA compliance, feel free to ask Accountable, your simple HIPAA compliance software solution provider.
<urn:uuid:cb0b7755-108f-46b3-8a25-8bf20a136e7a>
CC-MAIN-2022-40
https://www.accountablehq.com/post/why-are-healthcare-organizations-targeted
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335504.37/warc/CC-MAIN-20220930212504-20221001002504-00739.warc.gz
en
0.962085
1,196
2.6875
3
4 Technology Stories You May Have Missed Last Week

Technology Is Monitoring the Urban Landscape

Technology is changing the urban landscape in the U.S., starting with San Francisco, by automating and remodeling buildings and streets. Parking spaces will be cut down, thanks to Uber, Lyft and self-driving cars making car-sharing easier, and buildings will be remodeled using 3D printing and robots to help with the labor. With high-tech buildings and utilities in place, the data collected can help with automation and predictability. London and Shanghai have already seen great changes in their skylines, thanks to computer-assisted design. Read more about the new urban landscape on The New York Times.

Google's DeepMind A.I. Can Slash Data Center Power Use 40%

Google is reducing the energy used in its data centers by 40% using its DeepMind neural network. The network is able to actively learn about the environment, using methods similar to the human central nervous system, to solve complex tasks. Using historical data about temperatures, power and pump speeds, it has been able to cut down Google's total energy use. This makes a huge impact, since Google's data centers account for 40% of the internet. Read more on CIO.

FBI Needs to Beef Up High-Tech Cyber Threat Evaluations, Says DOJ Inspector General

The FBI's method of identifying major cyber threats, the Threat Review and Prioritization (TRP) process, may not be cutting it anymore, says the Department of Justice Office of the Inspector General. Because the process is conducted annually, it is not timely enough to catch every threat, and it does not prioritize threats in an objective, data-driven manner. However, a new tool out of the FBI Cyber Division may change that: the Threat Examination and Scoping (TExAS) software is able to prioritize threats using specific, real-time data and can find threats more frequently than TRP. Check out Network World for the whole story.

Auto Industry Sharing Info on Cyber Attacks

The U.S.
automotive industry is coming together to share information on cyber threats and eliminate them. The Automotive Information Sharing and Analysis Center includes nearly all companies that sell cars and trucks in the U.S. and is sharing data via a common internet portal to create best practices and guidelines for how secure hardware and software should be built, as well as how to respond to hacking incidents. The group has been operating since January. Learn more at ABC News.
<urn:uuid:7d62368d-4f4f-439d-8ce6-ff73b8f1ea42>
CC-MAIN-2022-40
https://www.dlt.com/blog/2016/07/25/4-technology-stories-missed-week-2
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335504.37/warc/CC-MAIN-20220930212504-20221001002504-00739.warc.gz
en
0.929858
509
2.515625
3
New Training: Describe Relational Data Workloads

In this 6-video skill, CBT Nuggets trainer Ben Finkel covers the structures and objects that are commonly found in relational data storage models. Watch this new Azure training. Learn Azure with one of these courses:

This training includes: 45 minutes of training

You'll learn these topics in this skill:
- Recognizing the Right Workloads for Relational DBs
- Understanding Tables and Fields
- Understanding Queries and Views
- Using DDL and DML
- Querying Azure's Relational DBs

What Makes Up a Relational Database?

A relational database is a type of database that organizes data based on unique attributes. For example, a relational database may represent a data entry by employee ID, name, age, and date of birth. These unique characteristics are how relational databases organize the database structure such that data entries share a common relationship. A relational database is made up of three basic elements: fields (columns), records (rows), and tables. A record is a row of entries that reads from left to right; in this example, it is populated with data for employee ID, name, age, and date of birth. A field is a column that stores one attribute for every entry; the ID field, for instance, is populated solely with ID values reading top to bottom. Lastly, the table brings these together: when you hear "table," think of the full grid of records and fields.
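The table/record/field structure described above, along with the distinction between DDL (defining structure) and DML (manipulating data) covered in the skill, can be sketched with Python's built-in sqlite3 module. The employee table mirrors the running example; the column and view names are chosen for illustration.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# DDL: define the table -- each column declaration is a field.
cur.execute("""
    CREATE TABLE employees (
        employee_id   INTEGER PRIMARY KEY,
        name          TEXT NOT NULL,
        age           INTEGER,
        date_of_birth TEXT
    )
""")

# DML: each INSERT adds one record (row).
cur.executemany(
    "INSERT INTO employees (employee_id, name, age, date_of_birth)"
    " VALUES (?, ?, ?, ?)",
    [(1, "Ada", 36, "1986-01-01"), (2, "Ben", 41, "1981-05-20")],
)

# A query reads records back; a view is a named, stored query.
cur.execute(
    "CREATE VIEW adults AS"
    " SELECT employee_id, name, age FROM employees WHERE age >= 18"
)
rows = cur.execute("SELECT name FROM adults ORDER BY employee_id").fetchall()
```

Here `rows` comes back as a list of one-element tuples, one per matching record, which makes the row/column structure of the result set explicit.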
<urn:uuid:018a93f4-2391-4f6d-a848-e1fca269b1f8>
CC-MAIN-2022-40
https://www.cbtnuggets.com/blog/new-skills/new-training-describe-relational-data-workloads
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337529.69/warc/CC-MAIN-20221004215917-20221005005917-00739.warc.gz
en
0.853773
327
3.109375
3
If you were a kid of the 1980s and 1990s, you probably saved video games for after school. Super Mario Brothers was a great way to get over the trauma of learning multiplication tables, but kids these days may have it a bit better. One 23-year-old teacher is looking to bring video games into school as classroom software, according to the Indianapolis Star. Ben Bertoli, a sixth grade teacher at Danville Middle School, was looking for ways to better motivate his students; he said he didn't feel like he was doing enough to reward the kids for good work. This led him to create ClassRealm, a role-playing-game-style teaching tool that can help motivate kids to do homework and gives them rewards they select. "It's based on role-playing video games like Pokemon or Final Fantasy," Bertoli said to the Star. "The students can gain achievements for doing things around the classroom or . . . can earn experience points for doing different things during the day, like participating in class or leading the class discussion." So far, the teacher has a paper prototype of the game in his classroom, but he is partnering with Country Cotten to help move the game forward. They have enlisted an illustrator to draw avatars for the game and are currently looking to raise $65,000 in startup costs through a Kickstarter fund; thus far, $20,000 has been raised. This video-games-in-the-classroom phenomenon has been spreading across the country and the world. For example, Northern California's NPR station KQED reports that Santeri Koivisto and Joel Levin, two teachers, decided to create their own classroom video game. The game, called MinecraftEdu, allows teachers to tailor individual curriculum to students. "Koivisto and Levin decided to pursue a classroom application after observing students solve complicated problems with their collaboration in the game," the source said, referring to the popular game Minecraft.
“When Koivisto tested Minecraft at a Finnish school, one-third of the 20 teachers in the study later chose to incorporate the game into their teaching.” KQED spoke with New York public school teacher Matt Coia, who said the educational version of the game provides a deeper level of management and control for educators. Are video games in the classroom something you believe will gain more traction over the next few years? Would you care if your children played video games in school? Let us know your thoughts!
<urn:uuid:79a94533-d7b7-4ff7-85ec-2c44b7b97fb3>
CC-MAIN-2022-40
https://www.faronics.com/news/blog/games-in-the-classroom-one-teacher-says-yes
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337529.69/warc/CC-MAIN-20221004215917-20221005005917-00739.warc.gz
en
0.973808
526
3
3
The early 1900s witnessed a massive public shift from traditional horse-drawn carriages to the greatest invention of the 20th century, pre-dating sliced bread. The transition was stark and irreversible.

An age-structured demographic theory of technological change – Scientific Figure on ResearchGate. Available from: https://www.researchgate.net/figure/Transition-from-horse-drawn-carriages-to-petrol-cars-in-the-1920s-data-originally-from_fig1_236202800 [accessed 25 Aug, 2021]

Data centers are the horse carriages of this century; cloud dominance is here to stay. Horse carriages didn't give way to cars easily, but in all honesty it was not a fair fight, and the same holds for data centers attempting to compete with the cloud. There are rumblings in the industry, such as this a16z article, which argues that the cloud is depressing companies' market caps. In reality, this is far from the analytical truth: it has been shown that effective cloud deployment increases companies' capitalizations (Source: McKinsey). Data centers remain the right solution for some use cases. A quant fund is unlikely to put its trading algorithms on a public cloud; some mission-critical applications need a dedicated on-site administrator who can ensure availability and security; and the archival needs of large insurance companies, driven by regulation, do benefit from specialized storage data centers. Still, if business agility is a priority and your focus is on increasing the pace of innovation, the cloud is your answer. Otherwise, born-in-the-cloud businesses will give you a run for your money. The productivity and acceleration of innovation that the cloud unleashes are unparalleled. By delegating all of your undifferentiated heavy lifting to the cloud provider, engineering resources can laser-focus on business innovation. The shorter innovation cycle results in a competitive advantage, which should increase your market cap.
And it is in a company’s best interest to not only adapt, but invest in the cloud to unlock its full potential. My next blog will touch on an essential skill large organizations have to build, taking cues from nature’s mean machines.
<urn:uuid:11e25856-4c0b-451e-9333-204d121c7b85>
CC-MAIN-2022-40
https://cloudwiry.com/cloud-dominance-is-here-to-stay/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334596.27/warc/CC-MAIN-20220925193816-20220925223816-00139.warc.gz
en
0.928457
466
2.515625
3
A simple error message, returned by a server to which a malware sample was trying to connect, revealed to Dell SecureWorks researchers the origin of the RSA attack, says Joe Stewart, the company's Director of Malware Research. Even though the message was seemingly truncated, the pattern could be tied to "HTran", a very old piece of software used to obfuscate the real source or target of an attack by redirecting TCP traffic to alternate hosts. The error message in question occurs because of a coding mistake by the author of the software, a Chinese hacker who goes by the handle "Lion". The good news is that this knowledge can be used by organizations to detect Advanced Persistent Threats targeting their networks – they need only comb through their logs for the error message. They can also track down the IP addresses from which an attack is coordinated, as the researchers did using the malware sample from the RSA attack. In this particular case, they traced the addresses to a number of domains known to be connected to a variety of different APT trojans, mostly located in the People's Republic of China. "It's not surprising that hackers using a Chinese hacking tool might be operating from IP addresses in the PRC," comments Stewart. "Most of the Chinese destination IPs belong to large ISPs, making further attribution of the hacking activity difficult or impossible without the cooperation of the PRC government." He also warns that the signatures proving this type of traffic occurred on a network may soon become obsolete, since the coding bug will likely be fixed by the attackers once this information is widely known. Unfortunately, that means that after a while, organizations will be able to detect only past attacks and not ongoing ones. We can only hope that attackers make this kind of mistake again in the future.
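Combing logs for such a signature might look like the following sketch. Note that the article does not reproduce the exact HTran error string, so the regex below is a placeholder to be replaced with the actual signature from your threat-intelligence source; the function name and sample lines are likewise illustrative.

```python
import re

# Placeholder pattern: substitute the actual HTran error string identified
# by your threat-intelligence source. This regex is illustrative only.
HTRAN_SIGNATURE = re.compile(r"\[SERVER\]connection to \S+ error")

def find_htran_hits(log_lines):
    """Return (line_number, line) pairs whose text matches the signature."""
    return [
        (lineno, line)
        for lineno, line in enumerate(log_lines, start=1)
        if HTRAN_SIGNATURE.search(line)
    ]

sample = [
    "GET /index.html HTTP/1.1 200",
    "[SERVER]connection to 203.0.113.7:443 error",
    "POST /login HTTP/1.1 302",
]
hits = find_htran_hits(sample)
```

Any hit is worth investigating: as the article notes, a match in historical logs may indicate a past intrusion even after attackers patch the bug.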
<urn:uuid:f7984112-88fc-44d3-80e1-10ab74ae27d7>
CC-MAIN-2022-40
https://www.helpnetsecurity.com/2011/08/04/coding-error-reveals-rsa-attackers-operated-from-china/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334596.27/warc/CC-MAIN-20220925193816-20220925223816-00139.warc.gz
en
0.956933
397
2.71875
3
The health of the U.S. economy and the well-being of our citizens rely heavily on secure and resilient critical infrastructure. In the past year, we have witnessed an increasing number of cyberattacks on critical infrastructure entities, including the attacks on the Colonial Pipeline, SolarWinds and JBS, as well as attacks on California and Florida water systems. The list is only growing. It's clear that the government has placed increased focus on protecting critical infrastructure, establishing the Cybersecurity and Infrastructure Security Agency (CISA) in 2018, passing legislation like the Strengthening American Cybersecurity Act of 2022 and issuing the executive order to improve the nation's cybersecurity. Additionally, councils like the President's National Infrastructure Advisory Council and the Homeland Security Advisory Council have been created and include cybersecurity executives who have boots on the ground. Most recently, the Department of State created a federal bureau focused on adding cyber protection to foreign policy initiatives. The Critical Infrastructure Information Act of 2002, which seeks to facilitate greater sharing of critical infrastructure information (CII) between critical infrastructure organizations and government agencies, has not been updated since it was first introduced in the early 2000s. But a lot has changed in the last 20 years, including best practices for protecting critical infrastructure.
As threats evolve and increase for this sector, what else can the federal government do to ensure the security of the nation's critical infrastructure?

How the CII Act of 2002 is being implemented today

Accurate CII is an essential resource for national security efforts to protect critical infrastructure from a variety of hazards, natural disasters, internal threats and direct attacks. But because critical infrastructure is largely privately owned, this information is considered sensitive and proprietary, and is disclosed reluctantly. The CII Act of 2002 established the Protected Critical Infrastructure Information (PCII) Program, which allows public- and private-sector entities to voluntarily submit information. The program seeks to protect U.S. infrastructure by offering protections to validated information and enhancing the flow of information between the private sector and all areas of the government focused on national security. This created a partnership between the government and the private sector to help build protective measures. Through PCII, government agencies with homeland security responsibilities can develop advisories, alerts and warnings for public notification that are timely for state, local and federal governments. PCII enables public and private entities to monitor and work together to design solutions for their unique security needs and assess vulnerabilities. PCII helps in understanding the challenges of protecting critical infrastructure and the security risks the sector faces, aiding recovery efforts and preparedness for critical infrastructure in case of any kind of disruption.

Looking forward for the CII Act of 2002

There's no doubt that critical infrastructure has evolved since 2002, especially with the adoption of modern technologies that open doors for new vulnerabilities.
This evolution requires the federal government to take a close look at the CII Act of 2002 and the PCII Program to ensure they continue to provide the most useful information to effectively protect critical infrastructure. While the CII Act of 2002 has not been updated since it was first introduced, recent mandates from the federal government represent solid steps toward modernizing safeguards within the government, including for critical infrastructure. Most recently, Congress passed the Cyber Incident Reporting Mandate in March 2022, which requires critical infrastructure providers to report security incidents, marking a good step forward in improving rapid information sharing. Companies are often reluctant to share details of cybersecurity incidents affecting critical infrastructure due to privacy laws and regulatory concerns. While it's not a free pass to ignore regulatory obligations, the mandate alleviates worry about unintended exposure of confidential and proprietary information. Other recent initiatives, like the January 2022 zero trust mandate crafted to move the government toward a zero trust approach to cybersecurity and an August 2021 memorandum that provides tiered instruction on logging requirements for federal agencies, bring additional focus to securing critical infrastructure. As global conflict heightens with the Russia-Ukraine crisis, we see the real-world implications of the compounding effect of physical and cyber conflict on critical infrastructure. The conflict highlights the wider humanitarian implications of disruptions in the production and transport of oil, gas, nuclear and traditional energy. When the stakes of an attack change from monetary and financial loss to the health and welfare of human lives, the cost of cyberattacks moves into an immeasurable and morose gray space. In this space, the call to action and the cost to protect shouldn't be calculated in traditional financial transactions.
Successfully defending against the rampant cyber threats the critical infrastructure sector is facing today requires proper preparation and protection. To defend against cyberattacks, the federal government must continue to analyze the current threat landscape for critical infrastructure and take a security-first approach.

Andrew Hollister is the chief information security officer of LogRhythm.
The term Machine Learning (ML) is defined as ‘giving computers the ability to learn without being explicitly programmed’ (this definition is attributed to Arthur Samuel). Another way to think of this is that the computer gains intelligence by identifying patterns in data sets on its own, improving output accuracy over time as more data sets are examined. Since ML can be a challenging solution to implement, we’ve put together some foundational steps to assess the feasibility of building an ML solution for your organization:

1. Identify the Problem Type

Start by distinguishing between automation problems and learning problems. Machine learning can help automate your processes, but not all automation problems require learning.

Automation: Implementing automation without learning is appropriate when the problem is relatively straightforward. These are the kinds of tasks where you have a clear, predefined sequence of steps currently being executed by a human, but that could conceivably be transitioned to a machine.

Machine Learning: For the second type of problem, standard automation is not enough – it requires learning from data. Machine learning, at its core, is a set of statistical methods meant to find patterns of predictability in datasets. These methods are great at determining how certain features of the data are related to the outcomes you are interested in.

2. Determine If You Have the Right Data

The data might come from you, or an external provider. In the latter case, make sure to ask enough questions to get a good feel for the data’s scope and whether it is likely to be a good fit for your problem. Consider your ability to collect it, its source, the required format, where it is stored, but also the human factor: both executives and employees involved in the process need to understand its value and why taking care of its quality is important.

3.
Evaluate Data Quality and Current State

Is the data you have usable as-is, or does it require manual human manipulation before introducing it into the learning environment? A solid dataset is one of the most important requirements for building a successful machine learning model. Machine learning models that make predictions to answer their questions usually need labeled training data. For example, a model built to learn how to determine borrower due dates to improve accurate reporting needs a starting point from which to build an accurate ML solution. Labeled training datasets can be tricky to obtain and often require creativity and human labor to create them manually before any ML can happen.

4. Assess Your Resources

Do you have the right resources to maintain your ML solution? Once you have an appropriate question and a rich training dataset in hand, you’ll need people with experience in data science to create your models. Lots of work goes into figuring out the best combination of features, algorithms, and success metrics needed to make an accurate model. This can be time-consuming and requires consistent maintenance over time.

5. Confirm Feasibility of the ML Project

With the four previous steps for assessing whether or not ML is right for your organization, consider the responses. Is the question appropriate for building an ML business solution? Is the data available, or at least attainable? Does the data need hours of human labor? Do you have the right skilled team members to carry out the project? And finally, is it worth it – meaning, will the solution have a large impact, financially and socially? It’s important to consider these key questions when assessing whether or not Machine Learning is the right solution for your organization’s needs. Connect with our ML experts today to schedule your free assessment.
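The five questions above can be condensed into a simple screening sketch. This is a hypothetical helper, not part of any vendor tool; the criterion names are assumptions paraphrasing the steps above.

```python
# Hypothetical feasibility screen condensing the five assessment steps.
# Each criterion is a yes/no answer supplied by the assessment team.

def ml_feasibility(answers):
    """Return (feasible, blockers) for a proposed ML project.

    `answers` maps each criterion to True/False, as produced
    during steps 1-5 of the assessment above.
    """
    criteria = [
        "problem_requires_learning",   # step 1: not plain automation
        "data_available",              # step 2: the right data exists
        "data_usable_or_labelable",    # step 3: quality / labels attainable
        "team_can_maintain",           # step 4: data-science resources
        "impact_justifies_cost",       # step 5: financial / social payoff
    ]
    blockers = [c for c in criteria if not answers.get(c, False)]
    return (len(blockers) == 0, blockers)

# Example: strong data, but no in-house data-science team.
ok, blockers = ml_feasibility({
    "problem_requires_learning": True,
    "data_available": True,
    "data_usable_or_labelable": True,
    "team_can_maintain": False,
    "impact_justifies_cost": True,
})
print(ok, blockers)  # False ['team_can_maintain']
```

Any "no" answer surfaces as a blocker to resolve before committing to the project.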
- Electronic Health Record (EHR)

An EHR (Electronic Health Record) or EMR (Electronic Medical Record) is a longitudinal digital record of patient health information generated by one or more encounters in any care delivery setting, available instantly and securely to authorized users. Included in this information are patient demographics, progress notes, problems, medications, vital signs, past medical history, immunizations, laboratory data and radiology reports. EHRs are built to share information with other health care providers, such as laboratories and specialists, so they contain information from all the clinicians involved in the patient’s care. The EHR automates and streamlines the clinician’s workflow. The EHR has the ability to generate a complete record of a clinical patient encounter – as well as supporting other care-related activities directly or indirectly via interface – including evidence-based decision support, quality management, and outcomes reporting.
An Introduction to the Public Cloud and Public Cloud Networking

The public cloud is a data center managed by a Cloud Service Provider (CSP), such as AWS, Azure, GCP, or OCI. The public cloud is a shared infrastructure serving thousands of clients. While this allows CSPs to get their services to the largest number of people, it can also open up potential security concerns.

Although the public cloud is gaining popularity, on-prem data centers still exist. These on-prem data centers are typically for only one client, and are managed by the client themselves. Because of this, on-prem data centers are very secure.

The private cloud is a data center with a software-defined layer, essentially mimicking the public cloud. However, there are lots of differences between the public cloud and the private cloud: although some basic features overlap, such as the use of automation, the services in the private cloud fall behind those of the public cloud. The public cloud provides many advanced services in the IoT space, the container space, and many more areas. The private cloud has more in common with the on-prem DC than with the public cloud.

How Networking Works in the Public Cloud

- The CSPs pick a region to build their DCs.
- Next, they pick an Availability Zone (AZ) located in the region.
- The AZs hold the actual, physical data centers.
- This prevents issues because there are multiple data centers, so one can take over if another has a problem.
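The region/AZ failover idea in the steps above can be sketched as a tiny data model. The region and zone names below are illustrative and not tied to any real CSP.

```python
# Minimal sketch of the region -> availability-zone hierarchy described
# above. Names are invented; a real CSP exposes this through its API.

class AvailabilityZone:
    def __init__(self, name, healthy=True):
        self.name = name
        self.healthy = healthy  # each AZ is a separate physical data center

class Region:
    def __init__(self, name, zones):
        self.name = name
        self.zones = zones

    def pick_zone(self):
        """Return a healthy AZ, failing over if one has a problem."""
        for az in self.zones:
            if az.healthy:
                return az
        raise RuntimeError(f"no healthy AZ left in region {self.name}")

region = Region("us-east-example", [
    AvailabilityZone("az-1"),
    AvailabilityZone("az-2"),
])
region.zones[0].healthy = False           # simulate an outage in az-1
print(region.pick_zone().name)            # az-2 takes over
```

Because the zones are independent physical data centers, losing one does not take the region down.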
By modeling the flow -- of freight, commuters, money or anything else -- across each link of a network, researchers can show where investment will be most beneficial and which projects shouldn’t happen at all.

The American economy is underpinned by networks. Road networks carry traffic and freight; the internet and telecommunications networks carry our voices and digital information; the electricity grid is a network carrying energy; financial networks transfer money from bank accounts to merchants. They’re vast, often global systems -- but a local disruption can really block them up. For example, the I-85 bridge collapse in Atlanta will affect that city’s traffic for months. A seemingly minor train derailment at New York City’s Penn Station resulted in multiple days of travel chaos in April.

As the Trump administration plans to invest hundreds of billions in American infrastructure networks, it will be crucial to identify which elements are the most important to repair or improve. This is not only important for maximizing benefits; it’s also useful in preventing disaster. Is there, perhaps, a telecommunication line that would be particularly damaging if it were destroyed? Or one road through an area that has an especially large role in keeping traffic flowing smoothly?

Patrick Qiang and I are operations management scholars who have developed a way to evaluate network performance and simulate the effects of potential changes, whether planned (like a highway repair) or unexpected (like a natural disaster). By modeling the independent behavior of all the users of a network, we can calculate the flow -- of freight, commuters, money or anything else -- across each link, and how other links’ flows will change. This lets us identify where investment will be most beneficial, and which projects shouldn’t happen at all.
More isn’t always better

It’s very difficult to measure networks’ performance, in part because they are so complex, but also because people use them differently at different times, and because those choices affect others’ experiences. For example, one person choosing to drive to work instead of taking the bus puts one more car on the road, which might get involved in a crash or otherwise contribute to a traffic jam.

In 1968, German mathematician Dietrich Braess observed the possibility that adding a road to an area with congested traffic could actually make things worse, not better. This paradox can occur when travel times depend on the amount of traffic. If too many drivers decide their own optimal route involves one particular road, that road can become congested, slowing everyone’s travel time. In effect, the drivers would have been better off if the road hadn’t been built. We shouldn’t waste time and money building or repairing network links the community would be better without. But how can we tell which elements help and which make things worse?

The best networks can handle the highest demand at the lowest average cost for each trip -- such as a commute from a worker’s home to her office. Evaluating a network means identifying which locations need to be connected to each other, as well as the volume of traffic between specific places and the various costs involved -- such as gas, pavement wear and tear and police services keeping drivers safe.

Once a network is measured in this way, it can be converted into a computerized model where we can simulate removing links or adding new ones in particular places. Then we can measure what happens to the rest of the network: Does traffic get more congested, and if so, by how much? Or, as in the Braess paradox, do travel times actually get shorter when a link is removed? And how much money does a particular project cost to build, and save in time or user expenses?
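The Braess effect can be reproduced with the standard textbook four-node example: 4,000 drivers, two symmetric routes, and a "free" shortcut that makes everyone worse off. The link costs below are the classic illustration, not data from any real road network.

```python
# Textbook Braess example: 4000 drivers travel from s to t.
# Variable links cost (flow / 100) minutes; fixed links cost 45 minutes.

DRIVERS = 4000

def time_without_shortcut():
    # Two symmetric routes (s->a->t and s->b->t); at equilibrium the
    # drivers split evenly across them.
    flow = DRIVERS / 2
    return flow / 100 + 45          # variable link + fixed link

def time_with_shortcut():
    # A zero-cost a->b shortcut makes s->a->b->t dominate for every
    # driver (40 + 0 + 40 < 45 + anything), so all 4000 pile onto
    # both variable links.
    return DRIVERS / 100 + 0 + DRIVERS / 100

print(time_without_shortcut())  # 65.0 minutes
print(time_with_shortcut())     # 80.0 minutes -- the new road hurts
```

Every driver acts in self-interest, yet the extra link raises everyone's commute from 65 to 80 minutes, exactly the situation where removing a link improves the network.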
Our method of measuring a network’s performance has been used to refine the route of a proposed metro line in Dublin, Ireland; to design new shipping routes in Indonesia; to identify which roads in Germany should be first on the maintenance list; and to determine the effects of road closures after major fires in regions of Greece.

As the U.S. works to enhance its economic competitiveness, the country will need to invest in many different types of networks to maximize their usefulness and value to Americans. Using measurement methods like ours can guide leaders to wise investments.
Protecting an enterprise IT environment is a tall task. Trusting an organization’s cybersecurity defense to human intervention alone likely won’t be enough to combat sophisticated attacks. Fortunately, the best cybersecurity tools leverage artificial intelligence (AI) to help companies of all sizes across every industry.

What is Artificial Intelligence in Cybersecurity Anyway?

Artificial intelligence and machine learning have become crucial technologies for information security. AI has the ability to detect threats, improve response times, identify patterns, and analyze millions of data points. In addition to threat detection, AI can automatically respond to threats and stop breaches before they become a problem. According to a recent report from TechRepublic, the average mid-sized organization gets more than 200,000 cyber alerts per day. Even large cybersecurity teams can’t possibly respond to such a high volume of potential threats. But AI can process these threats with ease. It alleviates the burdens of IT security teams for the vast majority of threat prevention steps and assists with detection and response.

How Artificial Intelligence In Cybersecurity Works

Every AI cybersecurity solution is a bit different. But at their core, all AI tools get smarter over time. Artificial intelligence leverages deep learning and machine learning to assess the behavior of a network and IT environment over time. It recognizes patterns of good behavior, allowing the system to detect anomalies that could be perceived as threats. AI can even detect unknown threats that have traits similar to known malware, spam, DDoS attacks, ransomware, phishing, and more. As cybersecurity tools become more advanced, hackers and attackers use more sophisticated attack methods to breach networks. Artificial intelligence in cybersecurity has become a must-have feature to prevent modern attacks. One of the biggest benefits of AI in cybersecurity is its ability to analyze massive amounts of data.
It’s nearly impossible for an IT security team to manually review all traffic. But AI can detect malicious behavior masked as normal IT traffic. While AI does most of the work on its own, it can also assist with human-related tasks. The research and analysis conducted by AI can be helpful for IT teams to improve vulnerability management strategies. AI can help detect weak points in a network or system, which makes it easier for organizations to identify areas that need improvement.

Common Use Cases For AI in Cybersecurity

There are lots of different ways that organizations are harnessing the power of AI for IT security. But these are some of the most common:

- Threat Detection — AI can detect malicious activity faster than traditional software and humans. Artificial intelligence uses pattern recognition, data analysis, and other principles to stop malware and attacks before they infiltrate a system. In addition to getting smarter by assessing your own network traffic, AI learns from outside cyber threats as well.
- Bot Defense — Bots make up more than half of Internet traffic, and many of these bots are dangerous. AI uses advanced algorithms to detect the difference between good bots and bad bots. So AI can allow search engine crawlers but deny account takeover bots.
- Risk Assessment — AI can help security teams analyze an inventory of all assets, users, and devices on the network with different access levels. Based on this analysis, AI can generate reports showing which parts of your network are most likely to be compromised. Then your team can allocate resources accordingly or adjust access levels to high-value assets.
- Endpoint Security — Basic antivirus software and VPNs can only stop some malicious threats. This technology typically uses signatures to protect endpoints based on known threats and signature matching. But if new signatures haven’t been updated by the software vendor, malware and viruses can sneak through.
AI prevents this by flagging anything that’s an anomaly to protect against new threats.

Problems With AI in Cybersecurity

Like anything else, artificial intelligence in cybersecurity isn’t perfect. There are some drawbacks that you need to be aware of as you’re using AI to secure your organization. The financial investment required to build and maintain an enterprise AI security system from scratch is significant. Most companies are better off using a pre-built AI solution from a cybersecurity software vendor.

False positives at the beginning can cause problems for the daily operations of an organization. Some AI systems might flag normal behavior as something suspicious, which could make it difficult for employees to access files or perform tasks. False positives typically lessen over time as the AI gets smarter and understands your network better.

AI has also made it possible to collect and process an unfathomable amount of data. If third-party organizations also have access to this data, you need to make sure all of the data transfer and storage is compliant with mandates like GDPR, CCPA, PCI DSS, and more.

It’s also worth noting that hackers and malicious users can also leverage the power of AI to launch large-scale attacks. In the wrong hands, AI can help users with malicious intent exploit vulnerabilities in network security systems.

Example #1: Tessian

Tessian is a UK-based security vendor that specializes in enterprise email security. This use-case-specific solution uses AI in many different ways for its product offerings. The software is used for phishing protection, account takeover prevention, ransomware prevention, compliance, data loss prevention, and more. Tessian uses behavioral intelligence to analyze over a year’s worth of enterprise historical data. This includes corporate emails, data on the company network, and data within the Tessian Global Threat Network. It uses this data to determine traits associated with inbound email attacks.
The AI can help identify social engineering, account takeovers, whaling attacks, spear phishing, impersonation attacks, and more. All of this is accomplished through analyzing user behavior, email content, and other analytics. As a result, Tessian can stop email threats in real-time.

Example #2: Sophos Endpoint Protection

Sophos is a well-recognized name in the world of IT security. They offer a wide range of solutions for different use cases, industries, and business types. The endpoint security tools from Sophos use AI and deep learning to stop both known and unknown threats—without having to rely on signatures. When you enable the deep learning features of Sophos AI, you turn your endpoint protection approach from reactive to proactive. This software enhances malware detection and anti-ransomware to keep endpoints safe. The AI works to not only detect potential threats but also neutralize those threats before they become a bigger problem. Sophos AI is highly scalable, and it can automatically find the best combination of inputs required to detect even the most sophisticated attacks.

How to Get Started With AI in Cybersecurity

Using artificial intelligence to improve an organization’s cybersecurity standards will look a bit different for everyone. With that said, the steps below outline the general process you can follow to get started:

Step 1: Audit Your IT Infrastructure

You need to assess your current situation before you start shopping around for solutions, installing software, or implementing new policies. This will make it much easier for you to identify your needs and ultimately find an AI tool that supports them. Depending on the size of your organization, this step can be a lengthy process. Start with some simple, quick wins to build momentum, and continue from there. If you have existing systems in place for IT security, endpoint protection, or network monitoring, those tools can help provide you with some valuable insights.
You’ll eventually want a list of devices, software, applications, and servers on your network. From there, you can take things one step further and look at your users and access control policies.

Step 2: Review Your Data Platforms

As previously mentioned, data is a huge part of artificial intelligence. So you need to understand where your existing data points are coming from and how that data will be integrated with an AI cybersecurity solution. For midsize organizations and enterprises, there will likely be lots of data coming from multiple sources. In some cases, it might make sense to create a unified platform for all of the data. This would give your AI system a single source of truth to work with. With that in mind, many of the best cybersecurity tools out there are flexible enough to accommodate data from multiple sources. Unifying the data might make more sense for you internally, but it’s not necessarily a requirement to leverage AI. As you’re going through this process, you’ll want to identify your highest-value assets. Piggybacking off of the work done in step one should start to give you some insight into your vulnerabilities.

Step 3: Compare Security Vendors

Creating your own AI platform and algorithm is possible. But it’s not the best or fastest path to success for most businesses. You’re much better off relying on existing IT security solutions with built-in AI. The companies mentioned earlier in this guide as examples are viable options to consider. But you can conduct your own research to see which vendors truly accommodate your needs. You’ll ultimately be compiling a shortlist of your top contenders. From there, you’ll need to schedule calls and demos to get a better understanding of the products. In addition to your CTO or CSO, you may want to include other IT security staff in the decision-making process. C-level executives are obviously focusing on big-picture initiatives.
But they might not be interacting with tools the same way as your staff working on the day-to-day operations. As you’re comparing vendors, you can narrow your options based on factors such as:

- Implementation process
- Industry-specific solutions
- Use-case-specific needs
- Employee size
- Number of endpoints on your network
- Level of customization

In some cases, you’ll be able to get a free audit or something like that from a security vendor. This gives them a bit more information about your organization and network. Compatibility is really important here, as not every IT security solution will play nice with servers or existing systems. If you need to make small tweaks here and there, it shouldn’t be a dealbreaker. But you shouldn’t have to reconfigure your entire network. You’ll also want to consider the level of support you’re getting from the vendor outside of the AI and security tools. For example, some vendors offer managed detection and response—meaning they will handle threats. If you already have the in-house resources for response and remediation, you may not need that extra level of support from your vendor.

Step 4: Deployment and Training

Once you’ve settled on a tool, it’s time to deploy it on your IT infrastructure. This could be a cloud deployment, on-premises deployment, or potentially a hybrid deployment. Again, this is all based on your organization and its specific needs. Large-scale deployments typically take some time. You might not be able to apply the system to your entire IT infrastructure at once, and that’s ok. While AI can handle a ton of work on its own, it doesn’t run completely without human interaction. You still need to train your staff to make sure the AI is doing what you want it to do. Your staff must also understand how to make sense of the reports and alerts. For example, let’s say your AI tool detects a potential threat and alerts your security team. Now what?
Effective training could take several months based on the scale of the deployment. Many enterprise security vendors will walk you through this process, but it’s not a guarantee.

Step 5: Risk Assessment

Continuous improvement is a key part of using AI for cybersecurity. As the technology gets smarter over time, you need to harness the outputs and understand what everything means for your network. Where are your weak points? What types of threats have penetrated your system? What systems, applications, servers, users, or data sets are being targeted the most? Artificial intelligence in cybersecurity is not a “set it and forget it” initiative. In addition to reporting, you can also use AI tools to help you with other aspects of risk assessment. AI-based penetration testing has been a popular use case for risk assessment. It can be a human replacement, or at least a human-assisted alternative, to traditional pen tests.
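The core "learn a baseline, then flag deviations" idea described throughout this article can be sketched in a few lines. This is a generic statistical illustration — not how Tessian, Sophos, or any other product actually works — and the traffic numbers and 3-standard-deviation threshold are arbitrary choices.

```python
# Generic sketch of baseline-then-flag anomaly detection. A real product
# uses far richer models and features; here "traffic" is just a request
# rate per interval.
import statistics

def build_baseline(normal_samples):
    """Learn what 'normal' looks like from historical traffic."""
    mean = statistics.fmean(normal_samples)
    stdev = statistics.stdev(normal_samples)
    return mean, stdev

def is_anomalous(value, baseline, threshold=3.0):
    """Flag values more than `threshold` standard deviations from normal."""
    mean, stdev = baseline
    return abs(value - mean) > threshold * stdev

# Requests per minute observed during a normal period (illustrative).
history = [98, 102, 100, 97, 103, 101, 99, 100]
baseline = build_baseline(history)

print(is_anomalous(101, baseline))   # ordinary traffic -> False
print(is_anomalous(450, baseline))   # sudden burst, e.g. a DDoS -> True
```

As more "normal" history accumulates, the baseline sharpens — the same property the article describes as AI "getting smarter over time."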
Special category data means personal data that is more sensitive and therefore requires more protection than “regular” personal data. Special category data is often referred to as “sensitive data”. The different types of data that can be categorised as special are stated in Article 9 (1) GDPR, which says: “Processing of personal data revealing racial or ethnic origin, political opinions, religious or philosophical beliefs, or trade union membership, and the processing of genetic data, biometric data for the purpose of uniquely identifying a natural person, data concerning health or data concerning a natural person’s sex life or sexual orientation shall be prohibited.” In order to process special category data, the controller also needs to be aware of the specific conditions stated in Article 9 (2) GDPR.
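As a rough illustration of the Article 9(1) list, a data inventory could tag fields that fall into a special category. The category names below only paraphrase the article text, and the check itself is entirely hypothetical — real classification requires legal review.

```python
# Hypothetical tagging of inventory fields against the Article 9(1)
# special categories. This only shows the shape of such a check.
SPECIAL_CATEGORIES = {
    "racial_or_ethnic_origin", "political_opinions",
    "religious_or_philosophical_beliefs", "trade_union_membership",
    "genetic_data", "biometric_data", "health_data",
    "sex_life_or_orientation",
}

def requires_article_9_condition(field_category):
    """True if processing is prohibited under Art. 9(1) unless one of
    the Art. 9(2) conditions applies."""
    return field_category in SPECIAL_CATEGORIES

print(requires_article_9_condition("health_data"))    # True
print(requires_article_9_condition("email_address"))  # False -- "regular" data
```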
What is 3GPP? The Third Generation Partnership Project (3GPP) is a collaboration between groups of telecommunications associations to make a third generation (3G) mobile phone system specification within the scope of the International Mobile Telecommunications-2000 (IMT-2000) project of the International Telecommunication Union (ITU). 3GPP was formed in December 1998. The initial scope of 3GPP was to produce a globally applicable third generation mobile phone system specification based on the GSM family of standards. The specification was to be created through the harmonization of existing DOCOMO, ETSI, T1, TIA, and 3GPP specifications.
Raw code for 'unbreakable' quantum encryption has been generated at record speed over optical fiber at NIST. The work is a step toward using conventional high-speed networks such as broadband Internet and local-area networks to transmit ultra-secure video for applications such as surveillance.

The NIST quantum key distribution (QKD) system uses single photons, the smallest particles of light, in different orientations to produce a continuous binary code, or “key,” for encrypting information. The rules of quantum mechanics ensure that anyone intercepting the key is detected, thus providing highly secure key exchange. The laboratory system produced this “raw” key at a rate of more than 4 million bits per second over 1 km of optical fiber, twice the speed of NIST’s previous record, reported just last month. The system also worked successfully, although more slowly, over 4 km of fiber.

After the raw key is generated and processed, the secret key is used to encrypt and decrypt video signals transmitted over the Internet between two computers in the same laboratory. The high speed of the system enables use of the most secure cipher known for ensuring the privacy of a communications channel, in which one secret key bit, known only to the communicating parties, is used only once to encrypt one video bit. Compressed video has been encrypted, transmitted and decrypted at a rate of 30 frames per second, sufficient for smooth streaming images, in Web-quality resolution, 320 by 240 pixels per frame.

Applications for high-speed QKD might include distribution of sensitive remote video, such as satellite imagery, or commercially valuable material such as intellectual property, or confidential healthcare and financial data. In addition, high-volume secure communications are needed for military operations to service large numbers of users simultaneously and provide multimedia capabilities as well as database access.
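The cipher described above — one secret key bit, used exactly once, per message bit — is the one-time pad: XOR-ing data with the key encrypts, and XOR-ing the result with the same key decrypts. The key below is generated locally for illustration; in the NIST system the key stream would come from the QKD link.

```python
# One-time pad sketch: the same XOR operation encrypts and decrypts.
import secrets

def one_time_pad(data: bytes, key: bytes) -> bytes:
    """XOR each data byte with a key byte; calling it again decrypts."""
    assert len(key) >= len(data), "key material is consumed, never reused"
    return bytes(d ^ k for d, k in zip(data, key))

frame = b"video frame bits"
key = secrets.token_bytes(len(frame))   # stand-in for a QKD-derived key

ciphertext = one_time_pad(frame, key)
recovered = one_time_pad(ciphertext, key)
print(recovered == frame)               # True -- round-trips exactly
```

The pad is information-theoretically secure only if the key is truly random, as long as the message, and never reused — which is exactly why a high key-generation rate matters for encrypting video streams this way.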
Toyota and the US Department of Energy are working on a prototype 1 MW fuel-cell power generation system which could eventually be a drop-in replacement for a conventional generator. The DOE’s National Renewable Energy Laboratory (NREL) in Arvada, Colorado, is working with Toyota Motor North America to install and evaluate a 1 MW proton exchange membrane (PEM) fuel cell power generation system at NREL’s Flatirons Campus.

The announcement comes after Microsoft, working with Plug Power, recently successfully demonstrated a prototype 3 MW backup system based on fuel cells. Data centers are looking for ways to replace diesel generators that run on fossil fuel, and hydrogen fuel cells are a strong contender, although the energy source (H2) and the fuel cell modules are currently much more expensive than conventional diesel generators.

The three-year, $6.5 million collaboration is part-funded by DOE’s Hydrogen and Fuel Cell Technologies Office as part of the DOE’s effort to establish a market for clean hydrogen across multiple sectors. The 1 MW fuel cell system being built at NREL will integrate multiple Toyota fuel cell modules into a larger system to provide “responsive stationary power” – i.e. backup power that can be delivered swiftly.

NREL has already funded a previous research project, involving HPE and Daimler, which investigated using automotive fuel cells to power a data center, and successfully integrated a 70 kW fuel cell with IT racks (PDF). According to the NREL report on that effort, Daimler and HPE planned to take this on to a 250 kW version and eventually to 3 MW, but the project ended in 2020: “Further testing and analysis were cut short when the project was decommissioned in July 2020 due to Covid and lack of secured funding to operate it further.”

The new Toyota system moves to a larger scale, and will be capable of direct current and alternating current output.
The Toyota fuel cells, developed originally for the light-duty fuel cell electric vehicle market, are being integrated by Telios for delivery to NREL. Toyota has developed an integrated control system to operate the fuel cell modules for maximum efficiency and system life. Much like Microsoft and Plug Power, the Toyota team plans to simplify and streamline the design so it can be produced in large numbers as a drop-in replacement for diesel generators. "Achieving carbon neutrality requires all of us to explore new applications of zero-emission technology, including how that technology will integrate with other systems, which the project with NREL will identify," said Christopher Yang, group vice president, Business Development, Fuel Cell Solutions, Toyota. "The application of our modules in deployments of this magnitude shows the scalability of Toyota's fuel cell technology, whether it is a single fuel cell module for one passenger vehicle or multiple systems combined to power heavy-duty equipment." NREL researchers will stress the system, pushing operational boundaries to identify performance limitations and degradation over time, which should produce valuable real-world data that fuel cell developers can use in future applications. The project will also assess how the system performs when integrated with energy storage and renewable energy generation systems, such as solar photovoltaic and wind. "We will study the scaling of PEM fuel cell systems for stationary power generation to understand what the performance, durability, and system integration challenges are," said Daniel Leighton, an NREL research engineer and principal investigator on the project. "This fuel cell generator system also creates a new megawatt-scale fuel cell research capability at NREL." The fuel cell generator is part of the Advanced Research on Integrated Energy Systems (ARIES) megawatt-scale hydrogen project at NREL's Flatirons Campus.
The project also includes a 1.25MW PEM electrolyzer, 600kg hydrogen storage system, and 1MW fuel cell generator, giving a platform to show renewable hydrogen production, energy storage, power production, and grid integration at the megawatt scale. The fuel cell generator system will be installed this summer, and the full system will be commissioned later in 2022.
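The published specifications allow a rough sanity check on how long the stored hydrogen could sustain the generator. The sketch below is a back-of-the-envelope estimate, not a project figure: the energy density is hydrogen's standard lower heating value, and the 50 percent electrical efficiency is an assumption, not a number published by Toyota or NREL.

```python
# Back-of-the-envelope runtime estimate for the NREL system described above.
# FUEL_CELL_EFFICIENCY is an assumed round number for a modern PEM stack,
# not a figure published by the project.
H2_ENERGY_KWH_PER_KG = 33.3    # lower heating value of hydrogen
STORAGE_KG = 600               # hydrogen storage at the Flatirons Campus
FUEL_CELL_EFFICIENCY = 0.5     # assumed fuel-to-electricity efficiency
OUTPUT_KW = 1_000              # 1MW generator at full load

electric_energy_kwh = STORAGE_KG * H2_ENERGY_KWH_PER_KG * FUEL_CELL_EFFICIENCY
runtime_hours = electric_energy_kwh / OUTPUT_KW
print(f"{runtime_hours:.1f} hours of backup at full load")  # roughly 10 hours
```

Under these assumptions the 600kg store works out to roughly ten hours at full 1MW output, which illustrates why on-site hydrogen production (the 1.25MW electrolyzer) matters for longer outages.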
<urn:uuid:95dea52b-27d1-41d2-acfd-db3de41d651a>
CC-MAIN-2022-40
https://www.datacenterdynamics.com/en/news/toyota-and-nrel-to-build-1mw-fuel-cell-backup/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337432.78/warc/CC-MAIN-20221003200326-20221003230326-00139.warc.gz
en
0.925815
836
2.703125
3
We learned the lists of notable historical women in our schooling and hear about modern progressive women in the news each day. But in lobbying for the future, we often forget the inspiration and persistence of those who came before us. Elizabeth Blackwell attended Geneva College and became the first American woman to earn a medical degree, after being rejected by six other schools because of her gender. After graduating, she continued to empower women by founding a women's medical college. Grace Hopper was a pioneer of computer programming and a United States Navy rear admiral. She led the team that created one of the first compilers, a program that converted human-readable instructions into computer-interpreted binary code. Today, an annual Women in Computing conference is held to honor her contributions to technology. Both of these women were reformists for their time and inspired gender parity, taking action to #PressforProgress and achieve! Current movements empower women more than ever before, and a woman has the opportunity to pursue almost any career she chooses. But with the #PressforProgress comes the responsibility to take action. Most notable figures did not lead their cause with an agenda to revolutionize, and you certainly do not need to lead the charge to inspire change. It's not about a single female or one crusade. It's all of us, participating in ways that best match our talents, to make a collective difference. Find an organization or cause that sparks passion. Get involved and be part of advancement. It's great to join an organization or collective effort, but even a seat on an Executive Board holds no power without engagement to advance the cause; inspire with a new perspective. Actions speak louder than words, so let's #PressforProgress by taking action and getting involved!
<urn:uuid:795e1071-166d-434c-8e9e-e3faf5de9d97>
CC-MAIN-2022-40
https://edwps.com/pressforprogress/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337537.25/warc/CC-MAIN-20221005042446-20221005072446-00139.warc.gz
en
0.96246
369
2.515625
3
An error correction code (ECC) checks read or transmitted data for errors and corrects them as soon as they are found. ECC is similar to parity checking, except that it corrects errors immediately upon detection. ECC is common in data storage and network transmission hardware, especially as rising data rates bring a corresponding rise in errors. Error correction code is applied to data storage through the following steps: - When a data byte or word is stored in RAM or peripheral storage, a code-describing bit sequence is calculated and stored alongside it. Each fixed number of bits per word has an additional fixed number of bits to store this code. - When the byte or word is read, a code for the retrieved word is calculated according to the original algorithm and then compared to the extra stored bits. - If the codes match, the data is error-free and is forwarded for processing. - If the codes do not match, the changed bits are identified by the code's algorithm and immediately corrected. Data is not verified while it sits in storage but is tested for errors when it is requested; if required, the error correction phase follows detection. Frequent recurring errors at the same storage address indicate a permanent hardware fault. In the case of a storage device such as a hard disk drive or solid-state drive, spare capacity is assigned to take over for any address location that is marked as bad and unavailable. What does this mean for an SMB? Additional Cybersecurity Recommendations Additionally, the recommendations below will help you and your business stay secure against the various threats you may face on a day-to-day basis. All of the suggestions listed below can be gained by hiring CyberHoot’s vCISO Program development services. - Govern employees with policies and procedures.
You need a password policy, an acceptable use policy, an information handling policy, and a written information security program (WISP) at a minimum. - Train employees on how to spot and avoid phishing attacks. Adopt a Learning Management system like CyberHoot to teach employees the skills they need to be more confident, productive, and secure. - Test employees with Phishing attacks to practice. CyberHoot’s Phish testing allows businesses to test employees with believable phishing attacks and put those that fail into remedial phish training. - Deploy critical cybersecurity technology including two-factor authentication on all critical accounts. Enable email SPAM filtering, validate backups, deploy DNS protection, antivirus, and anti-malware on all your endpoints. - In the modern Work-from-Home era, make sure you’re managing personal devices connecting to your network by validating their security (patching, antivirus, DNS protections, etc) or prohibiting their use entirely. - If you haven’t had a risk assessment by a 3rd party in the last 2 years, you should have one now. Establishing a risk management framework in your organization is critical to addressing your most egregious risks with your finite time and money. - Buy Cyber-Insurance to protect you in a catastrophic failure situation. Cyber-Insurance is no different than Car, Fire, Flood, or Life insurance. It’s there when you need it most. All of these recommendations are built into CyberHoot the product or CyberHoot’s vCISO Services. With CyberHoot you can govern, train, assess, and test your employees. Visit CyberHoot.com and sign up for our services today. At the very least continue to learn by enrolling in our monthly Cybersecurity newsletters to stay on top of current cybersecurity updates. For a deeper dive into Error Correction Codes (ECCs), watch this short 3-minute video: CyberHoot does have some other resources available for your use. 
Below are links to all of our resources, feel free to check them out whenever you like: - Cybrary (Cyber Library) - Press Releases - Instructional Videos (HowTo) – very helpful for our SuperUsers! Note: If you’d like to subscribe to our newsletter, visit any link above (besides infographics) and enter your email address on the right-hand side of the page, and click ‘Send Me Newsletters’.
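The store-compute-compare-correct cycle described in the ECC steps earlier in this article can be made concrete with a classic Hamming(7,4) code, the textbook single-error-correcting scheme. This is a generic illustration, not part of any CyberHoot product, and the sample data bits are arbitrary.

```python
def hamming74_encode(data_bits):
    """Encode 4 data bits into a 7-bit Hamming codeword.

    Positions are 1-based: 1, 2, 4 hold parity; 3, 5, 6, 7 hold data.
    """
    d1, d2, d3, d4 = data_bits
    p1 = d1 ^ d2 ^ d4          # covers positions 3, 5, 7
    p2 = d1 ^ d3 ^ d4          # covers positions 3, 6, 7
    p4 = d2 ^ d3 ^ d4          # covers positions 5, 6, 7
    return [p1, p2, d1, p4, d2, d3, d4]

def hamming74_correct(code):
    """Recompute parity on read; a nonzero syndrome names the flipped bit."""
    c = list(code)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s4 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 * 1 + s2 * 2 + s4 * 4   # 1-based position of the bad bit
    if syndrome:
        c[syndrome - 1] ^= 1              # correct it in place
    return c, syndrome

word = hamming74_encode([1, 0, 1, 1])
corrupted = list(word)
corrupted[5] ^= 1                          # simulate a single-bit error
fixed, syndrome = hamming74_correct(corrupted)
print(syndrome, fixed == word)             # 6 True
```

Flipping any single bit of the seven yields a syndrome equal to that bit's 1-based position, which is exactly the "codes do not match" branch in the steps above: the error is located and corrected before the data is forwarded.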
<urn:uuid:cded9b5d-d879-4cac-806d-4d43023789dd>
CC-MAIN-2022-40
https://cyberhoot.com/cybrary/error-correction-code-ecc/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030331677.90/warc/CC-MAIN-20220924151538-20220924181538-00340.warc.gz
en
0.914512
917
2.859375
3
Where is a Stateful Firewall used? A stateful firewall is used to monitor the traffic that flows in and out of a network. It can be configured to allow or deny packets based on the information contained within them, such as destination address, port number, protocol type, etc. Additionally, the firewall can be configured to maintain limits on particular kinds of traffic. This is known as stateful inspection because the firewall tracks each connection's state, considering source and destination addresses in relation to each other, rather than judging packets in isolation by type or port number. When should I use a stateful firewall? A stateful firewall is necessary to provide protection on a network where there are multiple internal hosts communicating with different external services. Since it is easy to spoof a source address for packets from the internal network, it would be difficult to rely on a stateless firewall for such networks. When should I not use a Stateful Firewall? If all requests are initiated from internal hosts and destined towards external services, then you do not need stateful inspection. In this case, access rules based on UDP or TCP/IP port numbers will suffice as your only set of filtering criteria. This is commonly seen in data centers where servers can initiate requests outbound but cannot receive anything coming back inbound due to return traffic being blocked by ingress filters at the Internet edge. Because they don't inspect application-layer information inside each packet, stateless firewalls are typically faster than stateful firewalls. What layer is a stateful firewall? A stateful firewall operates at the network layer (Layer 3) and the transport layer (Layer 4), and many implementations also look into each packet to determine what type of application it belongs to. For example, in the case of FTP traffic, it will look inside each packet for specific sequences that denote when a user is about to send or receive data.
What does a stateful firewall do? A stateful firewall is a network security system that tracks the data flowing in and out of each host on an internal network, whether it originates from or is destined for an outside host. It can filter based on source and destination IP address, protocol type (such as TCP or UDP), port numbers, and other information inside packet headers. It also inspects packets as they enter or exit an interface and looks for certain information based on the application being used; for example, in an FTP session, it will look for specific sequences that denote when data is about to be transferred from one host to another. Security features of a Stateful Firewall A stateful firewall is designed to provide a high level of network security by inspecting incoming and outgoing data traffic. This allows it to filter based on port numbers, IP addresses, and application-layer information contained within each packet. Its ability to allow or deny traffic between hosts based on the application being used is what differentiates it from a stateless solution. Advantages of Stateful Firewalls - Stateful firewalls are aware of the state of a connection. - Stateful firewalls do not have to open up a large range of ports to allow communication. - Stateful firewalls prevent more kinds of DoS attacks than packet-filtering firewalls and have more robust logging. Disadvantages of Stateful Firewalls - Stateful firewalls are slower than packet filters because they must track and update connection state. - Stateful firewalls require more CPU and memory resources than packet filters for the same reason. - Stateful firewalls may be unable to determine the true state of a connection if packets have been altered by attackers. Is Windows Firewall a stateful firewall? Yes. Despite a common misconception, the built-in Windows Firewall has performed stateful filtering since Windows XP SP2.
Its main purpose is to provide basic protection for home and personal users against unwanted network traffic entering or leaving their PC; by default it does not perform deep application-layer inspection of each packet. What are some common protocols handled by a stateful firewall? Common protocols tracked by a stateful firewall are TCP, UDP, and ICMP. DNS, for example, uses the connectionless User Datagram Protocol (UDP) for most queries and falls back to the reliable Transmission Control Protocol (TCP) for large responses and zone transfers, while FTP runs entirely over TCP. Dynamic Host Configuration Protocol (DHCP) runs entirely over UDP: clients send requests to server port 67 and receive replies on client port 68, and a stateful firewall will match the server's UDP replies against the client's outbound requests. In summary, a stateful firewall provides a high level of network security by inspecting incoming and outgoing data traffic. It tracks the state of connections and allows or denies traffic based on that information, and it differs from a stateless firewall in that it does not have to open up a large range of ports to allow return traffic.
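The connection-tracking behavior summarized above can be sketched in a few lines. This toy model (the class, hosts, and addresses are invented for illustration) records outbound 5-tuples in a state table and admits only matching return traffic; real firewalls additionally track TCP state transitions, timeouts, and related flows such as FTP data channels.

```python
class StatefulFirewall:
    """Toy stateful filter: record outbound flows from inside hosts,
    then admit only return traffic that matches a recorded flow."""

    def __init__(self):
        self.table = set()   # state table of active 5-tuples

    def outbound(self, src, sport, dst, dport, proto):
        # Record the flow so its reply will be recognized later.
        self.table.add((src, sport, dst, dport, proto))
        return "ALLOW"

    def inbound(self, src, sport, dst, dport, proto):
        # A reply matches a recorded flow with the endpoints swapped.
        if (dst, dport, src, sport, proto) in self.table:
            return "ALLOW"   # part of an established connection
        return "DENY"        # unsolicited inbound traffic is dropped

fw = StatefulFirewall()
fw.outbound("10.0.0.5", 51000, "93.184.216.34", 443, "TCP")
print(fw.inbound("93.184.216.34", 443, "10.0.0.5", 51000, "TCP"))  # ALLOW
print(fw.inbound("203.0.113.9", 443, "10.0.0.5", 51000, "TCP"))    # DENY
```

Note that no inbound port is ever opened in advance: the reply is allowed only because the state table already holds the matching outbound flow, which is precisely why stateful firewalls need not open large port ranges.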
<urn:uuid:60015298-1d6a-4b45-aff1-7b3fc1cae1fc>
CC-MAIN-2022-40
https://gigmocha.com/where-is-a-stateful-firewall-used/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030331677.90/warc/CC-MAIN-20220924151538-20220924181538-00340.warc.gz
en
0.92575
1,068
2.859375
3
The 911 Act, Emergency Services and Privacy Concerns The Wireless Communication and Public Safety Act of 1999, known as the 911 Act for short, is a federal law that was passed by the U.S. Congress in 1999. The 911 Act amended the Telecommunications Act of 1996 and was passed to “promote and enhance public safety through the use of 911 as the universal emergency assistance number, and for other purposes.” For areas and counties around the country that did not have basic 911 access at the time when the law was passed in 1999, the 911 Act still allowed for mobile calls made from these areas to be forwarded to the appropriate local emergency authority. To this end, the 911 Act also mandated that wireless phones in the U.S. begin providing location information for the purposes of enabling and coordinating communication around the country. While providing American citizens with an improved ability to access emergency services in times of crisis through a dedicated phone line was a welcome and much-needed development, the passing of the 911 Act also led to some concerns regarding personal privacy. While cell phone tracking and storing the location data of an applicable user has become a routine daily occurrence in 2021, such collection of personal information was not a standard practice when the 911 Act was passed in 1999. As such, many opponents of the law countered by pointing out the potential ways in which providing such consistent access to the personal information of millions of American citizens could lead to adverse consequences. These criticisms are still pertinent today, as the level of privacy invasion that can occur resulting from cell phone usage remains an ongoing debate. What was the purpose of the Wireless Communication and Public Safety Act of 1999? The main purpose of the Wireless Communication and Public Safety Act of 1999 was to take advantage of the development of cellphone technology as it pertained to access to emergency services. As stated by Hon. 
Bobby L. Rush, a representative in Congress from the state of Illinois “In the age where technology is evolving and wireless telephones are prevalent in our society, it is important that in emergency situations wireless customers have access to enhanced 911 or E911. Having access to E911 allows wireless phone users to dial 911 and have the call routed to an attendant who has information on the caller’s telephone number and location”. “Unfortunately, as we sit here today most wireless telecommunications services do not have E911 capabilities. On the other hand, emergency attendants that do have access to 911, usually lack the capability of determining a user’s location. Therefore, in an emergency situation or a life-threatening situation a wireless user who dials 911 may not receive proper medical attention because an operator cannot determine his exact location. The Wireless Communications and Public Safety Enhancement Act addresses this problem by enacting 911 as a universal emergency number. This Act will save lives by reducing the response time for emergency assistance”. As stated by Rush, the goal of the 911 Act was to help improve American citizens’ ability to access emergency services via cellphone. While the number 911 had first been used to contact emergency services in Haleyville, Alabama in 1968, there had been no universal number that could be contacted for the purposes of utilizing emergency services. While this had not been a problem in the decades prior to the 1990s, as cell phone technology was still in its stages of infancy and had yet to become mainstream, the increasing use of cell phones in the late 1990s opened up new possibilities. For the first time in American history, citizens of the country could make phone calls without having to use a landline or payphone. Subsequently, The Wireless Communication and Public Safety Act of 1999 created a link between cell phone usage and the contacting of emergency calls and services. 
Why did the passing of the 911 Act lead to privacy concerns? While the 911 Act was based on the intention of providing American citizens with a more effective way of contacting emergency services, this access to services via cellphone came on the basis of American consumers sharing their location data. Mobile location data is defined as “geolocation and proximity information from mobile phones and other devices”. When an individual calls 911 from their cell phone, the operator on the other end of the line is able to render assistance by accessing the mobile location data of the caller. However, this ability to essentially track individuals through their cell phones presents both privacy and ethical concerns. Perhaps the greatest example of the potential privacy issues that can arise from well-intentioned legislation and subsequent technological use is the case of former computer intelligence consultant Edward Snowden. While working for the National Security Agency or NSA, Snowden amassed a large cache of classified documents that contained proof of the agency’s effective spying on American citizens under the guise of helping shield said citizens from threats of terror. What’s more, Snowden also revealed that many popular tech companies, including cell phone service providers, were also complicit in this unconstitutional invasion of privacy. However, when the NSA was confronted with these claims, they countered by reaffirming their intentions of supporting counter-terrorism efforts. According to the NSA, American citizens who had nothing to hide would have nothing to worry about. This stance would appear to contradict the U.S.
Constitution’s Fourth Amendment right of people “to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures, shall not be violated, and no Warrants shall issue, but upon probable cause, supported by Oath or affirmation, and particularly describing the place to be searched, and the persons or things to be seized”. To this point, one of the main ways in which the NSA had gone about collecting data from American citizens was by accessing their mobile location data. The Wireless Communication and Public Safety Act of 1999 marked a new era in American history in more ways than one. While the law provided citizens with newfound access to emergency services through cell phone usage, the means by which this cell phone data was accessed opened the door for unintended consequences. Despite all of this, the ability to dial 911 via cell phone has undoubtedly been beneficial to the American populace, particularly in more rural and remote areas of the country where accessing a payphone has been historically difficult. Nevertheless, the ways in which mobile location data is collected and disclosed will always be an area of concern as it relates to the U.S. Constitution and in turn, the privacy of American citizens.
<urn:uuid:1ee9023a-52fc-47cd-a053-01638fd74af2>
CC-MAIN-2022-40
https://caseguard.com/articles/the-911-act-emergency-services-and-privacy-concerns/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334992.20/warc/CC-MAIN-20220927064738-20220927094738-00340.warc.gz
en
0.965754
1,286
3.28125
3
Because you asked. Adaptive IP. IP, or more formally referred to as Internet Protocol, is the common language that enables billions of interconnected humans and machines to “talk” to each other on a daily basis for business and consumer applications and use cases. IP is the “language” and foundation of the largest human construction project ever created – the internet – and it works because it’s based on open industry standards. The internet has evolved over time and will continue to do so well into the future, as more humans and machines come online with new and evolving applications and use cases, such as 5G, Fiber Deep’s Converged Interconnect Network (CIN) architecture, and IP Business Services. This means that the way IP networks are designed, deployed, and managed also needs to evolve to maintain pace. IP is constantly evolving over time Over the decades since its introduction in the 1970s, by the legendary Vint Cerf and Bob Kahn, IP has continually evolved to maintain pace with ever-changing application and end-user demands. This evolution has also led to new RFCs and protocols being standardized, adopted, and deployed within routers (at last count there were over 8,000 RFCs and protocols). It has more importantly led to many of these protocols associated with IP no longer being required, updated, or maintained. This is analogous to human languages where words, phrases, and even whole languages, such as Latin, are no longer commonly used over time. What do we do with these obsolete protocols? We can eliminate them from modern IP networks to reduce storage, compute, complexity, and operating costs. We call such IP networks “lean” and it allows operators to move away from traditional box-centric IP network designs running ever larger and more complex monolithic software stacks, as many traditional IP vendors have and continue to implement today. Operators are asking for something different. 
They are asking for Adaptive IP™, a leaner and simpler way to deliver IP-based network connectivity. Expanding IP closer to where content lives This traditional approach to evolving IP no longer makes sense. Investing in legacy hardware-centric infrastructure, entangled with a monolithic operational system based on proprietary protocols, will not provide the level of flexibility, scale, and intelligence required from the IP layer. New application spaces, such as 5G, Fiber Deep’s Converged Interconnect Network (CIN) architecture, and IP Business Services require increasing amounts of IP capabilities provided closer to the network edge, where content lives. Given that the number of network elements grows exponentially as one gets closer to the network edge, the delivery of standards-based IP connectivity must be highly scalable and optimized, while at the same time making sure cost and complexity will not increase linearly alongside IP bandwidth growth. Leaner access to metro network elements, coupled with analytics-driven automation derived from real-time streaming telemetry, is a much better way to support expanded IP connectivity, optimize ongoing operating costs, and accelerate service velocity for a faster time-to-revenue cycle. It’s time to simplify by downsizing Continuing to support obsolete protocols unnecessarily consumes memory, storage, and compute resources, which translates into inefficient consumption of both power and space while complicating provisioning, maintenance, and troubleshooting. Several protocols (e.g., RIP, PPP, HDLC, SLIP) are now obsolete and no longer required in most cases. This means that they are no longer updated and thus represent potential security liabilities. Downsizing obsolete and/or unrequired protocols results in networks that are simpler to deploy, manage, maintain, and troubleshoot.
This is one of the primary drivers for centralized Path Computation Engine (PCE)-based Segment Routing: it reduces the need for node-level signaling protocols by making routing decisions centrally, with a view of the entire end-to-end network, enabling optimized traffic engineering, troubleshooting, and service velocity. This allows for leaner protocol support on hardware-based routing platforms as capabilities are shifted from box-centric platforms to software-based centralized controllers. Adaptive IP builds on these simpler, leaner benefits of Segment Routing. Many network operators – our customers – have recognized the need for a new and different approach to evolving and managing their IP networks. As they struggle to maintain pace with surging growth, coupled with the associated complexity of delivering IP network connectivity from access to core, and everything in between, they told us they’re looking for a leaner IP solution as a logical way to streamline and optimize their ever-expanding network assets. They still want open and standards-based IP connectivity, albeit in a different and simpler way that’s not only leaner, but also automated and open – enabling agile, best-in-breed IP networks for a differentiated competitive advantage. Last year, we introduced Ciena’s Adaptive IP approach, based on our Adaptive Network™ vision, specifically to deliver IP differently. The foundation of the approach is lean IP-capable programmable infrastructure supported by multiple Ciena Routing and Switching platforms, but we didn’t just stop there. While 5G, IP business services, Fiber Deep, and other bandwidth-hungry applications are driving the need for more IP at the network edge, the need for more capacity delivered with the lowest power and smallest footprint has also become key. This is particularly true for power/space-constrained DCI applications, as well as outside plant environments for cable access or 4G/5G applications.
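The centralized path computation described above can be illustrated with a toy PCE: the controller holds the whole topology and hands back an explicit hop list (standing in for a segment list), so individual nodes need no distributed signaling. The topology, node names, and link costs below are invented for illustration and are not taken from any Ciena product.

```python
import heapq

def compute_path(topology, src, dst):
    """Dijkstra shortest path over a link-cost map; the returned hop
    list plays the role of a segment list imposed on traffic at src.
    Assumes dst is reachable from src."""
    dist, prev = {src: 0}, {}
    heap = [(0, src)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == dst:
            break
        if d > dist.get(node, float("inf")):
            continue                      # stale heap entry
        for nbr, cost in topology.get(node, {}).items():
            nd = d + cost
            if nd < dist.get(nbr, float("inf")):
                dist[nbr], prev[nbr] = nd, node
                heapq.heappush(heap, (nd, nbr))
    path = [dst]
    while path[-1] != src:
        path.append(prev[path[-1]])       # walk back to the source
    return path[::-1], dist[dst]

# Hypothetical access-to-metro topology with IGP link costs.
topology = {
    "access1": {"agg1": 10, "agg2": 40},
    "agg1": {"core": 10},
    "agg2": {"core": 5},
    "core": {"peering": 10},
}
segments, cost = compute_path(topology, "access1", "peering")
print(segments, cost)   # ['access1', 'agg1', 'core', 'peering'] 30
```

Because the controller sees every link, it can steer around the higher-cost path through agg2 without any box-level protocol exchange, which is the essence of the PCE argument made above.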
It is not surprising, then, that we are starting to see demand in the access network, and for some applications in the metro, for the integration of coherent optics within routing and switching platforms. As part of our Adaptive IP approach, our routing and switching platforms can leverage Ciena’s WaveLogic 5 Nano pluggable optics to deliver industry-leading coherent technology in a footprint- and power-optimized form factor. Intelligent analytics-driven automation A critical success factor in redefining how IP networks are built is simplification. Operational complexity in IP networks is a critical complaint from customers. To solve for this, we are leveraging analytics-driven automation provided by our Manage, Control and Plan (MCP) Advanced Apps suite. How does this make things easy? Because our software solutions are designed to abstract the complexity of dealing with multiple IP domains across multi-vendor environments. For example, our Adaptive IP Apps, part of Ciena’s MCP Advanced Apps suite, capture information about network topology, latency, and routing to create a unified virtual IP network map. This map provides a streamlined, real-time view into how routing behavior is affecting service delivery and acts as a PCE to determine which parts of the IP network need to be optimized. So rather than having to struggle through a labyrinth of information across multiple domains and vendors, we provide a simplified IP network view right at our customers’ fingertips. To make things even easier, this information can be passed through open APIs to Ciena's MCP to automatically configure service and traffic flows for optimal IP network performance. Together with our routing and switching platforms, these building blocks create a complete approach offering closed-loop automation that delivers optimal IP connectivity, from access to metro networks.
Figure 1: Adaptive IP closed-loop IP network automation, from access to metro IP networks must be “open” for business IP is the language of the internet and its common, standards-based implementation is why we can seamlessly engage in eSports, video streaming, instant messaging, virtually hailing taxis, and so much more on a daily basis. Ciena’s Adaptive IP approach was designed from inception to be open and standards-based, across all network layers. This allows for easy integration into existing IP networks, from access to metro, ensuring a smooth and elegant migration path. This also allows network operators to gracefully simplify and expand IP connectivity closer to where content is both created and consumed – the network edge – by leveraging open analytics-driven automation to increase service velocity while overcoming complexity. IP delivered differently Our customers asked for (and in some cases demanded) a simpler and more efficient way to provide IP connectivity than traditional methods. Some of the largest network providers in the world have already engaged with Ciena to help them deliver IP differently by leveraging modern automation software coupled with lean, open, and programmable IP infrastructure that together is Adaptive IP. If you’re a network operator challenged with providing IP connectivity associated with 4G to 5G migration, Business Services, Fiber Deep, cloud, or edge computing, we can help you reimagine your IP networks today! Are you ready for a change? Let’s talk.
<urn:uuid:a97f7a45-d94c-4ba4-b03e-f59ac0416f45>
CC-MAIN-2022-40
https://www.ciena.com/insights/articles/because-you-asked-adaptive-ip.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335257.60/warc/CC-MAIN-20220928145118-20220928175118-00340.warc.gz
en
0.930681
1,825
2.5625
3
With so much of our lives spent online, digital security has never been more important. A single mistake or oversight can lead to our personal information being released to the public for all the world to see. The simple truth is, as long as you’re using the internet, you’re vulnerable. Unless, of course, you’re using highly secured network access, or you’re accessing websites and data online through secured encryption methods. In this article, we’ll discuss two tried and tested ways of increasing your online security: SOCKS5 proxy and VPN. We’ll discuss the difference between them and help you figure out which one will work best for you. Encryption methods: HTTPS is great, except when it isn’t Some aspects of online encryption and data security are taken care of for you. That’s because most websites have a vested interest in preventing hackers from grabbing your data. Data breaches are a growing problem and an extremely expensive one at that. Yahoo can certainly attest to this, with the fallout from the breach of more than 3 billion accounts disclosed in 2016 costing it over $350 million. There’s been a rapid uptick in the number of websites using HTTPS, with around 95 percent of all sites listed by Google encrypting user traffic this way. Websites with HTTPS encryption typically utilize Secure Sockets Layer (SSL) encryption or, more commonly today, its successor, Transport Layer Security (TLS). This creates a secure connection between your computer and the website’s hosting servers. Once connected, the data transfers between your computer and the website’s servers are encrypted. The information is scrambled, as is typical of encryption methods, with the data unscrambled using secure keys on both ends. It’s pretty easy to identify when a website is using HTTPS as well. Just look at the website address: Even Google uses encryption methods for its search engine. But don’t be fooled. Although HTTPS encryption is far more secure than the encryption-lacking HTTP protocol, it’s far from an end-all security method.
Anyone snooping in on your online activity or the website's servers can still see what you're viewing and steal data or your identity. On a similar note, as wonderful as HTTPS encryption is, it does not hide your location. Someone can still identify who you are through your IP address and connect that to publicly available information. HTTPS is a good, secure method to help ensure your data isn't stolen during the transfer process, but it's not securing your identity online, and it's not keeping your data wholly private.

The tried and true virtual private network (VPN)

Virtual private networks (VPNs) are perhaps the most common method for securely connecting, downloading, and surfing online. All computers that use internet connections are part of a larger, shared network. If you connect to the internet through your internet service provider, the ISP assigns your computer a unique IP address and sends you on your merry way through their network, connecting you to the rest of the web. Unfortunately, it's then very simple for the ISP to monitor your activity, and any government organizations with access to the ISP's data can do so too.

VPNs, however, create a secure tunnel between you and a private server. You connect online through your ISP as usual, but you then connect securely and directly to a specific server. NordVPN, one of the largest VPN services available, provides a helpful graphic to visualize this. The secure tunnel created by a VPN means a direct, encrypted connection is created between you and the websites you're trying to access. Hackers, governments, and even your own ISP can no longer see what you're doing.

The benefits are pretty clear, but a VPN is also limited in many ways. Because your data has to travel further and is encrypted and decrypted several times over in the process, your original internet speed gets significantly reduced.
On a similar note, if you're connecting to a VPN server that's far away from you, you'll experience a reduction in speed. Take, for example, what happens when you're in the U.S. and connect to a VPN server in Australia versus when you're not connected to a VPN server at all: the ping (the time it takes for a packet of data to travel back and forth between two locations) rises dramatically. The increased ping should not concern users too much, in so far as a high ping does not always equate to slower download speeds. However, higher ping can also be an indication of a rather packed server, something that is often the case with more popular VPN services and free VPN services in particular. Usually, VPN services get around this by increasing the number of servers they offer.

As for download speeds, here's a simple comparison of download times (in seconds) through a paid VPN service (NordVPN), a free VPN (TunnelBear), and no VPN. Note that we used British VPN servers to keep things as fair as possible. As evidenced by the results, downloading without a VPN gives the best speeds, while VPNs significantly impact download speeds. As is common, the paid VPN service was faster than the free service. For more information on VPN speeds, check out our article on the fastest VPN services and on why measuring VPN speed is not straightforward.

VPNs are also limited by the security measures implemented by the hosting VPN company. Most paid VPN services ramp up their security efforts, offering all forms of high-level encryption as well as maintaining strict log-free policies in most cases.

The SOCKS5 proxy server question

So, is there a way to route your traffic through an intermediary server without the added data slowdown? Yes, proxy servers do just that. Indeed, a proxy server acts just like a VPN, but without the added encryption. A proxy server assigns you a new IP address as you connect to the server and then routes you to wherever you're trying to go.
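The ping penalty of a distant server has a hard physical floor: no packet can beat the speed of light through fiber, roughly 200,000 km/s. A quick back-of-the-envelope sketch, using an assumed 12,000 km great-circle distance for a U.S.-to-Australia hop:

```python
def min_rtt_ms(distance_km: float, fiber_speed_km_s: float = 200_000) -> float:
    """Lower bound on round-trip time, ignoring routing and queuing delays."""
    return 2 * distance_km / fiber_speed_km_s * 1000

# Assumed rough distance, e.g. Los Angeles to Sydney.
print(min_rtt_ms(12_000))  # 120.0 ms minimum, before any real-world overhead
```

Real-world pings run well above this floor because packets hop through many routers along the way, which is why even a lightly loaded Australian server will never answer a U.S. client in double-digit milliseconds.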
There is some security in that. Anyone looking in will perceive your location differently due to the new IP address. However, the lack of encryption means that the data is basically unsecured. One of the most secure kinds of proxy, SOCKS5, works to alleviate some of the insecurity involved. VPN services NordVPN and Private Internet Access each provide SOCKS5 proxy servers to subscribers. NordVPN describes the benefit of SOCKS5 in this way: "[The] main advantage of SOCKS5 is the additional ability to provide authentication so only authorized users may access a server. This makes it more secure than other proxy servers."

Indeed, this is a problem we addressed earlier in our brief discussion of HTTPS. While you may be able to connect securely to a website and transfer data back and forth without anyone stealing or snooping on it, if someone has access to the server you're connecting to, all of that security is effectively meaningless. Because SOCKS5 proxy servers support client authentication (most commonly the username/password method defined in RFC 1929), they can only be accessed through verification. Not just anyone can connect, and someone trying to gain access improperly must first get past that authentication step.

The benefit of a SOCKS5 proxy service is speed. The lack of encryption with a proxy server, even a more secure SOCKS5 proxy, helps ensure that faster speed. Keep in mind that for the most part, the data going back and forth is not encrypted; only access to the SOCKS5 proxy server is controlled. Most services that provide SOCKS5 proxies give advance warning that the proxies are less secure. Take the upfront warning given by NordVPN, for example: "SOCKS5 is not as secure or as fast as a VPN. It's easy to confuse a SOCKS5 proxy with a VPN, but there are crucial differences.
Like most proxies, SOCKS5 won't encrypt your data, and will lower internet speed and stability." This is why SOCKS5 proxies are more popularly used for web activities that don't involve connecting to different websites where your data may be easily obtained. Beyond the weaker encryption, proxy servers also often log activity. This means that anyone who can get access to those logs can find out both who you are and what you were doing while routing data through the proxy. Additionally, proxy servers that have been hacked are more likely to push malware and viruses onto your machine.

SOCKS5 proxy vs VPN: which should you use?

As you might expect, there are clear benefits to using SOCKS5 proxies over VPNs and vice versa; it depends on what you are trying to achieve. Here is when to use a SOCKS5 proxy vs a VPN.

Best uses for SOCKS5 proxy servers:
- More bandwidth required
- Torrenting or using peer-to-peer services (speed purposes only)*
- Hiding your location
- Bypassing geographic or content blocking
- Improved security over regular proxies

Best uses for a VPN:
- Secure web browsing
- Torrenting or using peer-to-peer services (when more encryption and security is desired)
- Getting past firewalls
- Bypassing geographic or content blocking
- Better privacy protections than SOCKS5

SOCKS5 proxy vs VPN: which to use for torrenting and P2P

If your entire goal is to torrent or utilize a P2P service with the fastest speed, your best option may be to use a SOCKS5 proxy. However, the faster speed will come at the cost of significantly reduced privacy, at the risk of drawing attention to yourself from your ISP. If, on the other hand, your goal is to torrent with the maximum amount of encryption and privacy, go with a VPN. The speed will be noticeably slower, but with the right VPN service, the damage can be lessened a bit (see our article on the best VPNs for torrenting for more details).
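The authentication gate discussed above is part of the SOCKS5 wire protocol itself. As a sketch of what those bytes look like, per RFC 1928 (method negotiation) and RFC 1929 (username/password sub-negotiation); the credential values here are made up:

```python
def socks5_greeting(methods: list[int]) -> bytes:
    # RFC 1928 client greeting: VER (0x05) | NMETHODS | METHODS
    return bytes([0x05, len(methods), *methods])

def userpass_request(user: str, password: str) -> bytes:
    # RFC 1929 sub-negotiation: VER (0x01) | ULEN | UNAME | PLEN | PASSWD
    u, p = user.encode(), password.encode()
    return bytes([0x01, len(u)]) + u + bytes([len(p)]) + p

NO_AUTH, USERNAME_PASSWORD = 0x00, 0x02

# A client that insists on authenticated access offers only method 0x02.
print(socks5_greeting([USERNAME_PASSWORD]).hex())     # 050102
print(userpass_request("alice", "s3cret").hex()[:4])  # 0105...
```

Note that the username and password travel in cleartext here, which is exactly why a SOCKS5 proxy gates access to the server without protecting the traffic itself.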
Additionally, SOCKS5 proxies only connect select applications to the web, so it's important to remember that some of your computer's internet-related activities will not be hidden while connected to a SOCKS5 proxy. If your goal is to do a large amount of web surfing, get past firewalls, or simply protect your entire online presence regardless of the online activity, you're better off using a VPN. The speeds will be lower, but not unbearably so, if you utilize a higher-quality paid service like those from IPVanish and NordVPN. Both services, in fact, offer both SOCKS5 proxies and VPN servers.

*Torrenting while connected to a proxy is not wholly anonymous. Your ISP may still be able to see that you're uploading and downloading large amounts of data. The primary benefit of using a SOCKS5 proxy for P2P is the better upload and download speed, as well as the fact that monitoring organizations cannot connect the newly assigned IP address back to you. This results in more privacy, though not complete anonymity.

See also: The Best SOCKS5 VPNs

SOCKS5 proxy vs VPN: FAQs

Should I use both a proxy and a VPN?

You can technically use a proxy and a VPN at the same time, but there aren't really very many cases where you'd want to. After all, a VPN encrypts your traffic and hides your location, so changing your IP address again using a proxy doesn't really do very much for you unless you don't trust your VPN provider to keep your source IP address private (in which case, we'd suggest switching to a different VPN provider entirely, ideally one that keeps no logs). Even in this situation, there's a heavy price to pay. Proxies and VPNs both slow down your internet speeds, and with both active, your connection will be bottlenecked by the slower of the two services. In short, we'd recommend choosing one or the other based on your activity instead of trying to use both simultaneously.

Which costs more: a proxy or a VPN?
Proxies and VPNs are both available at a variety of price points, though generally, you can expect to pay a little more for a high-quality VPN provider, since these have higher operating costs (thanks to their network of servers all over the world and the technical teams required to keep their encryption up-to-date). It's also possible to find free providers for both services, although this often comes at the cost of lesser security.

Will a free VPN keep me safe online?

Many people are hesitant to cough up money for a VPN service when there are countless VPN services available online for free. However, there's a good reason why it's better to pay for a VPN service than to go with a free one. Free VPNs often cap data or bandwidth, log your activity, and/or inject advertisements into your web browser. Not only do free services have fewer resources to pour into bandwidth and servers, but many are also lacking in security features. Free VPNs are often ad-driven, displaying ads in their own apps, injecting ads into your browser, or both. As many of us may know by now, advertisements can be used by hackers (a tactic called "malvertising") to infect computers with viruses and malware of various sorts. Ad-driven VPNs are hardly secure.

While price is important, we strongly suggest looking at other factors first. For instance, how fast is your chosen service? Is it secure enough for your needs? Has it handed over personally identifiable data to authorities in the past? These are fundamental things to consider, especially if you're going to trust a company with your personal information. And if possible, we recommend opting for a trustworthy, reliable paid VPN.

Can ISPs see SOCKS5 proxies?

SOCKS5 proxies can be used to mask a user's IP address and maintain a degree of anonymity while online. However, it is important to note that ISPs can still see the user's traffic when a SOCKS5 proxy is in use. This is because a SOCKS5 proxy merely relays data between the user and the proxy server without encrypting it.
Russia destroyed one of its defunct Soviet-era satellites in an anti-satellite (ASAT) weapon test on November 15, resulting in thousands of pieces of debris in low Earth orbit (LEO). The missile that was launched from Russia destroyed Cosmos-1408, which had been orbiting Earth for almost 40 years. This action threatened the safety of astronauts aboard the International Space Station. The crew sheltered in Soyuz and Dragon capsules that could get them to safety in the face of a potential collision. The debris cloud is orbiting at a rate of 17,000 miles per hour, 10 times faster than a bullet, right above the orbit of the International Space Station (ISS). Astronauts aboard China's Tiangong space station are also at risk.

This is a threat to all nations and to a sustainable commercial space ecosystem. The debris generated from this event will remain in orbit for decades, threatening the safety of satellites in LEO. Russia's actions were dangerous, irresponsible, and antithetical to what the international community should be doing to ensure a sustainable infrastructure for the growing commercial space industries.

Space debris is a dangerous problem

It's estimated that one million pieces of space debris (inoperative objects in space and fragments of past collision events and breakups) are currently orbiting Earth. According to McKinsey, "we're at the point where about 70,000 satellites could enter orbit if proposed plans come to fruition—an explosion of interest based on potential new markets, innovative architectures, and more sophisticated technologies." Space debris has caused problems for other satellites before, as when a piece of debris too small to track punched a hole in Canadarm2, the Canadian Space Agency's robotic arm on the ISS, in May 2021. A Chinese satellite, Yunhai 1-02, was hit by pieces of a Russian spacecraft a couple of months earlier. The debris makes it more of a challenge to conduct safe space operations in the LEO environment.
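The 17,000-miles-per-hour figure follows directly from orbital mechanics: for a circular orbit, speed is the square root of Earth's gravitational parameter divided by the orbital radius. A quick check at roughly the ISS's altitude (about 420 km, an assumed round figure):

```python
import math

MU_EARTH = 398_600.4418     # km^3/s^2, Earth's standard gravitational parameter
EARTH_RADIUS_KM = 6_371     # mean Earth radius

def circular_orbit_speed_kms(altitude_km: float) -> float:
    # v = sqrt(mu / r) for a circular orbit of radius r = R_earth + altitude
    return math.sqrt(MU_EARTH / (EARTH_RADIUS_KM + altitude_km))

v = circular_orbit_speed_kms(420)          # ~ISS altitude
print(round(v, 2), "km/s")                 # ~7.66 km/s
print(round(v * 3600 / 1.609344), "mph")   # ~17,000 mph
```

The same formula shows why debris at slightly different altitudes still closes on the station at enormous relative speeds: even small velocity differences at these magnitudes carry destructive energy.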
Many of these objects are from prior missile tests, and include non-functional spacecraft, abandoned vehicles, and fragmented pieces of debris from broken parts or launch vehicles.

What happens in orbit, stays in orbit

Collisions with space debris in LEO remain a concern across the space industry. Moving at about 5 miles per second, even a small fragment carries a significant amount of energy. As the space junk problem continues to grow, debris will increasingly threaten the safety of satellites and astronauts in space. According to NASA, "Debris left in orbits below 370 miles (600 km) normally fall back to Earth within several years. At altitudes of 500 miles (800 km), the time for orbital decay is often measured in decades. Above 620 miles (1,000 km), orbital debris normally will continue circling Earth for a century or more."

Fragments generated by an explosive collision, such as an ASAT test, could reach higher altitudes than the original orbit of the parent satellite, due to the additional energy created by the explosion or because the resulting fragments' orbits become highly eccentric. Hence, the fragments generated from a catastrophic event in orbit can threaten satellites that orbit at higher altitudes than where the event took place, as well as those at lower altitudes. This Russian ASAT test is particularly concerning since it occurred at an altitude range where many commercial satellites and mega-constellations are operating.

Solving the space junk problem

Commercial industries and governments alike are planning to launch more than 100,000 satellites into space over the next decade, increasing the congestion in LEO. Operators are not able to keep pace with the growing congestion risk using their existing processes, which are manual and prone to delays and errors, and that puts everyone at risk.
At Kayhan Space, we are building technologies that support satellite operators in avoiding collisions, including a platform with autonomous astrodynamics capabilities that removes human error and delays in responses. Satellite operators need access to timely, optimal, and reliable solutions, along with high-performance spaceflight safety software that is built on accurate data and responses.
The auto-fill feature that makes it easy to enter usernames and passwords on various websites may be putting your information at risk. While auto-fill is a convenient way to keep track of the many combinations of letters, numbers, and special characters you need to access sites, the feature is also being used by advertisers and hackers. That's why many security experts suggest turning off the auto-complete feature in your web browser. Password manager programs embedded in browsers are a simple way to get access to a password-protected website. The password manager auto-fills your details, giving you one-click access to account information meant to be kept private.

How Hackers Get Access

If hackers get access to a compromised website, they can put an invisible form on the site and easily collect users' login information. If your browser automatically enters this information when it sees the appropriate boxes on a web form, it adds the info everywhere those boxes are found on a page, whether they're seen by the user or not. Because most web users reuse the same username and password across multiple sites, the theft of this information on just one website can expose your information on many others.

Not Just Hackers

It may come as a surprise to learn that hackers are not the only ones trying to use your login information. Some ad networks are using tracking scripts to grab email addresses stored in your password manager for auto-filling. That tech can be used to grab passwords too, whether stored in a browser or by an independent password management service. The ad networks use the same technique as hackers: an invisible form that captures the credentials provided by the password manager. Here's a helpful demo page that shows you how it works. Ad networks use this information not to hack your data, but to learn what sites you visit in order to better target ads to you.
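The invisible-form trick described above can be spotted by scanning a page's markup for input fields that ask for credentials yet are hidden from view. A minimal illustration using Python's standard-library HTML parser; the page snippet and field names are invented for the example:

```python
from html.parser import HTMLParser

class HiddenFieldFinder(HTMLParser):
    """Flags credential <input> fields a user would never see on screen."""
    def __init__(self):
        super().__init__()
        self.suspicious = []

    def handle_starttag(self, tag, attrs):
        if tag != "input":
            return
        a = dict(attrs)
        style = (a.get("style") or "").replace(" ", "")
        invisible = a.get("type") == "hidden" or "display:none" in style
        if invisible and a.get("name") in ("email", "username", "password"):
            self.suspicious.append(a.get("name"))

# Hypothetical page: a visible search box plus a hidden email field that
# an autofill-equipped browser would silently populate.
page = '<form><input name="q"><input name="email" style="display: none"></form>'
finder = HiddenFieldFinder()
finder.feed(page)
print(finder.suspicious)  # ['email']
```

Real trackers obscure fields in many more ways (off-screen positioning, zero opacity, nested iframes), so this heuristic is illustrative rather than comprehensive.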
And while they claim to only be grabbing email addresses, the potential for further abuse is there.

What Computer Users Can Do

Password managers by themselves are still useful tools, especially given the number of codewords we need for daily web browsing. It's the auto-fill mechanism that needs to be disabled. That's simple to do.

On Chrome
- Go to Settings
- Search for Passwords and click on the Passwords arrow
- Toggle the Auto Sign-In tab to the left (it should be grayed out, not blue)
- For more protection, you can stop Chrome from saving any passwords by toggling Offer to save passwords to the left

On Firefox
- Open Options
- Click on Privacy & Security in the left-hand navigation
- Click on History
- Select Firefox will: Use custom settings for history
- A new submenu will appear
- Unclick Remember search and form history
- To fully disable saving any passwords, go to the Logins & Passwords section (just above History) and unclick Ask to save logins and passwords for websites

On Safari (Desktop)
- Open the Preferences window
- Click on the Auto-fill tab
- Turn off all features related to usernames and passwords

On Safari (iOS)
- Go to Settings
- Scroll down to Passwords & Accounts and click on it
- Toggle the AutoFill Passwords tab to the left

Disabling the auto-fill features means spending a little more time finding and entering usernames and passwords manually. However, these steps protect you from prying eyes looking to gain more information about you and your accounts. Experience and strategy are what set us apart from other San Jose, Silicon Valley & South Bay IT companies. We deliver consistently optimal results following our carefully developed and mature set of IT practices and procedures.
The open-source movement in the technology world has been growing steadily. Organisations have been working hard to contribute code and documentation to open-source projects which can be used by other developers. Open-source software is licensed for use without cost, and often this license also has conditions that allow anyone to make changes to the code. One of the best things about open-source software is that it can be used for any purpose, as long as the license terms are met. With Azure's open-source support, you can now use open-source tools on the cloud with all of Azure's benefits: security, scalability, and manageability. Open-source software has many benefits; Microsoft itself stated that open source provides "autonomy and control for developers to flexibly choose their infrastructure and give them options to build, migrate, and deploy across multiple environments on premises, in the cloud, or at the edge".

What is open source?

Open source is a type of software where the source code is open and accessible to the public. The part of the app that most users see is the "user interface". Programmers who have access to a computer program's source code can improve the software by adding features or fixing parts; they can also reuse the code or reformulate the software to fit their needs. The opposite of open-source code is proprietary or closed-source code, which only the creators or their organisations may modify. There are many reasons people may prefer to use open-source software. One of the key reasons is that it offers convenience in terms of customisation. Open-source software is also often more cost-effective than its proprietary counterparts, and has a wider range of features in general.
Some believe in the ideology behind open-source software; they feel that the spirit of creative commons in open-source projects is admirable and should be encouraged. Open source fosters innovation and encourages collaboration among people worldwide. It allows many talented engineers to work on a project together at any given point in time, which helps in solving bugs and developing projects.

Open source and Azure

Microsoft Azure provides a leading open-source platform for enterprise developers and teams. The platform itself is built with open source and offers functionality that is unmatched on the market. Azure's integration with open source is a powerful asset that many companies have been using to their advantage. It's not just about using open-source code for a single person's project, but the ability to contribute back to a project and integrate it into Azure. This allows Azure users to access free and quality pieces of code from anywhere in the world when they need it for their projects. It is also particularly useful for those who want to build a web or phone app using open-source languages like Python or Java.

Why is Azure useful for open-source users?

Microsoft Azure is a highly secure cloud platform. While developers are still responsible for the security of their apps, there is less chance of their data or projects being compromised, since Microsoft Azure is well-protected against cyberattacks. Microsoft is doing its part to make machine learning more accessible, and offers free, open-sourced toolkits to ensure developers approach AI responsibly. Microsoft Azure provides a growing selection of open-source tools and infrastructure to help power the next era of application development. As more companies seek to innovate faster and more securely, they are turning to Azure as their platform of choice for building modern apps on any device, in any industry or geography, at any scale.
Azure is a great way for developers to release their applications without worrying about scalability, high availability, or security; all of these are managed by Azure. All you need to do is create your app, and then publish it using Azure's self-service portal.

Discover more with the Azure experts

Over the years, open source on Azure has helped many businesses with scalability, lower cost, and faster time to market. Give your business an upgrade by getting out of the data-centre business and into the cloud. As a Microsoft Gold Partner, INTELLIWORX provides all the expertise, knowledge, and management of Azure that you could ever need. If you're looking for a better way to deliver business and support new projects, we can provide flexible, secure, and scalable infrastructure through Microsoft Azure. Contact our Azure specialists today. Modernise your aging apps or infrastructure with Azure to ensure you're future-proofing your work.
The 10 Best Computer History Books
March 2, 2018

Fans of movies like "Steve Jobs" and "The Imitation Game" or TV series like "Silicon Valley" and "Mr. Robot" may be eager to know more of the stories behind the modern technological innovations that run our world. However, not everyone is a programming expert. These are the 10 best books on computer history that explain difficult concepts through powerful narratives. If you're ready to take the plunge and explore the world of coding, try this list of the best computer science textbooks.

What Are The 10 Best Books About Computer History?
- "Turing's Vision: The Birth of Computer Science" by Chris Bernhardt
- "Troublemakers: Silicon Valley's Coming of Age" by Leslie Berlin
- "Where Wizards Stay Up Late: The Origins Of The Internet" by Katie Hafner
- "The Innovators: How a Group of Hackers, Geniuses, and Geeks Created the Digital Revolution" by Walter Isaacson
- "The Story of the Computer: A Technical and Business History" by Stephen Marshall
- "When Computing Got Personal: A history of the desktop computer" by Matt Nicholson
- "Computer: A History Of The Information Machine" by William Aspray
- "Computers: The Life Story of a Technology" by Eric G. Swedin and David L. Ferro
- "Dealers of Lightning: Xerox PARC and the Dawn of the Computer Age" by Michael A. Hiltzik
- "Dark Territory: The Secret History of Cyber War" by Fred Kaplan

We live in a society where technology is a given, to the point where it's hard to remember the days before smartphones, Yelp, and Twitter. It's amazing how computers have gone from punch cards to desktops to social media. For those keen on learning more about how we got where we are today, we've compiled this list of the 10 best books about computer history.

Starting us off at #1 is "Turing's Vision: The Birth of Computer Science."
This is a great choice for those new to the subject as it explains Alan Turing's theory of computation in terms that are easy to understand. Author Chris Bernhardt examines the cultural impact of Turing's work and how his ideas led us to the connected world we have today. At #2 we have "Troublemakers: Silicon Valley's Coming of Age." Leslie Berlin takes us through the birth of Silicon Valley in the 1970s as industry pioneers brought technology into homes and created the industries of personal computing, video games, biotechnology, semiconductors, and modern venture capital. By examining the impact of seven lesser-known innovators, the book shows how these men and women shaped the modern computing age. Our #3 choice is "Where Wizards Stay Up Late: The Origins Of The Internet," which goes back to the 1960s and the ARPANET program, and takes readers behind the scenes of the pivotal decisions and happy accidents that gave us the modern Internet. Understanding the difference between the Internet and the Web and learning about the conflict surrounding the adoption of the "@" symbol may spark a new appreciation of just how lucky we are that these scientists were around. #4 on our list is "The Innovators: How a Group of Hackers, Geniuses, and Geeks Created the Digital Revolution." Walter Isaacson, author of popular biographies on figures like Leonardo da Vinci and Steve Jobs, traces the history of computer technology by examining the personalities and collaborations that drove new discoveries, and why some of those partnerships succeeded and some didn't. Our #5 choice is for those who really want to dive deep. The 754 pages of "The Story of the Computer: A Technical and Business History" offer a comprehensive examination of the advances in mathematics and engineering that are the building blocks of computing. It covers early efforts in math, modern computer graphics, and everything in between. 
Our #6 entry is "When Computing Got Personal: A history of the desktop computer." Though most readers won't remember the days before every home and office featured a personal computer, there was a time when the idea was quite revolutionary. This text profiles the entrepreneurs behind the PCs which paved the way for your laptop and iPad. At #7, we have a book that offers historical analysis from a business perspective. "Computer: A History Of The Information Machine," part of the Sloan Technology Series, examines how government and business intersected, going from the original tabulator at IBM to the strategy behind Microsoft Windows and the growth of the World Wide Web. Our #8 selection is "Computers: The Life Story of a Technology." Since we take the monitors on our desks for granted, it's sometimes tough to wrap our heads around the larger idea of exactly what makes something a computer. This volume provides a succinct overview of technology, moving from simple machines to the tablets and smartphones we now enjoy. Coming in at #9 is "Dealers of Lightning: Xerox PARC and the Dawn of the Computer Age." This lengthy narrative explores the efforts of the Xerox Corporation, and their group of innovators known as PARC. This department pioneered many of the technologies that would come to define the computer age, such as the laser printer and the graphical interface. However, the company was unable to take advantage of these ideas, and ultimately lost out on history. Rounding out the list at #10 is "Dark Territory: The Secret History of Cyber War." This volume explores a very specific aspect of computer history, the role of technology in war. Author Fred Kaplan provides an overview of how cyber warfare has developed over the past several decades, and what role computer security may play in conflicts of the future. It may surprise some readers to learn how quickly information warfare became a threat, and how vulnerable we may be now. 
One thing these books all have in common is that they all acknowledge just how rapidly the computing industry has grown. It's important to pay tribute to the innovators who provided the foundations for our modern technology. We hope you found an interesting read among these recommendations, since we could all stand to be a little more educated about the devices we use. Please note that jackbe.com is a participant in the Amazon Services LLC Associates Program, an affiliate advertising program designed to provide a means for jackbe.com to earn fees by linking to Amazon.com and affiliated sites.
by Ira Chernous

This isn't a story of Halloween costumes and candy. It's a story about a cyberattack in which the victim always pays for the trick. This type of story almost never has a happy ending.

What is Ransomware?

Ransomware is a type of malicious software that uses encryption to hold a targeted victim's information at ransom. Over the last few years, this type of cyberattack has become increasingly popular despite the complexity of its implementation. To execute a ransomware attack, the fraudster needs to be proficient in many areas, from social engineering through cryptography to programming. Ransomware is one of the most dangerous cyberattacks today, and it always entails a loss of money or data. You may have heard of the recent high-profile ransomware cyberattacks known as Petya or WannaCry. Both give a very accurate idea of the scale and damage involved.

Ransomware, Step by Step

As already mentioned, ransomware is an evasive cyberattack that requires advanced skills to execute. Such attacks are well prepared not just technically; they also use principles of social engineering. A ransomware attack is much like a delectable cake with an orange-flavored base layer, creamy chocolate frosting, and a cherry on top. Let's disassemble the cake to analyze its ingredients. To do this, we will use the data of a recently attempted ransomware attack that was prevented by Cyren Inbox Security. A malicious email was sent from an external webmail domain, gmail.com. The short, one-word subject line ("docs") indicates that the attachment includes a document. Once data is encrypted successfully, the ransomware drops a .txt ransom note, the cherry on top. Predictably, the victim would only have two choices: pay the ransom or put up with the data loss.

Detected and Protected by Cyren Inbox Security

Fortunately, Cyren Inbox Security was able to scan and automatically detect this suspicious evasive attack.
Our 24×7 Incident Response Service immediately investigated all incoming and received emails and confirmed them as malicious for all Cyren Inbox Security customers.
<urn:uuid:43d8afa6-e1ba-4363-8c30-acb62163f76d>
CC-MAIN-2022-40
https://www.cyren.com/blog/articles/ransomware-step-by-step
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335276.85/warc/CC-MAIN-20220928180732-20220928210732-00540.warc.gz
en
0.940613
654
2.828125
3
The popularity of Artificial Intelligence (AI) has finally coincided with the expansion of cloud computing. Using AI in the cloud can improve cloud performance and efficiency while driving digital transformation in enterprises. AI capabilities in the cloud computing environment are crucial to making business operations more efficient, strategic, and insight-driven while also providing additional flexibility, agility, and cost savings.

How AI is Affecting Cloud Computing

AI techniques are deployed on existing cloud computing platforms to deliver extra value. SaaS (Software-as-a-Service) companies incorporate AI technologies into larger software packages to give end-users more functionality.

Undoubtedly, AI and cloud computing have improved countless lives. Every day, people use digital assistants like Siri, Google Home, and Amazon's Alexa, issuing a simple spoken command to purchase an item, adjust a smart home's temperature, or play music on a connected speaker, among many other functions. Few users are aware of the technology and connectivity behind these functions; many do not know that such intuitive experiences are made possible by a blend of two technology domains: Artificial Intelligence and Cloud Computing.

AI capabilities help firms become more efficient, strategic, and insight-driven in the business cloud computing environment. Businesses gain flexibility, agility, and cost savings when they host data and apps in the cloud. AI and cloud computing now enable firms to manage data, find patterns and insights in data, create consumer experiences, and enhance workflows. More specifically, here are some of the ways AI is affecting cloud computing.

Using AI to Power a Self-Managing Cloud

AI is integrated into information technology infrastructure to ensure smooth workloads and automate repetitive processes.
As AI grows more complex, some experts believe private and public cloud use cases will increasingly rely on these technologies to monitor and manage their instances and even "self-heal" when a problem occurs. AI can be used to automate essential operations at first; eventually, as analytical capabilities develop, it can be utilized to design superior processes that are largely independent. System-assisted management of routine procedures further assists IT teams in realizing the benefits of cloud computing while freeing up their time for higher-value strategic initiatives.

Taking Advantage of Dynamic Cloud Services

Artificial Intelligence as a service is also changing the way businesses use their tools. Imagine a cloud-based retail module capable of helping brands sell their products with ease. Moreover, the module has a pricing tool that automatically alters the price of a product based on factors such as:
- Inventory levels
- Competitor sales
- Market trends

This AI-powered pricing module keeps a company's pricing optimal. It is not just about improved data use; it is about analyzing data and acting on it with minimal, if not zero, human participation.

Using Artificial Intelligence to Improve Data Management

Cloud AI tools also improve data management, including tasks such as recognizing data, ingesting it, classifying it, and managing it over time. Today's organizations undoubtedly generate and collect massive amounts of data, and current AI tools are being employed in cloud computing environments to assist with specific aspects of the data processing cycle. For example, even the smallest financial institution may be expected to process thousands of transactions each day. In this context, AI solutions can assist financial institutions in providing more accurate real-time data to clients. The same methods can help detect fraud or other risks.
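A toy version of such a pricing rule, using the three signals listed above, can make the idea concrete. The thresholds and weights below are invented purely for illustration; a real AI pricing module would learn them from data rather than hard-code them:

```python
def adjust_price(base_price, inventory_level, competitor_price, demand_trend):
    """Toy dynamic-pricing rule driven by three signals.

    inventory_level: fraction of warehouse capacity in stock (0.0-1.0)
    competitor_price: the competitor's current price for the item
    demand_trend: positive when demand is rising, negative when falling
    All weights are illustrative, not from any real system.
    """
    price = base_price
    if inventory_level > 0.8:      # overstocked: discount to move units
        price *= 0.95
    elif inventory_level < 0.2:    # scarce: charge a small premium
        price *= 1.05
    # Drift 10% of the way toward the competitor's price.
    price += 0.1 * (competitor_price - price)
    # Rising demand supports a slightly higher price.
    price *= 1 + 0.02 * demand_trend
    return round(price, 2)

print(adjust_price(100.0, inventory_level=0.9,
                   competitor_price=98.0, demand_trend=1.0))  # → 97.21
```

The point of the sketch is the shape of the system, not the numbers: each signal nudges the price, and the whole loop can run continuously with no human in it.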
Ultimately, it is possible to improve marketing, customer service, and supply chain data management as well.

The Benefits of AI in Cloud Computing

AI has changed the cloud computing landscape forever, providing the following benefits:

Thanks to artificial intelligence-driven cloud computing, businesses may become more efficient, strategic, and insight-driven. Artificial intelligence can automate time-consuming and repetitive tasks and perform data analysis without human intervention, increasing overall efficiency. IT teams can also utilize artificial intelligence to control and monitor critical workflows. While AI handles tedious tasks, IT teams can concentrate on strategic operations that drive genuine business value.

Cut Down Costs

Compared to on-premise data centers, cloud computing has the obvious advantage of reducing the costs associated with hardware administration and maintenance. Upfront costs for AI projects can be burdensome, but businesses can access these technologies in the cloud for a monthly subscription, which makes R&D costs more affordable. Additionally, AI systems can extract insights from data and evaluate it without direct human intervention.

Seamless Data Management

AI plays a critical role in the processing, management, and structuring of data. By utilizing more reliable real-time data, AI can deliver significant improvements across organizational departments. Ultimately, AI tools make acquiring, modifying, and managing data easier.

AI and cloud computing are redefining business. Together, they help companies make sense of huge amounts of data, expedite complex processes, and improve product and service delivery.
As the market becomes more competitive by the hour, start looking at how combining AI and cloud computing might help you:
- Yield excellent customer experiences
- Work more efficiently
- Get the most out of the data and insights you collect

With AI and cloud computing in your arsenal, there is no challenge that your business cannot overcome.
<urn:uuid:2b34475e-9fbc-4815-9443-f5422ec46e6b>
CC-MAIN-2022-40
https://www.datacenters.com/news/artificial-intelligence-in-cloud-computing
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335276.85/warc/CC-MAIN-20220928180732-20220928210732-00540.warc.gz
en
0.914576
1,074
2.78125
3
Blockchain, the underlying technology behind cryptocurrencies like Bitcoin, is quickly gaining popularity in the digital space. More and more companies are recognizing its revolutionary potential and are choosing to adopt this new technology for their daily operations, making blockchain less of a buzzword and more of a forward-thinking mantra.

In short, a blockchain is a continuously growing list of public records broken up into "blocks" based on specific windows of time. A community of users controls how this information is edited and updated, and all blocks are chained together chronologically. While more organizations are becoming aware of the applications for blockchain in the enterprise, there is less familiarity with the differences between public and private blockchains.

The Similarities of Public and Private Blockchains

Before we touch on the differences, it is important to understand the similarities between public and private blockchains. Both of them:
- Are decentralized peer-to-peer networks built on a community of users, meaning that no one entity (like a bank or broker) is in charge of authorizing transactions
- Rely on numerous users to authenticate edits to the distributed ledger, thereby creating a new master copy that is accessible to everyone at all times
- Are completely immutable, meaning the verified block can never be erased or modified once it is authenticated by users

So, how are they different?

Public blockchain is the model of Bitcoin, Ethereum, and Litecoin and is essentially considered to be the original distributed ledger structure. This type of blockchain is completely open and anyone can join and participate in the network. It can receive and send transactions from anybody in the world, and can also be audited by anyone who is in the system. Each node (a computer connected to the network) has as much transmission and power as any other, making public blockchains not only decentralized, but fully distributed, as well.
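The chronological hash-chaining described above is what makes both kinds of ledger tamper-evident: each block's hash covers the previous block's hash, so altering any earlier record invalidates everything after it. A minimal sketch of that idea (an illustration only; real blockchains add consensus, signatures, timestamps, and much more):

```python
import hashlib
import json

def block_hash(body):
    # Hash the block body, which includes the previous block's hash,
    # so the blocks form a chain of dependencies.
    return hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()

def add_block(chain, records):
    prev = chain[-1]["hash"] if chain else "0" * 64
    body = {"records": records, "prev_hash": prev}
    chain.append({**body, "hash": block_hash(body)})
    return chain

def is_valid(chain):
    # Re-derive every hash and check every link; tampering breaks either.
    for i, block in enumerate(chain):
        body = {"records": block["records"], "prev_hash": block["prev_hash"]}
        if block["hash"] != block_hash(body):
            return False
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

chain = []
add_block(chain, ["alice pays bob 5"])
add_block(chain, ["bob pays carol 2"])
print(is_valid(chain))                          # True
chain[0]["records"] = ["alice pays bob 500"]    # try to rewrite history
print(is_valid(chain))                          # False
```

This is the property shared by public and private variants; the differences below are about who is allowed to add blocks and run the validation.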
In order for a transaction to be considered valid, it must be authorized by each of its constituent nodes through the consensus process. Once this authorization takes place, the record is added to the chain. Public blockchains typically have incentives to encourage people to join the network as well as to authenticate transactions.

One of the biggest disadvantages of a public blockchain is its complete openness. This type of transparency implies little to no privacy for transactions and supports a weak concept of security. Another drawback is the substantial amount of computing power that is necessary for the maintenance of the ledger. With so many nodes and transactions as part of the network, this type of scale requires extensive effort to achieve consensus.

Private blockchains, on the other hand, are essentially forks of the originator but are deployed in what is called a permissioned manner. In order to gain access to a private blockchain network, one must be invited and then validated by either the network starter or by specific rules that were put into place by the network starter. Once the invitation is accepted, the new entity can contribute to the maintenance of the blockchain in the customary manner.

Due to the fact that the blockchain is on a closed network, it offers the benefits of the technology but not necessarily the distributed characteristics of the public blockchain. The extent to which the entity can view or validate transactions is up to the network starter to determine.

A typical way for enterprises to use private blockchains is intrabusiness, ensuring that only company members have access. This is a useful business solution if there is no reason anyone outside of the company should be part of the chain, as data can be restricted to certain individuals on a need-to-know basis. With fewer people as part of the chain, they are typically quicker and more efficient with an easier consensus process.
While this sort of structure may not be as radical a departure from older digital structures as the public blockchain is, the technology is still highly powerful, and the strong cryptography and auditability offer more security than traditional protocols.

Public vs Private Blockchain: What to Choose?

Whether organizations choose to turn to the more established public blockchains, or rest assured with higher security measures for private ones, the potential of blockchain technology for enterprises is still wildly untapped. As security measures for public blockchains become stronger, their value will further increase, making the use of private blockchains less essential. However, when it comes to having more control and the ability to restrict access to specific individuals, private blockchains can't be beat. In the end, the choice of whether to use a public or private blockchain for business lies with each organization that utilizes it.
<urn:uuid:ccacc23c-e89c-4188-8b6e-6e6047d15aa6>
CC-MAIN-2022-40
https://www.bmc.com/blogs/public-vs-private-blockchain/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335424.32/warc/CC-MAIN-20220930020521-20220930050521-00540.warc.gz
en
0.952426
914
3.25
3
As a long time CTO, I'm often asked about technology trends. Before I talk about the trends, it's important that we understand that the rate of technological change driving these trends is accelerating exponentially. Exponential growth of technology capability is hard for humans to grasp. 30 linear steps of advancement are just that, 30 steps forward, or across the room. Thirty exponential steps are the equivalent of more than 25 times around the world! This is why you hear so many voices talking about the impact of technology on our world in the next 10 years.

So, what are some of the key trends? In my view, the most critical digital technologies being explored today all have a common thread. They are focused on augmenting the human experience with digital capabilities. Let's consider the list.

- Voice: Voice technology is exploding. Google is said to have sold one new Google Home every second during the 2017 holiday season. Voice is a critical "augmentation" technology because our current interface to computers is slow and inconvenient. Entering words into a computer one character at a time is the equivalent of writing a book by carving into a stone tablet. Computers today can process huge amounts of information, and our interface is to slowly key in our requests? Voice is about increasing the human-to-machine bandwidth in a "frictionless" way. Consider voice as one step forward on our "input" with the machine. For a lengthy but wonderful discussion on bandwidth between computers and humans, see the waitbutwhy blog on Elon Musk's Neuralink adventure.
- Augmented / Virtual Reality: If we update the input, we need to update the output. This is where augmented and virtual reality come in. Augmented reality is available on your smartphone today for everything from projecting restaurant reviews on the restaurant in real time to translating a menu into your native tongue. This is our approach to upgrading the "output" to better connect us with the machine.
- Internet of Things (IoT): This is about taking Alexa and putting her in every smoke alarm in your house. IoT enables the augmentation theme by surrounding us with our cloud-based artificial friend. We achieve this ubiquitous access today by carrying our smartphones with us everywhere we go. Wearables are a step forward, making sure we always know our heart rates, and signaling us with a subtle tap on the wrist when it's time to go, but these are primitive compared to where IoT will take us. As computer input and output are embedded in our surroundings, we will no longer need to worry about shouting at Alexa in the corner or recharging our phone. The input and output devices will be embedded in the world around us: always on, always listening, sensing and ready to respond to our words and gestures. Yes, the vision of kiosks in the mall that recognize us and call out an offer specific to our desires, a la "Minority Report," is coming true.
- Artificial intelligence (AI): I feel a little funny calling something coined in the 1950s an emerging technology, but the level of sophistication and the rate of change make this the decade of AI. (See my prior blog for more on why!) AI is the cognitive piece of the puzzle that we are using to augment our world, and ultimately ourselves.

In summary, our emerging technologies are designed to merge humanity with the machine, augmenting our human intelligence with artificial intelligence. This brings me to the final emerging technology on the list, really more of an example of how this all comes together in a particular field.

- Self-driving cars: These are a great example of an application of AI (machine learning neural networks that analyze sensor inputs to "drive") and of IoT sensors that enable vehicle-to-vehicle communication and data collection all around the car, and they can even incorporate augmented reality, showing your path and the obstacles ahead via a heads-up display projected on your windshield.
I repeat, our exponentially accelerating technology trends are impacting our world in significant ways. Soon, these changes will be merged with us in even more significant ways. As technologists, we need to consider the human and societal impact. Remember, we’re already working in an augmented world. Ask anyone you know what movie won Best Picture in 1976 or who won the 1983 Super Bowl, or even what the square root of pi is. Chances are, their friend named Alexa, Siri or Google will serve up the answer to them, because artificial intelligence augmentation is already here.
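The "25 times around the world" comparison from the opening paragraph checks out with quick arithmetic, under the usual reading that an "exponential step" doubles the distance covered, starting from a one-meter stride:

```python
EARTH_CIRCUMFERENCE_M = 40_075_000  # meters, around the equator

linear = 30 * 1          # 30 one-meter steps: 30 meters, across the room
exponential = 2 ** 30    # a one-meter stride doubled 30 times, in meters

print(exponential)                            # 1073741824 meters
print(exponential / EARTH_CIRCUMFERENCE_M)    # just under 27 laps of the Earth
```

Thirty doublings turn a single meter into over a billion meters, which is why exponential change keeps surprising our linear intuition.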
<urn:uuid:a9fa8ec5-03af-4300-9d4f-0b1c8365375d>
CC-MAIN-2022-40
https://www.cio.com/article/228674/merging-with-the-machine.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337473.26/warc/CC-MAIN-20221004023206-20221004053206-00540.warc.gz
en
0.931988
929
2.640625
3
The skills shortage across the cybersecurity sector has been well documented, but with the computing sector generally, and cybersecurity specifically still struggling to attract female candidates, it’s vital that men in the cybersecurity sector do more to make it an attractive place for women to work. A few years ago, estimates suggested there was a shortage of around 3 million cybersecurity professionals across the world. Roughly three-quarters of those who apply for cybersecurity roles were deemed to be insufficiently qualified to do the job properly. Across the computing world, this kind of shortage has prompted a concerted effort to increase the number of women operating in the sector. It’s an area of particular weakness in cybersecurity, with PwC data from 2017 revealing that just 14% of the cybersecurity workforce in the United States was female, which compares to 48% across the workforce in general. What’s more, it’s a problem that’s even worse outside of the United States. Research published in Nature reveals that just 10% of cybersecurity professionals in the Asia-Pacific region are female, with just 7% in Europe, and a paltry 5% in the Middle East. As in other disciplines, this then filters through to an even smaller representation of women in senior management positions, with one study showing just 1% of female cybersecurity professionals occupying managerial roles. Many of the challenges we face in attracting more women into the cybersecurity profession have deep societal roots. For instance, recent research from Carnegie Mellon highlights how the language we use can undermine efforts to attract women to STEM-related fields. The study found that biases are formed in even the youngest children about the kind of careers men and women are 'suited' to, with women typically associated with caring and supportive roles. 
"What's not obvious is that a lot of information that is contained in language, including information about cultural stereotypes, [occurs not as] direct statements but in large-scale statistical relationships between words," the researchers say. "Even without encountering direct statements, it is possible to learn that there is stereotype embedded in the language of women being better at some things and men at others."

Research from Rutgers University highlights how this pervades adult life, with the study revealing how distorted things like stock image libraries are when visually portraying various professions. For instance, images of nurses or librarians predominantly featured women in those roles, whereas images of programmers or engineers predominantly featured men.

Why this matters

Does this matter? In short, yes. Technical skills make up a relatively small part of the cybersecurity professional's toolkit when it comes to the skills shortages facing the sector. Soft skills, such as the ability to work in teams, strong communication ability, and personality traits such as integrity and empathy, were found to be crucial to good cybersecurity. Women also tend to offer very different viewpoints and perspectives than men, which can be crucial in assessing and addressing cyber risks.

The challenge is part of a wider issue around female representation in science, technology, engineering, and mathematics (STEM) fields, with just 30% of scientists and engineers being women. This helps to forge a societal perception that cybersecurity is a predominantly male profession, despite there clearly being nothing that predisposes men to be either more interested or more capable in the role than women. This misconception can be further bolstered by an industry that is often only too happy to represent cybersecurity as solely requiring strong technical skills.
A good first step in overcoming this imbalance is to do a better job of awareness building among women. A recent survey found that across the IT field, 69% of women didn’t pursue opportunities in the sector because they were largely unaware of them. Another survey, by security firm Tessian, revealed that just 50% of organizations felt they themselves were doing enough to attract women to cybersecurity roles. A diverse approach Diversity matters because female leaders often not only explore areas that men might overlook, but go about their work in a different way. In many ways, this is a consequence of their professional backgrounds, with data showing that nearly half of women in cybersecurity roles have business or social science degrees, versus just 30% of men. What’s more, women working in cybersecurity appear to place greater emphasis on training and education among their teams. This research also suggests that female cybersecurity professionals are better at working with other organizations, which as cybersecurity increasingly involves a wide range of stakeholders, is likely to be crucial to maintaining the security of complex networks. A good example of an initiative that has helped to attract more women to cybersecurity is Shift, which is a joint project between Israel’s Defense Ministry, Startup Nation, and the Rashi Foundation. The project aims to identify girls in high school who have the desire, curiosity, and aptitude to learn IT skills, and then works with them to help develop those skills. The girls are provided with access to a range of training programs and hackathons, while also having support and guidance from a team of female mentors, some of whom herald from the crack technology units in the Israeli military. The program provides participants with a wide range of technical training, including a number of programming languages, network analysis, and even hacking skills. By 2018, it’s estimated that 2,000 girls had participated in the initiative. 
This is a good example, but there are many others in place around the world to try and attract a more diverse pool of talent into the field. The projects underline the multidisciplinary approach that is likely to be required, including ensuring that job adverts are appealing to female applicants, that attempts are made to target schools with high female enrollment, and that once in the workplace, female employees are given ample career opportunities to choose from.

Finding male allies

It's a situation that U.S. Naval War College professors David Smith and Brad Johnson believe will require strong allies among men already in the industry to overcome. In their latest book, Good Guys, they highlight how too often, men can turn a blind eye to problems that don't affect them, especially if they have a zero-sum mindset that prompts them to think that any gains for women will come at the expense of men. This prevents them from developing the kind of skills required to support all under-represented groups in the workplace, including along racial and ethnic lines.

"Whether you work for, alongside, or manage women, deliberately engaging with them in the workplace is the only real solution to overcoming the systemic sexism and inequality that keep all of us from maximizing potential and our organization from thriving," they write.

They recommend a number of actions men can take to become better allies to women in their workplace, including:
- Sharpen your situational awareness. Be vigilant in observing how female colleagues experience meetings and other gatherings, and stay alert for any disparities.
- Cure your gynophobia. Publicly push back on false narratives about the risks of engaging with women at work, while also deliberately and transparently initiating conversations and mentorships with female colleagues.
- Ask about women's experiences. With genuine curiosity and humility, learn about the gendered workplace experiences of the women you work with.
- Recognize that all women are not the same. Be attuned to the unique experiences of the women you work with. - Own and deploy your privilege. Recognize and fully own your privilege as a man, while leveraging it for the benefit of women and other marginalized groups. Getting more women into cybersecurity is not only good for the women themselves, but good for the organizations that recruit them, and indeed for society as a whole. If we are to win the cybersecurity battle with hackers, then it’s vital that we don’t leave huge amounts of talent on the sidelines. Projects such as Shift have shown the way. Now we need to make their efforts mainstream.
<urn:uuid:fc582bd4-d0d7-4a66-bac6-10b31c96372e>
CC-MAIN-2022-40
https://cybernews.com/security/helping-women-enter-the-cybersecurity-field/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335034.61/warc/CC-MAIN-20220927131111-20220927161111-00740.warc.gz
en
0.961681
1,622
2.71875
3
Following on from yesterday's report on how Microsoft and Amazon are giving cloud computing grants to the data science eco-warriors of the future, here is another Climate Data Initiative-affiliated project. Yesterday saw the launch of the latest version of the Los Angeles Solar & Efficiency Report (LASER), a mapping tool fuelled by data which can help communities understand climate and pollution risks in their area, and hopefully combat climate change.

In a blogpost for the Huffington Post, Jorge Madrid, Senior Partnerships Coordinator for EDF's CA Climate Team, stated:

By releasing these maps today, we humbly join an international movement of individuals, public entities, and private companies who are maximizing the potential of shared information to inform, educate, and usher in a new wave of innovation and opportunity.

The maps paint a startling picture for the future of LA. Using groundbreaking climate projections by Dr. Alex Hall and the UCLA Department of Atmospheric and Oceanic Sciences, the LASER maps show that by the middle of the century, LA's urban core will experience three times as many extreme heat days (95 degrees Fahrenheit and above) as it does now, and the valleys will experience four times as many.

However, opportunities for cutting energy usage in LA are rife. Despite being famed for its sunny climate, the citizens of LA are currently leaving around 98 percent of their rooftop solar potential untapped. Harnessing just 10 percent more would lead to the creation of 47,000 more solar installation jobs and could reduce carbon pollution by 2.5 million tons annually. LASER's maps show that some of the communities most vulnerable to air pollution and extreme heat, such as East Los Angeles, South Los Angeles, and the San Fernando Valley, also have some of the highest potential for job creation through rooftop solar installation and energy efficiency, which both also reduce climate and air pollution.
The maps offer a powerful insight for communities into exactly how they can cut pollution and energy use. As Madrid concludes, "Information is power. It's not just for the data crunchers, the politicians, or even the climate nerds; it's for everyone and can start a conversation that everyone can access".
<urn:uuid:b6bac336-73ed-413c-b276-ee5a2e6f611f>
CC-MAIN-2022-40
https://dataconomy.com/2014/07/maps-show-big-data-can-fight-climate-change/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335034.61/warc/CC-MAIN-20220927131111-20220927161111-00740.warc.gz
en
0.939132
478
2.765625
3
NASA will end the Earth science mission of the U.S.-German Gravity Recovery and Climate Experiment (GRACE) satellite duo in November after 15 years' worth of service in space, Space News reported Wednesday. The GRACE satellite duo will conduct a final science collection mission in mid-October to early November prior to its decommissioning.

Alan Buis, spokesman for the NASA Jet Propulsion Laboratory, said that the decommissioning activities of the GRACE spacecraft will involve steps to make the satellites inert. Buis added that GRACE will begin its uncontrolled reentry to Earth in early 2018 and most of the spacecraft will burn up during its descent, with only a few small pieces anticipated to reach the ground. The report noted that the GRACE satellites have faced various challenges including connection losses, power failures and potential fuel depletion.

NASA will launch the GRACE Follow-On pair of replacement satellites, which feature an integrated laser interferometer designed to boost the accuracy of separation measurements. Iridium signed a rideshare agreement with SpaceX in January to launch the NASA and GFZ German Research Center for Geosciences' GRACE-FO satellites alongside five Iridium NEXT satellites in early 2018.
<urn:uuid:477c9918-162b-4753-ae04-f6d7960fb595>
CC-MAIN-2022-40
https://executivegov.com/2017/09/nasa-to-decommission-grace-earth-science-satellite-duo-alan-buis-comments/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335034.61/warc/CC-MAIN-20220927131111-20220927161111-00740.warc.gz
en
0.933625
240
2.921875
3
All security vulnerabilities are the result of human error. Most web application vulnerabilities and API security issues are introduced by developers. Therefore, the best approach to building secure applications is to do all that is possible to avoid introducing such errors in the first place instead of fixing them.

You can find several detailed guides on how to create secure code during application development, for example, the one provided by the Open Web Application Security Project (OWASP). They focus on such details as input validation, output encoding, access control, communication security, data protection, cryptographic practices, error handling, the principle of least privilege, etc. Instead, we would like you to look at this software security issue from a strategic point of view.

Principle 1: Spread awareness and educate

In most cases, developers introduce security risks into the source code simply because they are not aware of such risks. While universities often focus on teaching such details as formal verification, many of them do not offer dedicated courses on cybersecurity and don't even mention topics such as injection attacks or cross-site scripting (XSS). This is especially the case for older developers who took such courses several years ago, when there was no hype about security yet.

Universities also teach a limited number of programming languages, so developers are in most cases self-taught, and some security problems are very specific to the programming language. For example, you won't find a risk of buffer overflows in Java or C#. Even if a course teaches a language in detail, it rarely focuses on coding best practices related to application security in that language.
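To make the injection attacks mentioned above concrete, here is the classic mistake and its fix, sketched with Python's built-in sqlite3 module (the table and data are invented for the example):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 0), ('bob', 1)")

attacker_input = "alice' OR '1'='1"

# Vulnerable: user input concatenated straight into the SQL string.
# The injected OR clause makes the WHERE condition always true.
unsafe = conn.execute(
    "SELECT name FROM users WHERE name = '" + attacker_input + "'"
).fetchall()
print(unsafe)  # every row in the table leaks

# Safe: a parameterized query treats the input as data, never as SQL.
safe = conn.execute(
    "SELECT name FROM users WHERE name = ?", (attacker_input,)
).fetchall()
print(safe)    # [] -- no user is literally named "alice' OR '1'='1"
```

A developer who has never seen this pattern has no reason to suspect the first version; that is exactly the awareness gap this principle is about.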
To make sure that your software development teams don't make mistakes due to lack of awareness, understanding, or gaps in education, you must approach the issue strategically:
- Your development managers must not only be aware of security risks but they must be the driving force behind security. A developer with no security awareness can be educated but a development manager who does not realize the importance of security will never become the security leader.
- Don't make any assumptions as to developer knowledge. Validate it first and if it's not sufficient, provide in-house or external training sessions dedicated strictly to secure coding standards. It's not the best idea to absolutely demand security knowledge from new hires because this will limit your recruitment capabilities greatly and developers can easily learn as they progress.
- Do realize that no matter how well your developers understand security, new techniques and attacks appear very often due to the speed with which technology is progressing. Some of these techniques require very specific security knowledge that can only be expected from someone with a full-time security-related position. Expect your developers to make mistakes and don't punish them.
- Don't keep your development teams separated from your security teams. The two should work very closely together. Developers can learn a lot from security professionals.
- Don't assume that the nature of your software reduces your security requirements in any way. For example, even if your web application is not accessible publicly but only to authenticated customers, it should be just as secure as a public one. In general, don't go for any excuses.

Principle 2: Introduce multiple layers of verification

Even the most aware and best-educated developers still make mistakes, so simply trusting them to write secure code is not enough.
You need automatic auditing tools that work in real-time during development to help them realize their mistakes and follow up with suitable mitigation. In an ideal situation, software should be tested using the following tools and methods:

- A code analysis tool that is built into the development environment. Such a tool prevents basic errors immediately as the developer is typing in the code.
- A SAST (static application security testing) solution that works as part of the CI/CD pipeline. Such a solution analyzes the source code before it is built and points out potential software vulnerabilities. Unfortunately, SAST has a lot of disadvantages, including a high level of false positives.
- An SCA (software composition analysis) solution that works as part of the CI/CD pipeline. Since most code nowadays comes not directly from your developers but from open-source libraries that they use, you have to help them make sure that they are using secure versions of such libraries. Otherwise, you will have ticking-bomb vulnerabilities just waiting to explode.
- A DAST (dynamic application security testing) solution that works as part of the CI/CD pipeline. Such a solution analyzes the application at runtime (after you compile it – with no access to the source code) and points out real security vulnerabilities. In the case of such software, performance is very important (scans are very intensive) and so is the certainty that reported errors are real (proof-of-exploit).
- Additional manual penetration testing for errors that cannot be discovered automatically, for example, business logic errors. However, this requires specialized security personnel and takes a lot of time, so it's often performed only in the last stages of the software development life cycle (SDLC).

However, early security testing takes a lot of time and resources. Therefore, a compromise is often needed between the time and effort required to perform tests and the quality of the results.
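The SCA idea above — flagging dependencies whose installed versions predate a known fix — can be sketched in a few lines. The advisory data here is invented for illustration; real tools pull it from vulnerability databases:

```python
# Minimal software-composition-analysis check: compare installed
# dependency versions against an advisory list (illustrative data only).

def parse(version):
    # Turn "2.4.1" into (2, 4, 1) so versions compare numerically.
    return tuple(int(p) for p in version.split("."))

# Hypothetical advisories: package -> first fixed version.
ADVISORIES = {
    "examplelib": "2.4.1",   # fixed in 2.4.1; anything older is vulnerable
    "jsonthing": "1.0.9",
}

def audit(installed):
    """Return (name, installed, fixed) for every package that predates its fix."""
    findings = []
    for name, version in installed.items():
        fixed = ADVISORIES.get(name)
        if fixed and parse(version) < parse(fixed):
            findings.append((name, version, fixed))
    return findings

installed = {"examplelib": "2.3.0", "jsonthing": "1.2.0", "other": "0.1.0"}
for name, have, need in audit(installed):
    print(f"{name} {have} is vulnerable; upgrade to >= {need}")
```

Run against the sample inventory, only `examplelib 2.3.0` is reported; `jsonthing 1.2.0` is already past its fixed version and `other` has no advisory at all.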
If such a compromise is required, selecting a fast DAST scanner that provides proof-of-exploit and comes with SCA functionality is the best choice.

Principle 3: Test as early as possible to promote responsibility

To attain top code quality, it's not enough to have secure coding requirements and secure coding guidelines in place along with a test infrastructure. Teams must not only feel obliged to follow secure coding principles during the development process because their code will be tested, but they must also feel that writing secure code is in their best interest. Secure coding doesn't just need rules and enforcement, it needs the right attitude. A shift-left approach, such as the one described above, has many advantages, one of them being that developers realize that they're an integral part of the security landscape. They feel responsible for code security and realize that if they make a mistake, they are going to have to fix it immediately and not count on someone else doing it later instead. Of course, you can test your application for security vulnerabilities just before it goes into production or even in production (shift right). However, it will cost you much more than it would if you shifted left. The software will have to go through all stages again, which involves other resources, not only developers. The developer won't remember the code that they worked on, or the fix may be assigned to a different developer than the original one; as a result, the developer will need more time to find and remove the vulnerability. As a consequence, late testing may delay the release even by several weeks.

Not just security policies

In conclusion, we would like you to realize that security policies, while necessary, are not enough if they are perceived as a limitation, not an enhancement. Security begins with the right attitude when building applications.
And even the best tools used to maintain security must be used in the correct way in the process so that they are perceived as helpful, not as a burden.
<urn:uuid:9c8cc7f6-2711-4dfe-81b6-70afd8ceee64>
CC-MAIN-2022-40
https://www.acunetix.com/blog/web-security-zone/secure-coding-practices/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337360.41/warc/CC-MAIN-20221002212623-20221003002623-00740.warc.gz
en
0.952162
1,501
2.671875
3
What Is XAI? Why and How Transparency Increases the Success of AI Solutions

Artificial Intelligence (AI) is increasingly popular in business initiatives in the healthcare and financial services industries, amongst many others, as well as in corporate business functions such as finance and sales. Did you know that startups in AI raise more money than those outside of AI? And that within the next decade, every individual is expected to interact with AI-based technology on a daily basis? AI is a technology trend related to any development to automate or optimize tasks that have traditionally required human intelligence. Experts in the industry prefer to nuance this broad definition of AI by distinguishing machine learning, statistics, IT, and rule-based systems. The availability of huge amounts of data and more processing power — not major technological innovations — make machine learning for better predictions the most popular technique today. However, I will argue that other AI techniques are equally important. The consequences of AI innovations for humanity have been huge and were, at the time, difficult to foresee. There were pioneers, visionaries, investments, and failures needed to get us to where we are today. I am so grateful for the results. Every day I use a computer, a smartphone, and other technology to provide me travel advice, ways to socialize, recommendations on what to do or buy, and other new knowledge. Many of these innovations are related to technology developed by researchers in Artificial Intelligence, and the full potential has not yet been realized.

But there are also concerns

Artificial intelligence solutions are accepted to be a black box: they provide answers without an explanation, like an oracle. You may already have seen the results in our society: AI is said to be biased; governments raise concerns about the ethical consequences of AI, and regulators require more transparency.
We should be embracing the potential improvements that AI can bring to improve human decision-making in companies, but instead, people have become skeptical about AI technology — not only because they fear for losing their jobs but also because, as the experts, they are aware of all the uncertainties that surround their work. How can an AI algorithm deal with these aspects? Examples of AI biases AI systems have been demonstrated to be prejudiced based on gender (promoting males for job offers) and biased based on ethnicity (classifying pictures of black people as gorillas). These biases are a result of the data used to train the algorithms — containing fewer female job seekers and more pictures of non-colored people. Let's not forget that this data is created and selected by humans who are biased themselves. Perhaps you need to make choices and guide your company to compete using AI. What approach could you follow without losing the trust of your own employees or customers? Figure 1. Hype Cycle for Emerging Technologies. Now that AI technology is at the peak in the hype cycle for emerging technologies (see Figure 1), more conservative businesses want to use the benefits of AI-based solutions in their operations. However, they require an answer to some or all of these above mentioned concerns. Figure 2. Deployment of AI Initiatives in 2018. To benefit from the potential of AI, the resulting decisions must be explainable. For me this is a no-brainer since I have been promoting transparency in decision-making for years using rule-based technology. In my vision, a decision-support system needs to be integrated in the value cycle of an organization. Business stakeholders should feel responsible for the knowledge and behavior of the system and confident of its outcome. This may sound logical and easy, but everyone with experience in the corporate world knows it is not. 
The gap between business and IT is filled with misunderstandings, as well as differences in presentation and expectations. It takes two to tango. The business — represented by subject matter experts, policy makers, managers, executives, and sometimes external stakeholders or operations — should take responsibility using knowledge representations they understand, and IT should create integrated systems directly related to the policies, values, and KPIs of a business. Generating explanations for decisions plays a crucial role. We should do the same for AI-based decisions: choose AI technology when needed, and use explanations to make it a success; that is, explainable AI — known by the acronym 'XAI'.

Five reasons why XAI solutions are more successful

Five reasons why XAI solutions are more successful than an 'oracle' based on AI (or any black box IT system) are as follows:

- Decision support systems that explain themselves have a better return on investment because explanations close the feedback loop between strategy and operations, resulting in timely adaptation to changes, longer system lifetime, and better integration with business values.
- Offering explanations enhances stakeholder trust because the decisions are credible for your customer and make your business accountable towards regulators.
- Decisions with explanations become better decisions because the explanations show (unwanted) biases and help to include missing, commonsense knowledge.
- It is feasible to implement AI solutions that generate explanations without a huge drop in performance with the six-step method that I developed and technology expected from increased research activity.
- It is preparation for the increased demand for transparency based on concerns about the ethics of AI and its effect on the foundations of a democratic society.
In my upcoming book (available on Amazon) entitled "AIX: Artificial Intelligence needs eXplanation," I detail each of the above reasons, with examples and practical guidance. This will provide you with a good understanding of what it takes to explain solutions that support or automate a decision task and the value that explanations add to your organization.

Silvie Spreeuwenberg, "What is XAI? How to Apply XAI," Business Rules Journal, Vol. 20, No. 7, (Jul. 2019), URL: http://www.brcommunity.com/a2019/c002.html
<urn:uuid:6d9309ee-e2b1-4484-9a30-5f3cfdccb94f>
CC-MAIN-2022-40
https://www.brcommunity.com/articles.php?id=c014
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337360.41/warc/CC-MAIN-20221002212623-20221003002623-00740.warc.gz
en
0.94919
1,287
3.140625
3
Our brains are littered with passwords and alphanumeric combinations that span all levels of necessary corporate and personal security—from bank accounts and PINs, to work-related e-mail and network log-ons, to e-commerce and social networking sites. A neurological researcher who is fully appreciative of the dizzying increase in passwords and other things to memorize, however, argues that we all can remember much more if we practice visualizing the information we want to recall. But first: What should we actually try to commit to memory these days? It seems like a legitimate question. (Personally, here's how many computer-related passwords I can remember off the top of my head: three. I figure I have a total of 50 or so passwords which I need to recall during a typical month.) Recent research from Ian Robertson, professor of psychology at the Institute of Neuroscience and School of Psychology at Trinity College in Dublin, Ireland, illustrates the growing amount of alphanumeric clutter in our heads: the average person now has to remember five passwords, five PIN numbers, two number plates, three security ID numbers and three bank account numbers just to get through everyday life. A 2007 study of Web users by Microsoft Research found that the average user has 6.5 Web passwords, each of which is shared across almost four different websites. In addition, each user has about 25 accounts that require passwords, and types an average of 8 passwords per day.

Too Much To Remember?

Not surprisingly, Robertson's research found that nearly 60 percent of those studied felt like they couldn't possibly remember all of these numbers and letters that they were supposed to.
A consequence of this "information overload" was that most users today create weak passwords (dog's or child's name usually top the list) or rely heavily on technology to create or store the alphanumeric data. Vendor solutions to the password problem, such as Passface's facial-recognition technology, abound. Researchers, like those at Microsoft, have explored the value of tech-assisted visual aids, like digital ink blots. And security experts such as Bruce Schneier have laid out their own strategies. (See CSO's take: "How to Write Good Passwords.") To which Robertson responds: "People are incapable [of remembering passwords] because of the particular ways they have been taught to remember," he says. "We can use our brains much more than we do. And if we could be bothered, we could happily remember two dozen passwords using some fairly standard memory methods."

Flexing the Brain's Memory Muscles

The brain is just like any other part of our body when it comes to use, Robertson contends: "Use it or lose it." His recent study showed the generational differences and how the brain can seemingly atrophy. In Robertson's survey, for example, almost a third of those under 30 couldn't remember their home telephone number, which was usually stored on their mobile device or on a piece of paper. "I was astonished that a significant percentage of people didn't know their own cell phone number or landline number without looking it up," Robertson says. "And this was much more so of younger people, who are reliant on technology, and that's leading to the underuse of certain areas of the brain." (A recent article in The Atlantic, by Nicholas Carr, asked if Google was making us stupid—was the Internet negatively influencing our brain's processing abilities, especially in how we read?)
The under-30 generation fared even worse with important dates, such as the birthdays of close family members: 87 percent over 50 could remember the details, compared with just 40 percent of those under 30. Other research proves the idea that the brain needs to be "exercised" to make it stronger. In 2007, Stanford University researchers discovered that "the brain's ability to suppress irrelevant memories makes it easier for humans to remember what's really important," notes a Stanford News Service article. For example, passwords that have to be changed every six months are an opportunity to forget an old one and remember a new one. "The extent to which these brain mechanisms weaken the old password, then they don't have to be used as much in future attempts to remember the new one," says Anthony Wagner, a professor in Stanford's psychology department. "From a neural standpoint, forgetting the old password makes the brain more efficient."

How to Visualize Passwords

Robertson offers one somewhat easy way to remember numerical-based passwords, using what he calls visual imagery. ("This is not mine but a longstanding method," he adds.) First, you need to create an easy-to-recall rhyming word for each number, one through 10. "One is bun, two is shoe, three is tree, four is door, five is hive, six is sticks, seven is heaven, eight is gate, nine is wine and 10 is hen," Robertson offers. So if, say, your code is 6329, you would first visualize a pile of sticks (for six) that are then spread all around a tree (three), and then there's a shoe (two) hanging on the tree, and lastly a glass of wine (nine) is pouring over the tree. "If you care to spend a few minutes to do that and assemble the image, then the very act of doing that will make it very easy for you to remember that number," Robertson says. "The first few times will be time consuming and labor intensive.
But if you get into a habit, you could remember two or three dozen visual images.” The same approach can be used for alphanumeric passwords—letters receive an image: A is apple, B is bee, C is cat and so on. “The links there embed themselves in the brain much more deeply and widely, such that you will remember that image much more readily than you will remember the verbal encoding” of a password, Robertson says. Robertson is clear that technology itself is not necessarily a bad thing, however. He says that the jury is still out on whether the tasks that we are presenting the brain today are not having some benefit in ways we don’t appreciate right now. “It may be that that underuse is more than compensated for by the incredibly complex and demanding Grand Theft Auto and other computing games played today,” he says. “It might be that much more valuable parts of the brain are being stimulated. “I’m just saying, just as a bit of a warning signal to the younger generation,” Robertson adds, “as more and more cognitive aids and technologies come out, is to realize that if you don’t use your memory you won’t be able to remember.”
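Robertson's digit-to-image scheme is mechanical enough to express in code. The sketch below (our illustration, not part of the original article) turns a numeric code into the chain of rhyming peg images to visualize:

```python
# Map each digit to Robertson's rhyming peg word ("one is bun, two is shoe...").
# Robertson pairs "ten is hen"; using 0 to stand in for 10 is our assumption.
PEG_WORDS = {
    "1": "bun", "2": "shoe", "3": "tree", "4": "door", "5": "hive",
    "6": "sticks", "7": "heaven", "8": "gate", "9": "wine", "0": "hen",
}

def images_for(code):
    """Return the sequence of peg images for a numeric code."""
    return [PEG_WORDS[d] for d in code]

def story_for(code):
    """Join the images into a single scene to visualize."""
    return " -> ".join(images_for(code))

print(story_for("6329"))   # sticks -> tree -> shoe -> wine
```

For Robertson's example code 6329 this yields exactly the sticks, tree, shoe, and wine that he weaves into one scene; the point of the method is that assembling the scene, not running the lookup, is what anchors the number in memory.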
<urn:uuid:355d96c0-71ee-406a-a151-906442d34bb5>
CC-MAIN-2022-40
https://www.cio.com/article/276252/it-strategy-password-brain-teaser-too-many-passwords-or-not-enough-brain-power.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337480.10/warc/CC-MAIN-20221004054641-20221004084641-00740.warc.gz
en
0.949708
1,523
2.65625
3
This article is intended for security specialists operating under a contract; all information provided in it is for educational purposes only. Neither the author nor the Editorial Board can be held liable for any damages caused by improper usage of this publication. Distribution of malware, disruption of systems, and violation of secrecy of correspondence are prosecuted by law.

All major corporate networks use dynamic routing. Routers on the network exchange routing information with each other automatically so that the network admin doesn't have to set up all their routes manually. In most cases, the admin doesn't configure the protective mechanisms at all. And this opens the way for exploitation. Both OSPF and EIGRP belong to the class of interior gateway protocols (IGP). Such protocols are used to transmit routing information within the same Autonomous System (AS). For convenience purposes, you can imagine it as a network of some organization.

Problems, impact, and weapon

OSPF (Open Shortest Path First) belongs to a type of protocols based on link state tracking. Attacking OSPF is a bit more difficult in comparison with its relative EIGRP. Two things can complicate an attack:

- Multiple OSPF zones. Engineers can design an OSPF routing domain with multiple zones to reduce the load on computing resources of the routers. This must be taken into account when you penetrate into an OSPF routing domain. Also keep in mind that packets can be transmitted between these zones, for instance, if you are going to perform a route injection.
- Lack of response to queries. To connect to a routing domain, the attacker must ensure that the fake router generates and receives Hello messages from neighbors and simulates the establishment of neighborship with them. Otherwise, the fake router will be recognized as 'dead' and excluded from the routing domain, thus making it impossible for the attacker to take further steps.
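The Hello timers and authentication settings a fake router has to mimic sit at fixed offsets in an OSPFv2 packet, so they can be pulled out of a capture with nothing but the standard library. The sketch below is our illustration (field layout per the OSPFv2 packet format in RFC 2328), parsing the header and Hello body from raw bytes of an invented sample packet:

```python
import struct
from ipaddress import IPv4Address

AUTH_TYPES = {0: "none", 1: "simple password", 2: "cryptographic (MD5)"}

def parse_ospf_hello(data):
    """Parse the fixed fields of an OSPFv2 header plus Hello body."""
    (version, ptype, length, router_id, area_id,
     checksum, autype) = struct.unpack_from("!BBH4s4sHH", data, 0)
    if version != 2 or ptype != 1:          # type 1 = Hello
        raise ValueError("not an OSPFv2 Hello packet")
    # The Hello body starts after the 24-byte header (the last 8 header
    # bytes are the authentication field, which we skip here).
    (netmask, hello_int, options, priority,
     dead_int) = struct.unpack_from("!4sHBBI", data, 24)
    return {
        "router_id": str(IPv4Address(router_id)),
        "auth": AUTH_TYPES.get(autype, str(autype)),
        "hello_interval": hello_int,
        "dead_interval": dead_int,
    }

# A crafted sample Hello: router 10.1.1.2, area 0, no auth, timers 10/40.
sample = (
    struct.pack("!BBH4s4sHH8s", 2, 1, 44,
                IPv4Address("10.1.1.2").packed, b"\x00" * 4,
                0, 0, b"\x00" * 8)
    + struct.pack("!4sHBBI4s4s",
                  IPv4Address("255.255.255.0").packed, 10, 0x12, 1, 40,
                  b"\x00" * 4, b"\x00" * 4)
)
print(parse_ospf_hello(sample))
```

In practice the raw bytes would come from a capture of the 224.0.0.5 multicast traffic; the extracted hello/dead intervals and auth type are exactly the parameters the rogue router must copy to be accepted as a neighbor.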
The best way to attack an OSPF domain is to gain control over a legitimate router on the network. Alternatively, you can create an 'evil' virtual router on your side and connect to the domain. However, to do this, you have to parse OSPF multicast packets and examine the following parameters in the packet:

- OSPF Hello Interval;
- OSPF Dead Interval; and
- presence of authentication.

Even if OSPF is protected by authentication, and the passwords are stored as MD5 hashes, there is still a chance for the attacker to guess them.

EIGRP (Enhanced Interior Gateway Routing Protocol) belongs to the class of distance-vector protocols. It was developed by Cisco Systems as a replacement for the IGRP protocol. EIGRP routers exchange routing information using the Diffusing Update Algorithm (DUAL) to plot routes within the same AS. EIGRP stores all available backup routes to destination networks in the routing table, which makes it possible to quickly switch to a backup route if necessary.

Did you know that EIGRP is no longer a Cisco proprietary protocol and it's now open to other network equipment vendors? EIGRP became an open standard in 2013, and the respective documentation, RFC 7868, was published in 2016.

To connect to an EIGRP routing domain, the attacker must know the autonomous system number. It can be extracted, for instance, from a traffic dump using Wireshark. Unlike an OSPF domain that can be divided into multiple routing zones, an EIGRP autonomous system is flat: if you make a route injection, your route will likely spread throughout the entire domain.

Attacks on dynamic routing can be divided into three types:

- Network enumeration. If an attacker connects to the routing domain, they can perform network reconnaissance and discover some subnets. This is a rather useful trick since classical scans (e.g. with Nmap) take a long time. In addition, they can trigger IDS/IPS security systems.
After connecting to a routing domain, you can collect information about subnets advertised by neighboring routers. Note that this method doesn't guarantee the detection of all subnets in the organization. But it can bring you an easier win when you conduct a pentesting study.

- MITM (man in the middle). In essence, you inject a route to intercept traffic from the target host or network. After connecting to the routing domain, you can make an advertisement in the domain that literally looks as follows: "Dear all, the host whose IP address is 192.168.1.43/24 is now accessible via me, 192.168.1.150." The routers in the domain will accept the new information and add the route you have advertised to the routing table. Important: routers use metrics to make routing decisions. If your route is worse than others in terms of path cost, it won't be included in the routing table. Why? Because the routing table stores only the best paths to destination networks.
- DoS (Denial of Service). Routing table overflow: all the router's CPU and RAM resources are depleted. If the routing table is overflowed, it becomes impossible to add a new legitimate route to it. The router won't be able to add to its table the route to a new network that has just appeared.

Lethal weapon: FRRouting

FRRouting is an open source solution that creates a virtual router in Unix/Linux. The virtual router supports such protocols as BGP, OSPF, EIGRP, RIP, etc. Using FRRouting, you can deploy a 'rogue' router on your side, start routing, and connect to the target routing domain. Why is this required? Because, if you perform a route injection without joining the domain and establishing a neighborship, the routes you advertise won't be included in the neighbors' routing tables. Instead, they will just disappear. I strongly recommend reviewing the FRRouting documentation; it addresses all important aspects, including installation and configuration.
First, you have to enable daemons in the daemons configuration file. You will need the ospfd and eigrpd daemons. It's also necessary to enable the staticd daemon to ensure that custom static routes are redistributed correctly. Then you set a password to connect to the router control panel via VTY lines. Finally, you enable traffic forwarding. By default, it's disabled in Linux distributions.

sudo sysctl -w net.ipv4.ip_forward=1
systemctl start frr

Using the vtysh command, you access the control panel of the FRRouting virtual router. The networks shown below will be used as test polygons. In the context of an attack on OSPF, I will examine a route injection with subsequent traffic interception. With regards to EIGRP, I will explain how to deliver a destructive attack involving a routing table overflow (to avoid demonstrating the same attack). Thus, two attack variants will be addressed. OSPF can be attacked in the same way as EIGRP. However, keep in mind that destructive attacks are less practical in terms of production. Perhaps such a scenario can be useful for the Red Team as a diversionary maneuver…

Route injection and traffic interception in an OSPF domain

To successfully perform route injection, you have to connect to the OSPF routing domain and advertise the network. Specify area 0.

c0ldheim@kali:~$ sudo vtysh
kali# conf t
kali(config)# router ospf
kali(config-router)# network 172.20.20.50/32 area 0.0.0.0

Enable redistribution of static routes with the lowest metrics so that the injected route has the lowest cost.
kali(config-router)# redistribute static metric 0

Advertise a static route in the OSPF domain: "The host whose IP address is 172.20.20.20 is now accessible via me, 172.20.20.50."

kali(config)# ip route 172.20.20.20/32 eth0

Checking the routing table on router R2:

R2#show ip route
Codes: L - local, C - connected, S - static, R - RIP, M - mobile, B - BGP
       D - EIGRP, EX - EIGRP external, O - OSPF, IA - OSPF inter area
       N1 - OSPF NSSA external type 1, N2 - OSPF NSSA external type 2
       E1 - OSPF external type 1, E2 - OSPF external type 2
       i - IS-IS, su - IS-IS summary, L1 - IS-IS level-1, L2 - IS-IS level-2
       ia - IS-IS inter area, * - candidate default, U - per-user static route
       o - ODR, P - periodic downloaded static route, H - NHRP, l - LISP
       a - application route
       + - replicated route, % - next hop override, p - overrides from PfR

Gateway of last resort is not set

      10.0.0.0/8 is variably subnetted, 2 subnets, 2 masks
C        10.1.1.0/24 is directly connected, GigabitEthernet0/0
L        10.1.1.2/32 is directly connected, GigabitEthernet0/0
      172.20.0.0/16 is variably subnetted, 4 subnets, 2 masks
C        172.20.20.0/24 is directly connected, GigabitEthernet0/2
O E2     172.20.20.20/32 [110/0] via 172.20.20.50, 00:02:27, GigabitEthernet0/2
L        172.20.20.254/32 is directly connected, GigabitEthernet0/2
O        172.20.30.0/24 [110/2] via 10.1.1.3, 01:08:17, GigabitEthernet0/0
R2#

As you can see, the route injection was successful. R2 now believes that the host at 172.20.20.20 is accessible via your attacking machine. Next, let's try to connect from the DevOps host to the FTP server at 172.20.30.100. As a result, you have penetrated between the host and the FTP server and intercepted the unencrypted credentials.

Route injection and routing table overflow in an EIGRP domain

First, you connect to an autonomous EIGRP system and advertise a network.
c0ldheim@kali:~$ sudo vtysh
kali# conf t
kali(config)# router eigrp 1
kali(config-router)# network 172.20.20.50/32

This time, I am going to use the Scapy networking library to inject EIGRP routes.

c0ldheim@kali:~$ sudo scapy3
>>> from scapy.contrib.eigrp import * # Import module to work with EIGRP headers
>>> frame = Ether(dst="01:00:5e:00:00:0a") # Build Ethernet frame with the MAC address of the multicast advertisement destination
>>> ip = IP(src="172.20.20.50", dst="224.0.0.10") # Build IP packet with the IP address of the multicast EIGRP advertisement destination
>>> eigrp = EIGRP(opcode=1, asn=1, seq=0, ack=0, tlvlist=[EIGRPExtRoute(dst=RandIP(), nexthop="172.20.20.50")]) # Build EIGRP packet with the Update option
>>> crafted = frame/ip/eigrp # Assemble the three layers together
>>> sendp(crafted, loop=1, iface="eth0") # Put on a loop the transmission of the assembled malicious EIGRP packet

If you check the router control panel, you'll see that the CPU load has increased significantly, reaching 87%.

R2#show ip route summary
IP routing table name is default (0x0)
IP routing table maximum-paths is 32
Route Source    Networks    Subnets     Replicates  Overhead    Memory (bytes)
connected       0           6           0           408         1080
static          1           0           0           68          180
application     0           0           0           0           0
eigrp 1         481         3088        0           328348      642420
internal        1192                                            358080
Total           1674        3094        0           328824      1001760

If the routing table is overflowed, the router cannot add new routes to its routing table.

How to prevent attacks on routing domains

Use passive interfaces. Configuring passive interfaces in the dynamic routing context prevents the router from sending advertisements via certain interfaces. By default (i.e. if you don't configure passive interfaces), it sends advertisements to all interfaces, which puts the routing domain at great risk. A legitimate user on the network can deploy a virtual router in the same way as shown above and attack the routing domain. In this article, I stick to the Cisco IOS CLI principles and commands.
A configuration example for OSPF:

R2#conf t # Enter global configuration mode
R2(config)# router ospf 1 # Enter the OSPF configuration mode as process 1
R2(config-router)# passive-interface GigabitEthernet 0/2 # Make the interface passive

An example for EIGRP:

R2#conf t # Enter global configuration mode
R2(config)# router eigrp 1 # Enter the EIGRP configuration mode in autonomous system 1
R2(config-router)# passive-interface GigabitEthernet 0/2 # Make the interface passive

Use authentication. The use of authentication in routing domains ensures that only authorized, legitimate routers can log in. Authentication involves passwords. If you want to secure the routing domain using authentication, make sure the passwords you use are strong enough. Keep in mind that they are hashed using cryptographic hash functions, and the attacker can read hash values from the traffic dump and brute-force a password. Having a password, the attacker can easily connect to the routing domain.

Configuring authentication for OSPF using MD5:

R2#conf t # Enter global configuration mode
R2(config)# interface GigabitEthernet 0/1 # Enter interface configuration mode
R2(config-if)# ip ospf authentication message-digest # Enable MD5 authentication
R2(config-if)# ip ospf message-digest-key 1 md5 y0ur_f4ult # Set password with key-id 1

Configuring authentication for EIGRP using a key chain and MD5:

R2#conf t # Enter global configuration mode
R2(config)# key chain SecureRouting # Create a key chain called SecureRouting
R2(config-keychain)# key 1 # Create first key
R2(config-keychain-key)# key-string y0ur_f4ult # Set password
R2(config-keychain-key)# accept-lifetime 20:00:00 mar 1 2022 20:00:00 mar 2 2022 # Specify period of time during which the router will accept this key from a neighbor
R2(config-keychain-key)# send-lifetime 20:00:00 mar 1 2022 20:00:00 mar 2 2022 # Specify period of time during which the router will send this key to a neighbor
R2(config-keychain)# key 2 # After the expiry of the first key, the second key will be used automatically. Create second key
R2(config-keychain-key)# key-string y0ur_des1re # Set password
R2(config-keychain-key)# accept-lifetime 20:00:00 mar 2 2022 20:00:00 mar 3 2022 # Specify period of time during which the router will accept this key from a neighbor
R2(config-keychain-key)# send-lifetime 20:00:00 mar 2 2022 20:00:00 mar 3 2022 # Specify period of time during which the router will send this key to a neighbor
R2(config)# interface GigabitEthernet 0/1 # Enter interface configuration mode
R2(config-if)# ip authentication mode eigrp 1 md5 # Enable MD5 authentication for autonomous system EIGRP 1
R2(config-if)# ip authentication key-chain eigrp 1 SecureRouting # Specify key chain to be used for authentication in autonomous system EIGRP 1

This article analyzes attack scenarios targeting the OSPF and EIGRP dynamic routing protocols. Based on my personal pentesting experience, in most cases, network admins don't configure protection mechanisms embedded in these protocols. The most common OSPF/EIGRP configuration neither includes authentication nor has passive interfaces configured. This puts the security of the local network at great risk. Hopefully, the above information will be of interest both for pentesters (who can implement these attacks) and network admins (who can boost the security of their routing domains).
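The brute-force risk mentioned above can be illustrated with the standard library alone. OSPF cryptographic authentication computes MD5 over the packet followed by the secret key zero-padded to 16 bytes and ships the digest in the trailer, so an attacker holding a capture can test dictionary words offline. The sketch below is a simplified illustration of that scheme, with invented packet bytes and wordlist:

```python
import hashlib

def ospf_md5_digest(packet, key):
    # OSPF cryptographic auth: MD5 over the packet followed by the
    # secret key zero-padded to 16 bytes (simplified; see RFC 2328, D.4.3).
    return hashlib.md5(packet + key.encode().ljust(16, b"\x00")).digest()

def crack(packet, digest, wordlist):
    """Try each candidate password against a captured packet/digest pair."""
    for word in wordlist:
        if ospf_md5_digest(packet, word) == digest:
            return word
    return None

# Invented capture: some packet bytes plus the digest made with the real key.
captured_packet = bytes.fromhex("0201002c0a010102") + b"\x00" * 16
captured_digest = ospf_md5_digest(captured_packet, "y0ur_f4ult")

wordlist = ["admin", "cisco123", "y0ur_f4ult", "letmein"]
print(crack(captured_packet, captured_digest, wordlist))  # y0ur_f4ult
```

A weak key like the one above falls to a small wordlist in microseconds, which is exactly why strong keys and regular key rotation via key chains matter even when MD5 authentication is enabled.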
The European Commission hopes to guarantee that commonly used connected appliances are less susceptible to cyberattacks by requiring manufacturers to bolster security during their entire lifecycles. The Cyber Resilience Act, unveiled on Thursday in Brussels, seeks to make the EU a global leader by mandating cybersecurity standards for all products with digital components, also known as the Internet of Things, and by educating consumers about the cybersecurity implications of the products they purchase. “When it comes to cybersecurity, Europe is only as strong as its weakest link: be it a vulnerable Member State, or an unsafe product along the supply chain,” Commissioner for the Internal Market Thierry Breton stated. He further said that hundreds of millions of connected devices, such as computers, phones, home appliances, virtual assistance devices, vehicles, toys, etc., can potentially be a point of entry for a cyberattack, yet the majority of hardware and software products today are exempt from any cybersecurity requirements. The Cyber Resilience Act will assist in safeguarding Europe’s economy and our collective security by establishing cybersecurity by design. Data from the Commission reveals that a ransomware attack occurs every 11 seconds. In 2017, the cost of cybercrime was predicted to reach €5.5 trillion globally, with the economic effect of ransomware attacks totaling roughly €20 billion worldwide. Under the Commission’s proposals, manufacturers will be required to consider cybersecurity when designing and developing their products. They will also need to ensure that any vulnerabilities are effectively addressed for the anticipated product lifetime or five years, whichever is shorter. Along with providing security upgrades for at least five years and “clear and understandable instructions” for using items with digital components, they will also need to actively report exploited vulnerabilities and incidents.
Manufacturers violating the law will risk having their goods permanently or temporarily banned from the Single Market and a punishment ranging from 2% to 5% of worldwide sales. After receiving final approval from Parliament and the Council, the proposed legislation must be implemented two years later. Ursula Pachl, deputy director general of the consumers’ umbrella organization, The European Consumer Organisation (BEUC), hailed the plan as “really good news for consumers.” She said that insufficient cybersecurity on connected devices, like smart door locks, baby monitors, toys, washing machines, and fridges, may be a big concern for our society and economy because key infrastructure may quickly be disrupted if anything gets hacked. The Commission’s decision to finally put this idea on the table is crucial. In response to the Commission’s proposals, MEP Dr. Patrick Breyer of the German Pirate Party said that it is long overdue to finally hold commercial manufacturers accountable for the threat posed by “insecure technology.” He also called for a change, claiming that the concept is flawed in some ways and goes too far in others. “On the one hand, there is a lack of a clear obligation for commercial manufacturers to immediately fix known security gaps. Commercial manufacturers must be held liable for self-inflicted security loopholes in order to make IT security financially worthwhile! On the other hand, the voluntary development of free software is threatened because the same requirements are to be placed on commercial producers and on volunteers,” he explained.
What are fingerprint access control devices?

Fingerprint access control is a relatively new entrant on the physical access control scene. Computerized access control has traditionally been based either on something that one has, such as a magnetic or chip card, or on something that one knows, such as a password or a PIN. Both of these ways of authenticating a person are error-prone, since a card can be misplaced and passwords can be forgotten or shared. To counter the inherent disadvantages of password- or chip-card-based access control, biometrics slowly gained recognition as a set of uniquely identifying characteristics of an individual. To use a fingerprint-based access control system, an organization needs a database where all enrolled employees’ fingerprints are digitally stored. Access to desired physical areas or rooms can then be controlled with fingerprint scanners installed at their entrances. Any person who wishes to enter these areas needs to swipe his or her finger on the scanner, which scans the prints and sends them to the server, where pattern-matching algorithms check for a match against the stored fingerprints. If a positive match is found, the server activates the door-opening mechanism with no human intervention required. The diagram below illustrates the fingerprint-based biometric authentication mechanism.

Fingerprint Biometrics: Choice of modality

Biometrics offers an alternative way of authentication based on something that really characterizes a person uniquely, such as an iris/retinal scan or fingerprints. With advancements in scanning devices and pattern-matching algorithms, automation of biometric authentication became a possibility. Of the various forms of biometric authentication, fingerprint-based authentication is one of the most widely accepted due to its affordability and decent levels of accuracy.
Organizations using chip- or smart-card-based physical access control mechanisms were increasingly bearing the brunt of the practice known as buddy punching, in which employees would swipe for their colleagues or pass them their own smart cards to let them gain physical access to an access-controlled area. Fingerprint-based authentication systems brought in an approach that was difficult to abuse and was within companies’ budgets. Thus, fingerprint-based authentication solutions have slowly gained acceptance and have become the de facto mode of biometric authentication.

Employee time-keeping using fingerprint access control devices

With the increasing acceptance and use of fingerprint-based access control, employers started getting a wealth of information about their employees: the exact time when they first “logged in” or came to the office, the time spent on tea and lunch, and finally the time when they “logged out” or left the office. Organizations realized that maintaining logbooks or registers of attendance was an unnecessary burden they could avoid, as all that data was already in their database in the form of captured times. All they needed to do was retrieve it in a form that made sense. With this realization, organizations started developing time-keeping applications and solutions that were integrated with the fingerprint-based access control solutions. The times at which employees’ fingerprint data was captured at the entry and exit scanners would be fed directly into the timekeeping and attendance systems. Not only did fingerprint-based biometric authentication systems make employees’ lives easier, because they no longer needed to make a register entry for their daily attendance, they also made the working of the organization much more efficient.
Have a look at the diagram below. The diagram shows a fingerprint-based access control system which authenticates users enrolled in the system by searching for a match in the centralized database containing the fingerprint images of all the users. To use this information for attendance and time keeping, the centralized database for access control needs to be synced with the attendance and timekeeping server. The attendance and timekeeping server can then be integrated with the finance and payroll department and the HR department.

Who can use fingerprint access control

The finance and payroll department needs the employee access data for calculating which days of the month the employee attended the office, and for how long. Most professional organizations provide flexi-timing, i.e., an employee can walk in and go out whenever he wants, but with a caveat: to qualify for a full day’s work credit they need to work for a minimum of, say, X hours (let’s say X = 9 hours). Similarly, they need to work for a minimum of, say, Y hours (let’s say Y = 4 hours) to be considered for a half-day’s pay. The centralized database for access control already contains the day-wise record of employee in and out times. Using this data, all that remains is to apply a simple formula to calculate the time the employee spent in the office:

Time spent in office = log-out time − log-in time

An employee will then fall into one of three brackets every day:

- Time spent in office >= X, i.e. equal to or more than 9 hrs, implies he gets a full day’s salary
- Time spent in office < Y, i.e. less than 4 hrs, implies he doesn’t get any salary for that day
- Y <= Time spent in office < X, i.e. between 4 and 9 hours, implies he gets a half-day’s salary

Let’s now take the case of three employees, Tom, Dick and Harry, for April 1st, 2016. Tom logs in at 9 am and logs out at 8 pm.
He spends 11 hours in the office, which is greater than 9 hours, so Tom gets a full day’s salary. Dick logs in at 11 am and logs out at 4 pm. He spends 5 hours in the office, which is less than the 9 hours required for a full day’s pay but greater than the 4 hours required for a half-day’s pay. Dick thus gets only a half day’s salary for April 1st, 2016. Harry logs in at 11 am but has to rush home due to an emergency, so he logs out at 2 pm. He spends 3 hours in the office, which is less than the 4 hours required for a half day’s pay. Harry is paid no salary for the day! Such a calculation, per month, per employee, per day, is simple and straightforward for a software report to produce in a matter of minutes. Employee time-keeping then becomes a job which can be completed in minutes using the already established infrastructure of fingerprint access control and the database in which employee access times are captured. Similar to the finance department, the HR department, when integrated with the access control database, can also use the employee log-in and log-out times for calculating leave information. An employee’s CL, or casual leave, can then be automatically detected by the system and logged with the HR department via such an integrated system.

Employee productivity improvement by using fingerprint authentication

Fingerprint-based access control and time-clock systems have the direct benefits of reducing time-keeping overheads along with a low rate of faults or mistakes in the computations. Organizations that have implemented fingerprint-based attendance systems have noticed a derived benefit: employee productivity improved after they started access control and timekeeping based on fingerprint-controlled access.
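The full-day/half-day bracket rule worked through in the Tom, Dick and Harry example reduces to a few lines of code. This Python sketch is purely illustrative; the function name and timestamp format are my own, not part of any vendor's timekeeping API:

```python
from datetime import datetime

FULL_DAY_HOURS = 9   # X: minimum hours for a full day's pay
HALF_DAY_HOURS = 4   # Y: minimum hours for a half day's pay

def pay_fraction(login: str, logout: str) -> float:
    """Return 1.0, 0.5 or 0.0 of a day's salary from scanner timestamps."""
    fmt = "%H:%M"
    worked = datetime.strptime(logout, fmt) - datetime.strptime(login, fmt)
    hours = worked.seconds / 3600
    if hours >= FULL_DAY_HOURS:
        return 1.0
    if hours >= HALF_DAY_HOURS:
        return 0.5
    return 0.0

# The three employees from the example:
print(pay_fraction("09:00", "20:00"))  # Tom: 11 h -> 1.0 (full day)
print(pay_fraction("11:00", "16:00"))  # Dick: 5 h -> 0.5 (half day)
print(pay_fraction("11:00", "14:00"))  # Harry: 3 h -> 0.0 (no pay)
```

A monthly payroll report is then just this function applied to each employee's captured in/out times and summed.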
Let us now have a look at the benefits which have been observed and noted in a variety of studies around such implementations.

Prevents employee time theft and buddy punching

When employees earn salary for time they haven’t actually spent working, it is known as time theft. In traditional timekeeping systems, employees accomplish this by various means, such as signing the register and going out of the office, somebody else signing or punching in for them (known as buddy punching), or even loitering around inside the office in common areas such as canteens. Biometric-based automated attendance systems eliminate all of these time-stealing tricks.

Payroll department becomes more efficient as payroll calculation is automated

In traditional time-keeping systems, attendance information is maintained in logbooks or registers. At the end of the month, someone from the finance department needs to sit and spend hours calculating the actual number of days worked by each employee. With an automated fingerprint-biometrics-based attendance system, it is just a matter of printing out the right report or simply syncing the attendance data to the payroll-producing database, thus calculating the payroll directly from the attendance data itself. The payroll department’s productivity is thus increased by a few notches.

HR department is less burdened with Employee Self Service (ESS)

Most companies implementing fingerprint-based attendance systems realize that the daily log-in and log-out data can be made available to the employees themselves to view and check for any incorrectness or exceptions. Through such systems, known as Employee Self Service (ESS) systems, employees can be made aware of any exception conditions arising in their attendance records by means of automatic alerts emailed out to them.
On receiving such alerts, the employees can themselves take care of their attendance, with correction privileges provided to their immediate supervisors. Thus the productivity of the HR department is improved, and they can focus more on employee development rather than wasting time and resources on manual attendance-keeping in a traditional time-keeping system.

More engaged employees due to ESS lead to less absenteeism, lower employee turnover

Using an Employee Self Service system, or ESS, an employee takes care of his own attendance and of potential payroll problems arising from incorrect attendance records. Such an employee will be significantly less disgruntled with management over such issues, as he himself will be managing and correcting them. This kind of self-service culture has proven beneficial in improving employee engagement. A more engaged employee will naturally be absent less and be less likely to leave the company, leading to enhanced productivity.

Inter-department integration and online payroll data transfer lead to improved efficiency

As depicted in Figure 2 earlier, the finance & payroll department and the HR department can be integrated online with the time and attendance management system. All of the employee time-keeping data needs of these two departments can then be met instantaneously through data integration between their respective systems. Such integration at the database level to get the in-time and out-time records of employees is much more efficient than the manual, low-productivity data capture involved in a traditional timekeeping system.

Easier to identify employees for attendance-related rewards or penalties

In traditional time-keeping and attendance systems, if one needs to identify the employees with exceptionally high or low attendance, a manual sifting of the logged records in logbooks or registers is required.
This tedious and low-productivity activity can be turned into an instantaneous report by storing pre-built queries and reports in an automated fingerprinting based access and time-keeping system. The companies implementing fingerprint-based access control systems with linked time-keeping systems observed sizeable improvements in employee productivity. Employee productivity benefits coupled with the associated cost benefits has led to this kind of an integrated solution gaining acceptability among an increasing number of companies. With the increasing accuracy of fingerprint scanning devices coupled with their falling prices, this trend is only going to strengthen in the coming days.
In line with expectations, Google has published more details about the inner workings of Titan, its proprietary chip developed to protect server and networking equipment against tampering. The tiny piece of silicon secures Google Cloud Platform (GCP) hardware by verifying the integrity of essential software - like firmware and BIOS - at boot time, using cryptographic signatures. Titan is installed on every motherboard deployed in Google’s data centers, and establishes something known as a hardware root of trust, or a trust anchor - a cryptographic element that cannot be compromised. This approach is not new - it sounds similar to the Trusted Platform Module (TPM) technology, standardised in 2009 and widely used by system vendors including Dell EMC, Cisco and Lenovo.

All about trust

Titan was originally unveiled in March at the Google Cloud Next ‘17 conference in San Francisco. This purpose-built chip is used to securely identify and authenticate legitimate access at the hardware level, minimizing the chances of running altered software. The chip consists of a secure application processor, a cryptographic co-processor, a hardware random number generator, embedded static RAM, embedded flash storage and a read-only memory block. “In our data centers, we protect the boot process with secure boot. Our machines boot a known firmware/software stack, cryptographically verify this stack and then gain (or fail to gain) access to resources on our network based on the status of that verification,” a team from Google explained in a blog post. “Titan integrates with this process and offers additional layers of protection.” One of these layers is the ability of the chip to verify first-instruction integrity - the earliest code that runs on each machine’s startup cycle, something current TPMs cannot do. Titan also runs a built-in memory self-test every time the chip boots to ensure that all memory (including ROM) has not been tampered with.
Once Titan has booted its own firmware in a secure fashion, it will turn its attention to the host’s boot firmware flash, and verify its contents using public key cryptography. According to Google, Titan is capable of not simply preventing, but even solving security issues: in the event that bugs in its own firmware are found, they can be immediately patched to re-establish trust. “In addition to enabling secure boot, we’ve developed an end-to-end cryptographic identity system based on Titan that can act as the root of trust for varied cryptographic operations in our data centers,” the team explained. “The Titan-based identity system enables back-end systems to securely provision secrets and keys to individual Titan-enabled machines, or jobs running on those machines. Titan is also able to chain and sign critical audit logs, making those logs tamper-evident.”
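Google has not published Titan's implementation, so the following is only a conceptual sketch of how a root of trust verifies each boot stage before handing over control. It uses SHA-256 digests in place of Titan's actual public-key signature checks, and all stage names and firmware blobs are made up:

```python
import hashlib

# Hypothetical firmware images for each boot stage. In Titan's case the
# trusted reference values are anchored in tamper-resistant hardware.
firmware_stages = {
    "boot_firmware": b"first-instruction code...",
    "bios":          b"bios image...",
    "bootloader":    b"bootloader image...",
}
trusted_digests = {name: hashlib.sha256(blob).hexdigest()
                   for name, blob in firmware_stages.items()}

def secure_boot(stages, trusted):
    """Verify each stage against its trusted digest; halt on first mismatch."""
    for name, blob in stages.items():
        if hashlib.sha256(blob).hexdigest() != trusted[name]:
            return f"halt: {name} failed verification"
    return "boot ok"

print(secure_boot(firmware_stages, trusted_digests))
tampered = dict(firmware_stages, bios=b"altered bios image")
print(secure_boot(tampered, trusted_digests))
```

The point of the chain is that each stage is checked before it runs, so a tampered BIOS is caught before it ever executes; signature-based verification additionally allows the trusted references themselves to be updated and re-signed, which is how Titan can re-establish trust after patching its own firmware.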
A Look at Transportation Challenges and their Future

Over the past 15 months, many transportation problems and solutions were short-term and related to regulations brought on by the COVID pandemic. As the world continues to get vaccinated and the end of the pandemic is finally in view, the transportation industry can now set its sights on overcoming bigger-picture challenges such as emissions control, autonomous vehicles, other advances in artificial intelligence, and increased efficiency. As the transportation industry is currently the second-largest spender in the IoT, changes in the industry will affect any and all industries that rely on transportation (a group that is only growing, as the larger dependence on e-commerce and product delivery is expected to continue after being a necessity during the pandemic). Here is a look at 3 areas of transportation and how they are expected to change in the coming years. Many aspects of autonomous travel have been seen “here and there” over the last few years, including navigation systems that can direct drivers around traffic and automatic braking features that engage when a vehicle detects something in its path. The ultimate goal for autonomous vehicles is fully automated travel from point A to point B, and companies such as Tesla are making serious advancements on the private side. Ultimately, these cars would have to communicate with street signs and signals, known as V2I technology, or vehicle to infrastructure, which is currently (and ironically) the “stopping point.” Skeptics of automated car travel have some fair points, and in test runs there have been a few accidents. However, more than 3,000 people die each month in auto accidents that do not involve automation, a much higher rate than for those that do. Alcohol is still the biggest cause of driving accidents, and automation would, indeed, remove drunk driving from the equation.
Other perks include more parking (which can lead to boosts in local economies) and lower emissions. And there are many challenges in designing autonomous vehicles. Whether they like it or not (which they probably don’t), emissions laws are going to become increasingly strict for transportation companies, and their continued existence depends on adopting emission-lowering technology. For the ground transportation sector, the aforementioned autonomous vehicles are the main way to reduce emissions, as the majority are electric vehicles. Many companies have vowed to move to all-electric fleets within the next several years, as has the government for federally funded services like the USPS. For the aviation industry, however, the challenges in cutting emissions are more difficult to overcome, though cuts are needed, as aviation is responsible for 5% of global warming. In more relatable terms, a single flight across the Atlantic creates twice the CO2 emissions per person that a family car does in a year in the United States. Most countries drive less than the U.S. does, as well. Many proponents on the green side of the proverbial fence offer “Just fly less” as a fix, but that’s just not practical for some businesses and individuals, and in some places, like Mexico, passenger travel via train is all but obsolete (there is one passenger train in the country). Thus, the responsibility is twofold: the government needs to enforce rules more strictly, and the aviation industry needs to make concerted efforts to lower emissions, as even amidst the green revolution of the past decade, planes and jets have become increasingly bad for the planet, and humans have become increasingly dependent on flight.

Smart City Logistics & Transportation

Though autonomous ground travel is heavily dependent on AI, many other uses are helping shape the future of transportation.
Traffic patterns, for instance, have dictated the creation of machine learning programs that ultimately make things like streetlights and highway ramp signals more precise in their aim of keeping traffic flowing smoothly. Supply chain logistics are also heavily dependent on AI, in a similar sense to the traffic patterns. Allowing delivery teams to travel at low-traffic times means less work, less gasoline (money) and less emissions! Empty trucks are also a waste of resources, and AI can help match retailers with transport teams to send them back to their origins with full trucks, resulting in another win-win situation. On-vehicle sensors, like the ones advertised by many insurance companies to track your driving and lower your premiums, also use machine learning to improve recommended speeds, routes, and travel times to help limit costs. Ultimately, efficiency in some form is the target of all of these advancements, and whether it’s efficiency relative to the global issue of emissions, or efficiency solely for monetary gain, the motivations to continue advancing are aplenty.

About the Author

This article was written by Ryan Ayers. He has consulted for a number of Fortune 500 companies within multiple industries, including information technology and big data. After earning his MBA in 2010, Ayers also began working with start-up companies and aspiring entrepreneurs, with a keen focus on data collection and analysis.
British Prime Minister David Cameron recently pledged £300m in government support for a big-data human-genome mapping project in England. The four-year project will see scientists decipher 100,000 human genomes. Cameron disclosed a partnership between Genomics England, an initiative owned by the Department of Health, and Illumina, a San Diego-based biotechnology company whose services will cost £78 million. Illumina will invest £162 million over four years, providing “infrastructure and expertise”. Cameron said: “This agreement will see the UK lead the world in genetic research within years. I am determined to do all I can to support the health and scientific sector to unlock the power of DNA, turning an important scientific breakthrough into something that will help deliver better tests, better drugs and above all better care for patients”. The prime minister says that the project will map 100,000 human genomes by 2017. The idea is to advance treatment of cancer and other rare diseases by sequencing the genomes of people already afflicted, in the hope that new and better tests, drugs and treatments will follow. Patients with rare diseases and their families, as well as patients with common cancers, are the current focus while the project remains in its pilot phase; the main project begins in 2015. (Image credit: Mehmet Pinarci)
Posted on: January 9, 2019

We are just a few weeks into the winter season and have already experienced several severe weather events across the world. Severe weather can have a significant impact on the business environment. While there is new evidence that human-caused climate change is a driver of the increase in severe weather events, it is undisputed that their impacts are costlier and occurring more frequently. Although the United States is amid a government shutdown, which limits access to updated and current National Oceanic and Atmospheric Administration [NOAA] statistical data, take a gander at severe weather stats through September 2018. As the statistical data bears out, severe weather events carrying nature’s extraordinary power are on the rise and continue to demonstrate their catastrophic repercussions.

- Through September 2018, there were 11 weather and climate events (mega disasters) with over $1 billion in damages each. This does not include Hurricane Michael or the California wildfires, which are each estimated to easily surpass the $1 billion level.
- According to NOAA’s 2017 report, the seven years with the most billion-dollar disasters have all come in the last decade, with 2017 damages totaling $312.7 billion.
- Since 1980, there have been 238 weather and climate disaster events impacting the U.S. that have each exceeded $1 billion, with total costs exceeding $1.5 trillion.

While inclement weather business continuity planning has long been established at many organizations (classic initial BCPs for most resiliency programs), does your organization have a BCP for severe weather events? The traditional inclement weather BCP is typically built on the assumption that any inclement weather event is short term, perhaps a day or two in duration, with limited impact on the business.
Given this approach and its planning around a condensed event time-period, most BCPs focus on the basics: communications with employees and establishing a cadence for update monitoring. These plans are rudimentary and tend to focus on employee logistics, like work-from-home options, defining key personnel, scheduling update meetings, and making sure everyone is accounted for. They typically do not address potential damages and risks, and often do not take into consideration essential business operations, key process recovery procedures and steps, third-party dependencies, supply chain, and more. Hence, an organization can and will face significant problems if a severe weather event occurs whose impact and duration were not planned for. Let’s not discard the tried and true inclement weather BCP at your organization (if it exists). Use this plan as the foundation and build upon it in developing the severe weather BCP, but with different assumptions. These new assumptions (do not underestimate the impact of Mother Nature!) should consider that the event will be longer term, will deny access to your facilities and sites, and will adversely affect your workforce. Given these likely scenarios, a comprehensive business impact analysis or a BIA refresh should be conducted to prioritize essential business processes and functions so that the organization can focus on developing actionable BCPs for its highest at-risk business operations. The BIA will assist in identifying and mapping items like required technologies, dependencies, infrastructure components, personnel, vendors, third parties, resources, and vital records. These will be the building blocks for your severe weather BCP. Once the BIA is completed and validated, careful consideration should be given to developing actionable recovery procedures and steps. Addressing recovery and re-establishment of operations within the required recovery time objective should be the focus of the BCP.
Defining actions to be taken and assigning ownership to those tasks are paramount in developing a high-quality severe weather BCP. There is no doubt that in the future severe weather events will happen more often and with greater intensity. It is only a matter of where and when, so be prepared and start developing your plan accordingly.
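The BIA-driven prioritization described above can be sketched as a simple sort: most critical processes first, then tightest recovery time objective (RTO). The process list and scores below are invented for illustration; a real BIA would capture far more attributes (dependencies, vendors, vital records):

```python
# Hypothetical BIA output: (process, RTO in hours, criticality 1 = highest).
bia = [
    ("payroll",          48, 2),
    ("order processing",  4, 1),
    ("email",            24, 3),
    ("customer support",  8, 1),
]

# Recovery priority: most critical first, then the tightest RTO.
plan = sorted(bia, key=lambda p: (p[2], p[1]))

for process, rto, crit in plan:
    print(f"{process}: restore within {rto} h (criticality {crit})")
```

The ordered list becomes the skeleton of the severe weather BCP: recovery procedures are written, and owners assigned, in exactly this sequence.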