Scientists at the U.S. Department of Energy’s Ames Laboratory are now able to capture, in less than one trillionth of a second, the moment a particle of light hits a solar cell material and becomes energy, describing the physics of charge carrier and atom movement for the first time.

The generation and dissociation of bound electron and hole pairs, namely excitons, are key processes in solar cell and photovoltaic technologies, yet it is challenging to follow their initial dynamics and electronic coherence.

Using time-resolved, low-frequency spectroscopy in the terahertz spectral region, researchers explored the photo-excitations of a new class of photovoltaic materials known as organometal halide perovskites. These organometallics are wonder materials for light-harvesting and electronic transport devices: they combine the best of both worlds, pairing the high energy conversion performance of traditional inorganic photovoltaic devices with the economical material costs and fabrication methods of organic versions.

Ames Laboratory researchers wanted to know not only that the generation and dissociation of bound electron and hole pairs, namely excitons, happened in the material; they also wanted to find out the quantum pathways and time interval of that event.

Conventional multimeters for measuring electrical states in materials do not work for measuring excitons, which are electrically neutral quasiparticles that carry no current. Ultrafast terahertz spectroscopy techniques instead provided a contactless probe that was able to follow their internal structures and quantify the photon-to-exciton event with time resolution better than one trillionth of a second.

Moreover, researchers from multiple areas of expertise across the Ames Laboratory contributed to the work, underscoring the significance of the discovery.
Dr. Jens Struckmeier is a founder and CTO of Cloud&Heat Technologies GmbH.
In the first part of this two-part series, I outlined why computing power has steadily increased over the years and the challenges this inherently brings today and in the future. In Part 2, I address the question of the appropriate cooling system, how additional savings are possible through intelligent waste heat utilization, and why there are still reservations about using water cooling to reduce energy requirements.
The status quo in data center air conditioning is cooling with mechanically chilled air.
The entire room is cooled, but more than half of the cold air does not reach the heat hotspots, like the CPU. In doing so, huge sums of money are literally blown into thin air. One of the alternatives to air cooling is to use methods based on water or other liquids. But as soon as the data center industry is confronted with "water," it takes fright immediately. Water and IT equipment – they do not fit together. Nevertheless, there are a few operators already who rely on the alternative cooling medium.
The data centers of Green Mountain in Scandinavia use water from a nearby fjord for cooling. Cologix and Equinix in Toronto use water from Lake Ontario to air-condition their servers. In both cases, however, the water is only indirectly involved in the cooling process: by means of a heat exchanger, the air in the data center is cooled down by the liquid cooling medium and is then used for free cooling of the hardware – air cooling 2.0.
From a thermodynamic and efficiency point of view, however, it is advisable to bring the heat sinks as close as possible to the IT equipment to be cooled and not to air-condition the entire room. Direct cooling of the heat hotspots would be optimal, for example by means of hot water cooling. Hot water with flow temperatures of up to 40 degrees Celsius offers energetic and economic savings potential because of the physical advantages of water compared to air: water can absorb 3,330 times as much heat as air and has a 20 times higher thermal conductivity. The closer the cooling medium gets to the heat source, the more efficiently this potential can be used. By means of an intelligent design, concentrated cooling of sensitive components such as the CPU is achieved.
Because of the higher cooling performance, power densities of 45 kW per rack are possible with a simultaneous reduction in energy consumption compared to a conventional cooling system. A water-cooled system can also provide additional energy savings if the dissipated heat provides additional benefits. Due to the relatively high temperatures in the server racks, water output temperatures of up to 60 degrees Celsius are possible. Water at this temperature level can be used, for example, for hot water or heating systems of buildings.
Save More Through Intelligent Waste Heat Utilization
The search for new synergy effects that increase energy efficiency by means of waste heat utilization is currently becoming more and more important. According to a recent survey by the Borderstep Institute, 50 percent of respondents see medium to very high savings potential in the reuse of server heat from their data centers. IBM, for example, heats a nearby swimming pool in Switzerland, the data center at Notre Dame University in Indiana heats a greenhouse, and Apple and AWS provide warmth to residential homes near their data centers in Scandinavia.
The reuse of waste heat, however, usually runs into two essential problems. On the one hand, only very low output temperatures of less than 60 degrees Celsius are usually reached. However, in order to reuse the cooling water, for example for heating buildings or for hot water preparation, at least 60 degrees Celsius is necessary. Hence, heat pumps play an important role in compensating for the difference. On the other hand, the transport of heat represents a non-negligible challenge. As the previous examples show, data center waste heat is only used in applications close to the location. Overcoming longer distances would lead to high heat losses and thus to low temperatures for the heat consumers.
The industry needs to overcome its hydrophobia and become more receptive to water cooling in order to achieve significant long-term savings. In addition to green cooling techniques, the waste heat from data centers can also be used to drive adsorption chillers, that is, to provide cooling with heat. Combining both systems, homes can be heated in winter and air-conditioned in summer. This allows year-round utilization of the data center waste heat and is a further important step towards fulfilling the targets for the reduction of energy consumption.
Opinions expressed in the article above do not necessarily reflect the opinions of Data Center Knowledge and Informa.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.
The unprecedented explosion in the amount of information we are generating and collecting, thanks to the arrival of the internet and the always-online society, powers all the incredible advances we see today in the field of artificial intelligence (AI) and Big Data.
With this in mind, a great deal of thought and research has gone into working out the best way to store and organise information during the digital age. The relational database model was developed in the 1970s and organises data into tables consisting of rows and columns – meaning the relationship between different data points can be determined at a glance.
This worked very well in the early days of business computing, where information volumes grew slowly. For more complicated operations, however – such as establishing a relationship between data points stored in many different tables – the necessary operations quickly become complex, slow and cumbersome.
Machine learning – the self-teaching algorithms designed to become more accurate at generating predictions from data as they are fed increasingly large volumes of information – often need to draw data from vast and disparate datasets. It quickly became apparent that a new approach was necessary.
The Knowledge Graph
There have been many attempts to improve on the functionality of the relational database since the model was first developed. One that is quickly growing in popularity, due to its flexibility and potential for dealing with complex, interrelated data, is the knowledge graph (sometimes known as a graph database).
The meaning of the term is not precisely set-in-stone – for example, Google has a specific feature which it calls the Knowledge Graph, which powers the section of its search results page that displays factual information, drawn from recognised sources of authority.
While this is built with several of the same ideas that feed into the broader concept of the knowledge graph, it’s not the be-all and end-all of the technology.
In basic terms, a knowledge graph is a database which stores information in a graphical format – and, importantly, can be used to generate a graphical representation of the relationships between any of its data points.
This means that the apparent advantage over the older, relational style database is that the relationships between any data points can be calculated far more quickly and with less compute power overheads, regardless of whether the data points fit neatly together into a table.
Oracle – which actually released the first commercially available relational database management system in 1978 – is now leading the field in making knowledge graph systems available to the wider business community. Currently, you’re more likely to find them being used by tech giants and research institutions, but this is set to change in the very near future.
Hassan Chafi, senior director of research and advanced development at Oracle Labs, describes the difference between relational and graph databases to me in this way: “With a relational database … it just deals with tables … it allows you to find a row in a table, or take two tables and combine them … and with every join, you’re traversing one hop in these graphs, but you’re reasoning about it in these tabular ways.
“So now what we’re saying is, what if you were to rearrange that same information as a graph? Now it’s visual, and instead of having these tables representing connexions, you have vertices which represent people, or accounts, and you have ‘edges’ which represent relationships. Now I can more quickly say, ‘ok, are Bob and Charlie related?’ And I can see easily that they are.”
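To make that contrast concrete, here is a minimal, hypothetical sketch in Python (the names and the adjacency-list structure are illustrative assumptions, not Oracle's implementation) of answering an "are Bob and Charlie related?" question by walking edges rather than joining tables:

```python
from collections import deque

# A toy property graph: vertices are people, edges are "knows" relationships.
# In a relational database these facts would sit in a join table that must be
# re-joined once per hop; here each hop is a dictionary lookup.
graph = {
    "Bob": ["Alice", "Dana"],
    "Alice": ["Bob", "Charlie"],
    "Charlie": ["Alice"],
    "Dana": ["Bob"],
}

def related(start, target, graph):
    """Breadth-first search: return the chain of hops linking start to target, or None."""
    queue = deque([[start]])
    visited = {start}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == target:
            return path
        for neighbour in graph.get(node, []):
            if neighbour not in visited:
                visited.add(neighbour)
                queue.append(path + [neighbour])
    return None

print(related("Bob", "Charlie", graph))  # ['Bob', 'Alice', 'Charlie']
```

Each hop costs a constant-time lookup instead of another join, which is why multi-hop relationship queries are where graph structures shine.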
Who uses knowledge graphs?
At the moment, knowledge graphs are widely used by the tech giants that have made gathering and analysing huge volumes of messy, complex data their core business. They power Google’s search engine, as the original PageRank algorithm is based on a form of knowledge graph, as well as later additions to its search technology such as the Knowledge Graph.
Facebook also relies on this form of information organisation, to keep track of networks of people and the connexions between them, as well as every other data point they use to build a picture of their users, such as favourite artists and movies, events attended and geographical locations. One of its most significant breakthroughs is considered to be the realisation that the relationships between data points are as valuable as the data points themselves when it comes to building social networks.
Netflix uses knowledge graph technology to organise information on its vast catalogue of content, drawing connexions between movies and TV shows and the actors, directors or producers who put them together. This helps them to predict what customers might like to watch next, and foster the “binge-watching” model of consumption it has built its business around.
Electronics and manufacturing giant Siemens uses knowledge graphs to build accessible models of all of the data it generates and stores, and use it for risk management, process monitoring and building “digital twins” – simulated versions of real-world systems which can be used for design, prototyping and training.
In supply chain logistics, knowledge graphs can be used to keep track of inventories of different components and parts, allowing manufacturers to understand the crossover between materials that are used in different products.
They are also being quickly adopted by the financial services industry, where they are useful for assessing whether or not transactions are fraudulent, as well as many other functions such as marketing and investment analytics.
Chafi tells me “In particular for crime and compliance … for money laundering, one can think of money moving around as a graph, and you need to think about whether those movements are risky or not … If you were to follow the trail that starts from one place, does it come back to the same place after an indefinite number of hops?”
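The check Chafi describes, following a trail of transfers to see whether it loops back to its starting point, is essentially cycle detection on a directed graph. A small, hypothetical sketch (the account names and transfer records are invented for illustration, not drawn from any real compliance system):

```python
# Directed graph of money transfers: each account maps to the accounts it has paid.
transfers = {
    "acct_A": ["acct_B"],
    "acct_B": ["acct_C", "acct_D"],
    "acct_C": ["acct_A"],  # funds eventually flow back to acct_A
    "acct_D": [],
}

def returns_to_origin(origin, transfers, max_hops=10):
    """Depth-first search up to max_hops: does any trail starting at origin lead back to it?"""
    stack = [(nxt, 1) for nxt in transfers.get(origin, [])]
    while stack:
        account, hops = stack.pop()
        if account == origin:
            return True
        if hops < max_hops:
            stack.extend((nxt, hops + 1) for nxt in transfers.get(account, []))
    return False

print(returns_to_origin("acct_A", transfers))  # True: A -> B -> C -> A
print(returns_to_origin("acct_D", transfers))  # False: the money never comes back
```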
With industries increasingly adopting machine learning, it seems likely that knowledge graph technology will also evolve hand-in-hand. As well as being a useful format for feeding training data to algorithms, machine learning can quickly build and structure graph databases, drawing connexions between data points that would otherwise go unnoticed.
Machine learning is great for answering questions, and knowledge graphs are a step towards enabling machines to more deeply understand data such as video, audio and text that don’t fit neatly into the rows and columns of a relational database.
This could potentially revolutionise fields where the technology undoubtedly has applications that have not yet been fully explored, including healthcare and law.
As with machine learning itself, what started as an academic exercise before being adopted by the most cutting-edge tech companies will no doubt “trickle down”, as tools and frameworks designed to make it accessible become more widely available.
Using Differential Privacy to Conceal Data
Methods to Conceal Data
Today’s advanced technology, which people use every day, leaves trails of data behind every one of us. That data can be gathered by unknown parties and analyzed. These data crunchers can determine your health problems, track your movements throughout the day, and even decide whether you are experiencing depression.
No one wants their personal data or health information exposed, much less to have someone find data that points to mental distress. Identity theft, bank fraud, and many other crimes are committed by bad apples who steal people’s personally identifiable data. When a company is responsible for handling large amounts of customer data, it must maintain customers’ trust to continue having a good relationship with them. Releasing personal data, or even losing data through a breach, could mean extensive losses for a business.
Redaction is one method of concealing data. Redaction is a form of editing in which confidential information is replaced with a black box to indicate its presence, but the data is masked. An alternative term for the practice is called sanitization. When data is sanitized, all personally identifiable information has been concealed or removed.
Anonymization is another method used to conceal data sets while keeping some information intact. The purpose of anonymization is privacy protection, and it is also a form of sanitization. Data that should not be released publicly, such as a name, social security number, or home address, is removed, leaving the remaining data for research and other purposes.
There is some controversy over the ability of anonymization to conceal identifiable information. Today’s technology is advancing by leaps and bounds. With artificial intelligence and the correct algorithms, data sets can be compared and the missing data can be figured out. When the data is queried together and a match to a positive identity is found, the result is de-anonymization. The solution that many are now considering is termed ‘differential privacy.’
As big data corporations continue to soak up data sets like a dry sponge soaks up water, privacy activists are re-thinking anonymization. With the realization that de-identification can be reversed, proponents of a new cybersecurity model known as differential privacy have come forward. With the advent of big data, machine learning, and advances in data science, it has become apparent that previous privacy methods need to be reconsidered.
Cybersecurity specialists now claim that differential privacy (DP) methods can protect personal data better than traditional methods. DP is a state-of-the-art concept based on recently developed mathematical algorithms. Belief in this new privacy model is pushing larger companies to turn to DP methods to protect privacy.
It is already being used by companies such as Apple, Uber, the US Federal Government (Census Bureau), and Google. The primary mission of differential privacy, or DP, is the requirement that a data subject not be harmed by having their personal data entered into the database. It also necessitates maximizing utility and data accuracy in the results.
Corporations that use DP participate in a system for sharing data publicly by describing the data set patterns while withholding the subjects’ personal data. The concept relies on the effect of making a small substitution in the data set, making it nearly impossible to infer details of those in the study. Since data subjects are never identified, it provides a better alternative to privacy. It can also be described as a restriction on the algorithms used to publish large data sets, limiting disclosure of any personal or private data within the collection.
Data meets the standard for differential privacy when the output cannot be used to identify any particular subject’s personal data. When dealing with data breaches and re-identification attacks, DP is likely to resist such an invasion or loss of sensitive data. Since DP grew out of the work of cryptographers, it is often closely linked to cryptography. Much of the language used in algorithm development comes from cryptography.
Implementing differential privacy can be a matter of adding random noise to the data. Suppose you want to publish how many people in the dataset satisfy a given condition. Adversarial companies may hold nearly the same data that you do and could compare the published results to re-identify individuals. Since this is exactly what you are trying to avoid, you add noise and never publish the exact answers.
If your data comes under attack, you should assume the attackers hold similar data sets, even if they have no exact identity or target in mind. Think of the published answers like darts thrown at a board: each throw lands a small distance from the center, and each ring from the outside in gets you closer to the answer. A dart can land a fractional distance from the center and can even hit it. The average of many throws points to the exact center, but no single published answer is precise enough to be matched to any existing subject.
In reality, we could compute the exact answer, but we add noise to prevent identifying any actual individual. The noise is drawn from a probability distribution, known as the Laplace distribution, whose parameter controls how far the published value may stray from the exact one while still giving researchers the results they need for analysis.
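As a concrete illustration, here is a minimal sketch of publishing a noisy count, written in Python with NumPy; the record fields and the epsilon value are assumptions made for the example rather than recommendations:

```python
import numpy as np

rng = np.random.default_rng()

def noisy_count(records, condition, epsilon=0.5):
    """Publish a differentially private count of the records satisfying `condition`.

    A counting query changes by at most 1 when one person is added or removed,
    so Laplace noise with scale 1/epsilon is enough to mask any individual.
    """
    true_count = sum(1 for r in records if condition(r))
    return true_count + rng.laplace(loc=0.0, scale=1.0 / epsilon)

# Hypothetical data set: one record per person.
people = [
    {"age": 34, "diabetic": True},
    {"age": 51, "diabetic": False},
    {"age": 29, "diabetic": True},
    {"age": 62, "diabetic": True},
]

print(noisy_count(people, lambda r: r["diabetic"]))  # e.g. 3.7, close to 3 but never exact
```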
Balancing Utility and Privacy
Data scientists like to assign a numerical value to everything they see. Every part of your day is a data point. The brand of shampoo you use, the coffee you drink, the distance you drive to work – literally everything you do is a data point. While some of us understand this, we often don’t consider the details obtained by this data. Corporations or governments can use it to make inferences about your health, behavior, and lifestyle. The point of using differential privacy is to use data for studies, such as health data for diabetes, without the price of subjects’ private information being exposed or exploited. It is about striking a balance between utility and privacy expectations.
When discussing the term ‘sensitivity’ as it applies to differential privacy, we are talking about a parameter. The sensitivity defines how much noise is required in the differential privacy functions to get good results while preventing re-identification of the data.
To determine the sensitivity, the maximum change or possible range for results needs to be calculated. It refers to the impact a single change in the data set could have on query results. For example:
- Let xA and xB be any two data sets from all the possible data in database X that differ by a single element.
- In this case, the equation would look something like this:
Sensitivity = max (xA, xB ∈ X) |q(xA) – q(xB)|
The queried results are fractionally close to the actual answer. Understanding the maximum and minimum values helps researchers learn more about the effects of their query.
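For a feel of what that maximum measures, the following hypothetical sketch compares a query on a toy data set against the same query with one record removed. It probes only the neighbors of a single data set, so it gives a lower bound on the true worst-case sensitivity, but it shows why a counting query has sensitivity 1 while an unbounded sum does not:

```python
def empirical_sensitivity(query, dataset):
    """Compare `query` on the full data set against every copy with one record removed.

    This probes only the neighbors of one particular data set, so it is a lower
    bound on the worst-case sensitivity, but it illustrates what |q(xA) - q(xB)| measures.
    """
    full = query(dataset)
    return max(abs(full - query(dataset[:i] + dataset[i + 1:])) for i in range(len(dataset)))

ages = [{"age": 34}, {"age": 51}, {"age": 29}, {"age": 62}]
print(empirical_sensitivity(len, ages))                                 # 1: a counting query
print(empirical_sensitivity(lambda x: sum(r["age"] for r in x), ages))  # 62: an unbounded sum
```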
The Laplace mechanism is a mathematical tool for implementing differential privacy on some query or function (f) to be executed on a given database. It is accomplished by adding noise to the output of (f), leaving the outcome or results defined within a given parameter.
Mathematically speaking, a function computing the average or the standard deviation would look much like this:
- Let f(x1, x2, …, xn) be the function used on data within a database or data set.
- ‘f’ can be considered the function that computes and returns the average or the standard deviation for a set of values.
- Let ∆f = max (x, x') |f(x)−f(x')|
- ∆f is the ‘sensitivity’ of the function: the maximum difference in the value of f between two such databases.
- The function is used on database x and x’; the databases are nearly exact but differ in precisely one piece of data.
The output of the mechanism on some database x is f(x) + b, where b is the noise value drawn from the Laplace distribution (typically with scale ∆f/ε, where ε is the privacy parameter).
The Laplace Mechanism provides the overall aim of adding noise values to satisfy differential privacy. The algorithm computes f accurately and close to the best data result we can extract from the query.
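Putting the pieces together, a minimal sketch of the Laplace mechanism might look like the following; the query, the clipping bound, and the epsilon value are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng()

def laplace_mechanism(f, dataset, sensitivity, epsilon):
    """Return f(dataset) plus Laplace noise with scale sensitivity/epsilon.

    A smaller epsilon (or a larger sensitivity) means more noise and stronger privacy.
    """
    return f(dataset) + rng.laplace(loc=0.0, scale=sensitivity / epsilon)

ages = [34, 51, 29, 62, 47]
# For an average of values clipped to [0, 100], replacing one person's record
# can move the mean by at most 100 / n, so that ratio serves as the sensitivity.
mean_sensitivity = 100 / len(ages)
print(laplace_mechanism(lambda x: sum(x) / len(x), ages, mean_sensitivity, epsilon=1.0))
```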
Keeping Data Under Lock and Key
There are a variety of processes used to help protect sensitive data. Improving everyone’s lives, through new health discoveries and more, starts with research on data sets covering large groups of people. These methods allow the data to be utilized while still doing all that is necessary to protect private data. As technology advances, we will have to develop better strategies and more advanced algorithms to keep data safe.
About threats, attacks, and vulnerabilities
Welcome to Domain 1, which is all about Threats, Attacks, and Vulnerabilities!
This, I think, is one of the most interesting domains of the CompTIA Security+ because it focuses on various types of attacks that we might perform in our careers if we’re on the red team side, or that we will need to defend against if we’re on the blue team side.
So regardless of what your career aspirations are, understanding the threats, attacks, and vulnerabilities that we as humans will face, or that our IT systems will face, is highly relevant and important.
This domain is broken down into 8 subdomains, so it’s also a fairly lengthy section, which makes sense considering that this domain represents 24% of the exam. It’s the second most important domain after domain 3 which represents 25% of the exam. So between those two domains alone, you’re almost at 50% of what you can expect to be tested on in the exam.
So definitely pay attention in this section!
As you complete the subdomains, you will unlock the digital badges associated with those subdomains, and once you complete the entire domain, you’ll have unlocked all of them.
This is a way to inject a little bit of fun, but also to help you keep track of where you’re at in the course and what all you’ve accomplished. It’s a lengthy course and a hefty exam, so hopefully this provides a little bit of additional structure for your studying.
Let’s take a quick look at each subdomain before we get started.
Subdomain 1 is all about comparing and contrasting social engineering techniques. This is where we’ll learn about the different types of phishing attacks, principles used in social engineering attacks, and more.
Subdomain 2 dives into potential indicators so that you can analyze and determine what type of attack you’re dealing with. Is it malware? If so, what type of malware? What can that type of malware do to systems, and how can you defend against it? Or is it a password attack?
These are the types of questions that we’ll take a look at in subdomain 2.
The 3rd subdomain is all about application attacks. This is one of my personal favorites since I have a background in development and since Cybr was started with web application attack courses. We’ll talk about different types of application attacks, and how to identify those attacks either as they are happening, or after the fact.
Along the same lines, subdomain 4 focuses on networking attacks. This is where we’ll look at various wireless attacks, Man-in-the-middle attacks, DNS attacks, and more.
In subdomain 5, we talk about who might be behind those attacks. Are they malicious? If so, are they well funded, or are they unsophisticated? What vectors are they using?
We’ll also talk about tools, methods, and resources that we can use for intelligence in order to either go on the offense, or to defend ourselves and our applications.
In the 6th subdomain, we talk about the security concerns associated with different types of vulnerabilities. For example, what’s the difference between on-prem vs. cloud-based security? What kinds of vulnerabilities can we expect in those different environments? How can we prevent those vulnerabilities in the first place?
Those are questions we will talk about in that section.
In the second to last subdomain, we will summarize techniques used in security assessments. We’ll talk about threat hunting, vulnerability scanning, using SIEMs, and Security orchestration, automation, and response (aka SOAR).
Finally, in the last subdomain for domain 1, we will explain techniques used in penetration testing. We’ll talk about the differences between white-box, gray-box, and black-box testing. We’ll talk about privilege escalation, persistence, pivoting, and so on. We’ll also talk more about passive versus active reconnaissance and the differences between red team, blue team, white team, and purple team exercise types.
Don’t be too overwhelmed by the scope of the first domain. Again, this accounts for 24% of the exam, which is significant, so we are starting with an important domain that will cover a lot of information. Take it one subdomain at a time, and before you know it, you’ll be on domain #2!
That’s it for this introduction. Once you’re ready, go ahead and complete this lesson, and let’s get started!
A network proxy, actually.
Researchers at Aalto University in Finland have devised a way to cut 3G smartphone power use by up to a staggering 74 percent. How? Rather than forcing phones to maintain a constant Internet connection, which can quickly deplete battery power, the tech uses a “bursty” approach via a specially configured network proxy. Between each burst, the phone’s modem goes idle.
This is great news for extending battery life. Better yet, it can help bring the Internet to developing areas where ready access to ISPs, let alone reliable grid power, is hard to come by.
Aalto professor Jukka Manner says, “At the moment, only a small percent can access the Internet from a wired connection, but 90 percent of the African population lives in areas with mobile phone network coverage. Mobile phone usage is increasing rapidly, however the use of mobile Internet services is hindered by users not having access to the power grid to recharge their phones.”
Even here, I wouldn’t mind having to charge my phone less.
Image credit: Aalto University
Under the GDPR, all sensitive data that could be used to identify an individual is defined as “Personal Data.” A question frequently asked by those working to comply with the GDPR is how long a business is required to retain personal data, and at what point it can delete that data.
In this quick guide, we’ll take a look at what personal data means, how it is defined under the GDPR, and what the GDPR says about retaining and storing personal data.
In May of 2018, the General Data Protection Regulation (GDPR) came into force. It requires that businesses take steps to safeguard the personal information they acquire. It also outlines the steps that must be taken to avoid data breaches, as well as a list of eight "data subject rights": the right to be informed, the right of access, the right to rectification, the right to erasure, the right to restrict processing, the right to data portability, the right to object, and rights with respect to automated decision-making and profiling. These rights apply to all parties involved, including partners, clients, and staff. Failure to respect them may result in legal action.
Personal data is defined as any information that refers to a live individual who is identified or identifiable. Personal data is made up of several bits of information that, when put together, may be used to identify a specific individual.
The GDPR also applies to personal data that has been de-identified, encrypted, or pseudonymized but can still be used to identify a person. Personal data that has been anonymized to the extent that an individual can no longer be identified is not considered personal data. For data to be truly anonymous, the anonymization must be permanent.
The GDPR ensures that personal data is protected regardless of the tools used to process that data. It is technology neutral, encompassing automated and human processing as long as the data is structured according to pre-defined criteria. It also doesn't matter how the information is stored. In all cases, personal data is subject to the GDPR's data protection requirements.
Personal data can include names, addresses, email addresses, ID card numbers, location information, IP addresses, cookie IDs, pixels or other ad identifiers, and any medical or hospital data.
Data must be kept for the shortest period of time that is feasible. That timeframe should take into account the reasons why your firm or organization has to handle the data, as well as any legal duties to store the data for a specific amount of time. For example, national labor, tax, or anti-fraud legislation that requires you to preserve personal data about your employees for a set amount of time, a product warranty term, and so on, would take priority.
Your business or organization should set time restrictions for erasing or reviewing data. Personal data may be stored for a longer period for archiving purposes in the public interest or for scientific or historical research purposes, provided that adequate technological and organizational measures such as anonymization, encryption, and other safeguards are in place. Your business or organization must also guarantee that the information it has is accurate and current.
To put it simply: any organization in the EU that acquires, uses, or keeps the personal data of EU citizens must adhere to the GDPR's data retention obligations. As a result, it must destroy or anonymize data as soon as it is no longer needed for processing. For example, if you only require a staff member's personal information during their employment, you must destroy it when they leave the organization.
The longer data is stored, the more likely it is to become out of date, and the more difficult it is to assure data accuracy. The more data about individuals that is saved, the higher the risk of harm in the case of a data breach. You may not face punishment under the GDPR, but you risk causing harm to your company. And if a data breach occurs, your company might face serious consequences from both your clients and the GDPR enforcers.
Note: Also keep in mind that you should never save data simply in case it becomes helpful later. You must be aware of all applicable national and EU legislation and keep personal data in accordance with them. It's also worth noting that the GDPR requires you to delete personal data once it's no longer needed for processing.
A data retention policy is a collection of recommendations that spells out how long businesses should store certain types of personal information. Every data retention policy should specify the categories of personal data collected by the organization, the processing reasons for each category of data, the various retention periods, and how to dispose of the data once it is no longer needed.
Keep the following steps in mind while creating a successful data retention policy:
1G, 2G, 3G, 4G – now 5G is here. And it’s got a lot going for it. But as the fifth generation of mobile networks is poised for a pivotal role in the future of business, society and technology, a recent report by Securing Smart Cities has warned that we might need to think carefully about how to protect our citadels. But first, what exactly is a smart city? And what does 5G have to do with it?
Smart cities are connected
Smart cities are usually described along six dimensions: people, technology, infrastructure, management, economy and government. Put simply, a smart city uses internet of things (IoT) sensors and technology to connect components across a city to make everyday life easier for its citizens.
Sounds complicated, but it’s reasonably simple: sensors and connected devices allow cities to manage and monitor infrastructure, transport and financial systems, postal services and more. Traffic could run more efficiently, payment transactions made more secure and remote emergency surgery (think advanced drones replacing paramedics) could become commonplace.
According to a report from the International Data Corporation (IDC), global smart city tech investment is set to reach $135bn by 2021. So what’s behind this rapid rise?
In short: 5G. This new network is estimated to be up to 100 times faster than the present 4G systems, with up to 25 times lower latency (lag time) and as many as one million devices supported per square km; that’s a staggering one thousand times what’s currently possible! This increased bandwidth brings many new possibilities, like autonomous driving, and better connectivity. But, with these possibilities come significant threats.
The far-reaching risks of 5G
As with every new technology, it’s essential to be aware of how it can affect IT security infrastructures. 5G will serve as the foundation for many future technologies; however, the security concerns are inescapable. It’s evolved from 4G, from which it will inevitably inherit vulnerabilities and misconfigurations. If 5G is to play a crucial role in smart cities, governments and industry leaders should promote secure 5G projects that enhance services but also ensure stability and quality of life for its citizens.
So what are the specific risks and challenges to look out for? The 5G Security and Privacy for Smart Cities report, which I co-authored with David Jordan and Alan Seow, has an extensive explanation. Here are the key things you need to know.
From protocol weaknesses to DDoS attacks
As 5G and smart devices connect our cities, it will cover more areas than today’s telecommunications equipment, giving previously non-network devices connectivity and centralized management. This means better visibility, efficiency and performance, but also exposes the population to more risks as the entire system is connected. If one node is attacked, many more may be affected.
For example, 5G will increase the risk and potential damage of large-scale distributed denial-of-service (DDoS) attacks. This is when a hacker overloads a machine or network with traffic to render it useless. DDoS attacks are used to disable the online services of banks and e-commerce platforms, but the city’s critical infrastructure is a significant weak spot. In 2014, a DDoS attack on Boston Children’s Hospital meant staff couldn’t use medical devices, putting patients’ lives in danger and causing damages totaling an estimated US $600,000.
5G also presents some protocol weaknesses, for example in the authentication and key agreement (AKA) – the method of encrypting communication between devices and cellular networks, which has been previously utilized in 3G and 4G networks and is known to be vulnerable to international mobile subscriber identity (IMSI) catchers, interception of traffic and sensitive information.
With both of these threats on the horizon, regular security practices such as supply chain security, access control, patch management, threat hunting and configuration management should be carried out to secure against 5G threats.
But there’s more to do to keep our cities and societies safe.
How do we ensure 5G safety?
There are many solutions to protecting smart cities in the age of 5G, from full audits to anomaly detection. One I’d like to highlight is hybrid authentication.
In 2G to 4G, network authentication – a security process when a computer on a network tries to connect to a server – has previously been a straightforward process between service and network providers, and the user’s device. Network authentication liberates the user from having to authenticate for every service they need access to; a single network authentication is sufficient.
When it comes to an entire network of connected devices, authentication must be as safe and secure as possible. One security recommendation for this relies on network-based authentication.
5G network security will require flexibility for organizations to manage multiple unknown devices with various levels of security, moving away from previous authentication models. A new, unified hybrid framework is needed to coordinate different security methods for each security layer. If devices can’t be authenticated, are misbehaving or not adequately set, we need processes in place to isolate them.
Ultimately, 5G is a technology advancement that will help us combat many of the world’s problems. But we have to make sure we’re wholly certain about the threats that it could bring and help governing bodies and our civic leaders prepare so safer smart cities can benefit our lives.
Geeks have been fantasizing for many years about a fully automated lifestyle, where their gadgets magically power themselves and figure out how to best get online without your help. The first piece of the puzzle is already being worked on by a number of companies and researchers, but Apple may be working on letting users create their own little personal networks with the help of radio frequency (RF) modules that will communicate amongst each other in order to maintain constant connectivity.
A recently published patent application titled "Personal area network systems and devices and methods for use thereof" gives us clues as to how such a system would work. Small devices that don't have access to long-range communications protocols, like the iPod in your pocket or your wristwatch, could still be equipped with short-range protocols, like Bluetooth or WiFi. From there, they could be set up to communicate with devices that do have long-range protocols, like your mobile phone or some other wireless network.
What would be the benefit to this? For one, the smaller, less-capable gadget could take advantage of the protocols built into the more-capable device(s) near you, allowing you to do things like get online or make phone calls. "For example, a user may place or take a telephone call using the host device by wirelessly communicating with the long-range communications device via the short-range communications protocol. Thus, an advantage of the invention is that the host device can serve as the interface for performing functions on both the host device and long distance communication device," writes Apple.
Apple points out that the user may keep a number of RF modules on his or her person already, or in the home. "This way, a user need not worry about having to carry a long-range communications device wherever he or she may go, as a RF module may be kept in locations frequently visited by the user." When that user moves around from one locale to another, the communications devices can connect to new modules in order to determine what's the best way to make that call or send that text message.
The other obvious benefit to the system is that all of your gadgets are now talking to each other and can interact in ways that we have only just begun to explore in the consumer space. One example that Apple provides is being able to sit down in your car and browse your contact list—stored on your phone—via the car’s navigational controls without ever having to plug anything into anything else. You could also change songs on your iPod by fiddling with your wristwatch, or perform any number of other small tasks between devices depending on the functionality of each one—like, say, getting readings on your iPod about the abysmal condition of your running shoes.
The internet has revolutionized communication and places a wealth of knowledge at people’s fingertips, but it can also be a dangerous place. The internet is made safer with a DNS internet filter, which is used to block web-based threats by preventing users from accessing malicious websites.
DNS internet filters are used by internet service providers to keep their customers safe and by businesses to prevent employees and customers from accessing websites that harbor malware, ransomware, and phishing kits.
A DNS internet filter has other important benefits. It can be used to create family-friendly internet access by preventing adult content from being accessed, to improve productivity in workplaces by curbing cyberslacking, and to control bandwidth use, by limiting bandwidth heavy activities such as video streaming.
How Does a DNS Internet Filter Work?
In order to understand how a DNS internet filter works, you need to know what happens when you try to visit a website. If you click a hyperlink in an email or enter a domain name into your web browser’s address bar, several processes must first be completed before the website can be loaded. Those processes involve the Domain Name System (DNS).
When a domain name is purchased from a domain name registrar and is hosted, it is assigned a unique IP address. That unique set of numbers allows the domain to be found over the internet. When an attempt is made to visit that website by entering the domain name into a web browser, the IP address needs to be found. That information is found by sending a query to a DNS server which performs a DNS lookup to obtain the IP address.
First a request is sent to a recursive resolver, which is commonly hosted by the user’s internet service provider. The recursive resolver makes contact with a root nameserver that contains a database of IP addresses for top level domains. A request is sent to a top-level domain nameserver, which directs the recursive resolver to the server hosting the website and the IP address is obtained. With the IP address, the browser can find the website and download the content. The whole process is exceptionally fast. It takes about a tenth of a second from the initial request to the provision of the IP address.
A DNS internet filter is inserted into this process and performs various checks to determine if the website should be loaded. If those checks are passed, the browser is directed to the website. If a check is failed, the IP address is not provided and the attempt to visit the website will be blocked. The user will then be directed to a local block page that tells them why the website cannot be accessed.
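If you want to see the end result of that resolver chain yourself, a one-line lookup in Python (using only the standard library; the domain is just an example) returns the IP address the browser would use:

```python
import socket

# One call performs the whole lookup chain described above and returns the
# IP address that the browser will use to fetch the page.
print(socket.gethostbyname("example.com"))  # e.g. '93.184.216.34'
```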
DNS Filtering Control Mechanisms
A DNS internet filtering service provider scans the internet and assigns categories to each website based on the content of the site. Users of the DNS internet filter can configure the solution to block certain categories of web content. This is as simple as using a mouse to tick certain checkboxes.
A DNS internet filter also uses whitelists and blacklists. Whitelists are used to allow all content on a particular website to be accessed, regardless of the types of content on that site. If web content on the site violates other policies, whitelisting ensures it can still be accessed. Blacklists are the opposite. If a website is on a blacklist used by a DNS filter, it can never be accessed. Blacklists are maintained by several organizations and include sites that contain illegal content and webpages used for phishing or malware distribution.
Keyword-based filtering may also be used. This involves scanning webpages for certain keywords and assigning a score based on the density of the keyword or keywords. If a certain threshold is reached, the webpage will be blocked.
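As a rough illustration of how these controls combine, a DNS filter's decision logic might look something like the following simplified sketch; the category names, lists, and threshold are invented for the example and are not WebTitan's actual implementation:

```python
BLOCKED_CATEGORIES = {"adult", "malware", "phishing"}  # categories ticked in the policy
WHITELIST = {"trusted-partner.example"}                # always allowed, regardless of content
BLACKLIST = {"known-phishing-site.example"}            # never allowed
KEYWORD_THRESHOLD = 5                                  # keyword-score cut-off

def lookup_allowed(domain, category, keyword_score):
    """Decide whether to return the real IP address or redirect to the block page."""
    if domain in WHITELIST:
        return True       # whitelists override every other rule
    if domain in BLACKLIST:
        return False      # blacklisted sites can never be accessed
    if category in BLOCKED_CATEGORIES:
        return False      # category assigned by the filtering service provider
    return keyword_score < KEYWORD_THRESHOLD

print(lookup_allowed("news.example.com", category="news", keyword_score=0))  # True
print(lookup_allowed("known-phishing-site.example", "phishing", 9))          # False
```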
WebTitan Cloud – A Fast, Effective, and Easy to Use DNS Content Filter for ISPs, MSPs, and SMBs
WebTitan Cloud is a powerful, but easy to use DNS internet filter that can be used by ISPs, MSPs, and SMBs to block web-based threats such as phishing and malware and control the content that end users can access.
The WebTitan Cloud DNS filter is quick and easy to implement. Just point your DNS to WebTitan and you can be filtering the internet in a matter of minutes. You can set DNS filter policies for your entire organization or for user groups and individuals through an intuitive web-based interface. The solution supports time-based filtering and cloud keys can be used if filtering controls ever need to be bypassed.
WebTitan Cloud can be hosted with TitanHQ, in a private AWS cloud, or even within your own environment. MSPs can be provided with WebTitan Cloud in white label form ready to take their logos and color schemes. The solution can also be integrated into MSP’s remote managing and monitoring systems through the TitanHQ API.
For further information on the WebTitan Cloud DNS filter, to arrange a product demonstration, or to register for a free trial, contact TitanHQ today.
Information is the life blood of our digital existence. Our data, however, can sometimes be subject to corruption. They can find themselves stranded and damaged on an external or internal hard drive, a flash drive, or even on a CD or DVD. The circumstances that lead to a need to recover data can range from deleting or downloading something you shouldn’t have to dropping your drink onto your device. Even though the type of corruption can vary, there are usually just four phases to data recovery: repair the hard drive, image the drive to a new drive, recover file system structures and files, and finally repair damaged files.
Repair the Hard Drive
There is good news and bad news when experiencing hard drive damage. The bad news is that most end users cannot make these repairs due to minor considerations like dust in the environment and major considerations like lack of expertise. The good news is that often times the data on the structurally challenged hard drive can be completely recovered when placed in the hands of a professional in a clean, dust-free environment.
Image the Drive to a New Drive
After a hard drive failure, getting the data off the drive is of critical importance. Some drives can be accessed by using a DOS boot disk. This can allow you to boot your computer and access files even with a corrupted operating system. If this is unsuccessful, the next step is to try transferring the drive to another computer. This may allow file structures to be looked through using Windows Explorer.
Note on Data Recovery Software
Typical data recovery software will run under different operating systems and be useful in recovering files. Be sure to install this software on a drive other than the one from which you are attempting to recover files because the disk space that contains lost files often looks as if it is available to be written on.
Recover File System Structures and Files
Once the file structures can be accessed, software damage typically referred to as “logical damage” can be searched for. Sometimes specific recovery software can be remotely accessed via the internet but only in situations where there is no physical damage to the hard drive. Some free data recovery software is available online, but it’s a good idea to contact a professional IT technician before using one.
Repair Damaged Files
Once your files have been recovered, the data can be manually reconstructed in a non-damaged sector. This is accomplished by first selecting the partition which was used to store the files you want to recover. It is then possible to cut and paste the desired files to the new sector. A professional may also be able to use a hex editor to manually reconstruct some data.
The best way to protect against data loss is to back up files on a regular basis. Once they are damaged, if the files are valuable to you, call an expert in data recovery in San Marcos. For more information about data recovery, contact tekRESCUE, located in San Marcos, TX.
In our previous tutorial, we looked at the concepts of the Mean Opinion Score (MOS) and subjective voice quality testing that is defined in the International Telecommunications Union—Telecommunication Standardization Sector’s (ITU-T’s) P.800 Recommendation titled Methods for Subjective Determination of Transmission Quality (http://www.itu.int/rec/T-REC-P.800/en).
The ITU-T, being an agency of the United Nations, is quite instrumental in developing methods of testing global communication circuits. Much of the work of interest to VoIP net managers falls under the guidance of ITU-T Study Group 12, the lead study group on quality of service and performance. This group is responsible for four key areas of recommendations:
G-Series: Transmission systems and media, digital systems and networks
I-Series: Integrated Services Digital Network
P-Series: Telephone transmission quality, telephone installations, local line networks
Y-series: Global information infrastructure, Internet protocol aspects and next generation networks
Under the G-series we find ITU-T Recommendation G.107, titled The E-model, a Computational Model for use in Transmission Planning (http://www.itu.int/rec/T-REC-G.107/en). The E-model is an algorithm, originally developed in 1998 and updated annually, which is designed as a transmission planning tool that can evaluate the effects of transmission degradations coming from several sources. The output of the algorithm is a quality rating value, called R, which varies directly with the overall quality of the conversation. The general premise is that the various transmission factors are additive, and that a composite result will more accurately represent the quality of the communication circuit. The transmission rating factor R is derived as follows:
R = Ro − Is − Id − Ie-eff + A
Ro represents the signal-to-noise ratio, including the circuit noise and room noise
Is represents a combination of impairments which occur simultaneously with the voice signal
Id represents the impairments caused by delay
Ie-eff, the effective equipment factor, represents impairments caused by low bit-rate codecs
A, the advantage factor, compensates for impairment factors when access advantages (such as cellular or satellite circuits) are available to the end user.
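As a simple illustration, the composite rating is just the sum of those terms. The sketch below uses made-up factor values; the full G.107 Recommendation defines detailed formulas and default values for each factor:

```python
def e_model_r(ro, i_s, i_d, ie_eff, a=0.0):
    """Transmission rating factor R from the E-model's top-level equation (ITU-T G.107)."""
    return ro - i_s - i_d - ie_eff + a

# Hypothetical factor values for a VoIP call using a low-bit-rate codec over a long path.
r = e_model_r(ro=94.8, i_s=1.4, i_d=8.0, ie_eff=11.0, a=0.0)
print(round(r, 1))  # 74.4; a higher R corresponds to better conversational quality
```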
While the standard acknowledges some limitations of the E-model, nevertheless, it provides a systematic approach that (one hopes) exceeds the “expert, informed guessing” (as the standard says) that goes into many transmission planning exercises.
In addition to the MOS and E-model tests, the ITU-T has also developed some objective tests that were originally devised for codec testing. These include: ITU-T Recommendation P.861, Objective Quality Measurement of Telephone-band (300-3400 Hz) Speech Codecs (http://www.itu.int/rec/T-REC-P.861/en), and the more recent ITU-T Recommendation P.862, Perceptual Evaluation of Speech Quality (PESQ): An Objective Method for End-to-End Speech Quality Assessment of Narrow-Band Telephone Networks and Speech Codecs, published in 2001, and updated in 2007. These methods were originally developed for the lab testing of codecs, and determine the distortion introduced into the system by comparing the original input signal with an impaired signal at the output.
Other recent work includes ITU-T Recommendation P.561, In-service Non-intrusive Measurement Device—Voice Service Measurements, (http://www.itu.int/rec/T-REC-P.561/en), which defines devices that can be used in-line to measure various voice-grade parameters including speech and noise levels, echo loss, and so on.
ITU-T P.562, Analysis and Interpretation of INMD Voice Service Measurements (http://www.itu.int/rec/T-REC-P.562/en) describes methods to analyze the individual measurement parameters over single and multiple calls, and how those measurements can be applied to network planning and operations activities.
A third document, ITU-T Recommendation P.563, Single-ended Method for Objective Speech Quality Assessment in Narrow-band Telephony Applications (http://www.itu.int/rec/T-REC-P.563/en) describes an objective method for predicting the subjective quality of narrow-band telephony applications. According to the standard, this method is recommended for non-intrusive speech quality assessment and live network monitoring, and is able to predict the speech quality on a perception based Mean Opinion Score—Listening Quality Objective (MOS-LQO) scale as defined in ITU-T Recommendation P.800.1, thus providing some degree of correlation between subjective and objective measurements.
Our next tutorial will continue our examination of this real time performance challenge, by looking at the latest network performance metric: Quality of Experience.
Copyright Acknowledgement: © 2007 DigiNet Corporation ®, All Rights Reserved
Mark A. Miller, P.E., is President of DigiNet Corporation®, a Denver-based consulting engineering firm. He is the author of many books on networking technologies, including Voice over IP Technologies, and Internet Technologies Handbook, both published by John Wiley & Sons. | <urn:uuid:54d4f580-815e-4e7e-bde3-d55a2b8cca5a> | CC-MAIN-2022-40 | https://www.enterprisenetworkingplanet.com/unified-communications/healthy-voip-nets%EF%BF%BDpart-iv%EF%BF%BDquality-of-service-e-model-and-objective-testing/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334596.27/warc/CC-MAIN-20220925193816-20220925223816-00152.warc.gz | en | 0.895096 | 1,127 | 3.375 | 3 |
(Special thanks to author Anna Faller, from AVG. Contents have been edited for length. Please enjoy this very interesting and informative read!)
Ah, the age-old question: do Macs get viruses? By now, I don't think it's news to anyone that Mac devices can certainly get infected with malware. As technology continues to advance, Macs face an increasing number of online threats. So, how can you protect your Mac? Read on for our complete rundown of Mac viruses and malware. And learn how an antivirus app can keep your Mac safe.
The rise of viruses on Apple computers
Contrary to popular belief, Mac security is far from infallible. There’s a prevailing misconception that Mac computers are somehow immune to viruses and such. That simply isn’t true. But it is true that malicious software for Macs is relatively new to the cyber-scene.
Macs have been available only since the mid-1980s. Windows PCs have long dominated the personal computer market. With such market saturation, the Windows operating system was the prime target for cybercriminals. Macs, by comparison, simply weren’t worth the effort.
But these days, Mac viruses are steadily increasing as hackers create malicious programs specifically for Macs. Malware targeting Mac computers has grown exponentially since 2012, far outpacing similar growth on the Windows side.
According to Vox, Windows computers saw an average of 5.8 threats per device in 2019. Macs saw 11 per device. In other words, Macs faced almost twice as many threats as Windows computers. Wow.
And viruses aren't even the most potent problem. Instead, dangers come in a variety of forms, like spyware, adware, and Trojans, all of which can infiltrate Apple devices.
But just because malware can infect Macs doesn’t mean your computer has to fall prey. The rise in Mac-specific malware has also triggered an increase in antivirus tools for Macs. And there has never been a better time to get your hands on a good one.
High-quality antivirus software can protect your Mac from all sorts of threats. AVG AntiVirus (https://www.corporatearmor.com/brand-detail/avg/) combines a no-nonsense malware scanner with a virus removal tool. It has the brawn of a malicious software detector and blocker, so you get protection and peace of mind.
How do Macs get viruses?
Macs can get infected by viruses in the same way as Windows PCs. Basically, a computer virus is just a piece of code. It harms your computer by corrupting files, destroying data, and wreaking havoc without your permission. But what really sets a virus apart is that it's self-replicating. In other words, it can copy itself across files, computers, and data channels. All without your consent.
Since Mac viruses took so long to surface, it’s a fair question: How do Macs get viruses, anyway? As you might imagine, a virus can gain access to your Mac in multiple ways. A few of the most common channels include scareware, which is a phony virus infection notice.
Scareware ads claim that their ‘antivirus’ software will repair the alleged damage.
Infected emails are another. Viruses can be transmitted through email by both downloadable attachments and within the HTML of the actual email text.
Outdated software is another big source. When you don’t install a patch or update, you’re potentially leaving yourself at risk.
Instant messaging apps like Skype and Facebook Messenger are often used to spread computer viruses through infected links sent through chat windows.
And lastly, there's P2P file sharing. Peer-to-peer apps like Dropbox and SharePoint can also be used to create mass file corruption. The thing is, services like these sync information to any computer linked to an account. So if someone uploads an infected file, it's granted immediate access to all connected computers. Ouch.
Now, viruses are just one type of malware, and there are many others. So, while all viruses are malware, not all malware is a virus. What are some other kinds of malware Macs are liable to get? Ransomware, spyware, adware, trojans, you name it.
Doesn’t Apple protect against viruses?
In the past Macs may have been more secure than PCs. This was mostly because hackers were focusing on Windows. When Macs became commercially available, an overwhelming majority of people continued to use PCs. This meant that targeting Macs didn’t make much financial sense for the bad actors.
And Macs aren’t easy targets. Mac’s operating system, macOS, has effective security features that make infiltration tricky. And while you shouldn’t rely completely on these built-in safeguards, they do offer a good first line of defense.
Macs need antivirus protection, too
It's not just Windows PCs that need protection anymore. Macs need extra security, too. A comprehensive antivirus tool is the best offense for ensuring the safety and performance of your computer. And Corporate Armor partner AVG Antivirus is among the best. So give us a call at 877-449-0458, or email us at [email protected]. We are happy to help you with our many anti-virus products for Mac and PC systems. Thanks for reading!
AVG Antivirus Highlights
- Real-Time Protection: AVG runs seamlessly in the background without interrupting your work
- Virus definitions are constantly updated as long as you are connected to the internet
- Not only protects your workstation, it also scans files located on your network servers
- Ability to repair some files infected by viruses and a quarantine area
‘Phishing’ has been around for decades now. An algorithm-based phishing scheme was first flagged on America Online (AOL) in the 1990s; it generated random credit card numbers to match original cards tied to AOL accounts.
But by the time AOL had caught up to the scam in 1995, phishers had already moved on to newer technologies…
Phishing has, by far, been one of the fastest evolutions in the history of cyber-crime. Over time, scammers have devised new types of phishing, and attacks have become increasingly sophisticated.
Although the term ‘phishing’ is mainly used to describe email attacks, it can now also be conducted through text message, phone or social media. Attacks like these open up the door for hackers to sabotage systems, access and manipulate sensitive data, steal confidential information and install malicious software such as ransomware.
Banking, technology and healthcare are the most targeted sectors for phishing attacks. This is primarily due to their high volume of users and the massive amounts of data they store. But phishing attacks can hit an organisation of any size and type. So, it’s essential to know what to look out for and how to prevent them from happening.
Phishing from all angles
These days, phishing attacks can typically be classified into five categories: smishing, vishing, spear phishing, whaling and search engine phishing.
Smishing is one of the easiest types of phishing attacks, which target users through SMS alerts. With smishing, users might receive a fake message telling them to look at something via a link or a phoney order with a cancellation link. But clicking on the link will take them to a sham site designed to gather personal details.
Vishing (a combination of ‘voice’ and ‘phishing’) is when phishers call victims pretending to be a friend, relative or company. By using information gained from social media, hackers can confidently communicate with individuals and get the information they need without raising any suspicion.
Traditional phishing often involves sending emails to thousands (even millions!) of unknown people. But spear phishing takes it one step further by carefully targeting and actively scamming a particular user. Phishers carry out a complete social profile check of the user and the company they work for to make the scam appear more legitimate. As such, these attacks are especially risky and tricky to spot. The most common type of spear phishing is payment diversion. This is where a seemingly legitimate bank or utility company contacts a would-be victim with a change in banking details.
Whaling is very similar to spear phishing. However, instead of targeting lower-level employees, these types of attacks go after senior management positions such as CEOs, CFOs and CISOs — who are often the key to information chains in an organisation.
Search engine phishing then refers to the creation of a fake webpage for targeting specific keywords. Phishers wait for users to land on the fake website via legitimate search engines, such as Google, and then steal their data through it.
How to spot a scam
Phishing emails and text messages may look like they’re from a company you know or trust — such as a bank, credit card provider, social networking site or an online store. So, how do you know if they’re legitimate or not?
Some of the most common red flags to look out for include:
- Mentions of suspicious activity or login attempts
- Claims there’s a problem with your account or your payment information
- Request to change payment details
- Asking you to confirm personal information such as bank details, logins or passwords
- Unexpected invoices
- Asking you to click on a link to make a payment or view something
- Claims you’re eligible for a refund
- Offers of coupons or free products
- Email addresses made up of lots of numbers and letters
- Webpages, emails or text messages that are littered with spelling mistakes
- Generic messages or email addresses that don’t address you by name
However, as attacks become more sophisticated, it is becoming increasingly difficult to spot a phishing attempt.
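Still, a few of the red flags above lend themselves to simple automated screening. The hypothetical Python sketch below scores a message against a handful of them (urgent account language, generic greetings, links that point at raw IP addresses). It is a toy illustration, not a substitute for a proper secure email gateway.

```python
import re

# Toy indicators drawn from the red flags above; a real filter would use many
# more signals (sender reputation, SPF/DKIM results, URL and attachment analysis).
URGENT_PHRASES = ("verify your account", "suspicious activity",
                  "payment information", "click the link", "refund")
GENERIC_GREETINGS = ("dear customer", "dear user", "dear sir/madam")


def phishing_score(subject: str, body: str) -> int:
    """Return a rough count of red flags found in the message (higher = worse)."""
    text = f"{subject} {body}".lower()
    score = 0
    score += sum(2 for phrase in URGENT_PHRASES if phrase in text)
    score += any(greeting in text for greeting in GENERIC_GREETINGS)
    # Links to raw IP addresses are a classic sign of a spoofed site.
    if re.search(r"https?://\d{1,3}(\.\d{1,3}){3}", text):
        score += 3
    return score


msg = ("Action required", "Dear customer, we noticed suspicious activity. "
       "Click the link http://192.168.4.12/login to verify your account.")
print(phishing_score(*msg))
```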
A multi-layered approach
In the past, defences against phishing have relied exclusively on users being able to identify phishing emails. But these days, scammers will have corrected all the typical red flags — including masking their URL more convincingly and knowing the user’s name.
So, how can companies hope to defend against these advanced attacks without compromising productivity? The secret lies in a multi-layered approach, which harnesses the power of advanced software.
Here are some of the ways you can protect your organisation from phishing attacks:
- Educate users on what to look out for, how to identify a phishing email and what actions they should take if they suspect an email to be malicious.
- Use available tools such as those provided by MetaCompliance to enforce and measure the effectiveness of the education process.
- Reduce the information available to attackers. Consider what visitors to your website or social profiles really need to know (and what could be useful for attackers).
- Use anti-spoofing controls to make it harder for emails from your domains to be spoofed.
- Filter or block incoming phishing emails through a cloud-based email provider’s built-in service or a bespoke service for your email server.
- Use modern, up-to-date browsers that will block known phishing and malware sites.
- Run a proxy service to block any attempts to reach websites which have been identified as hosting malware or phishing campaigns.
- Protect your devices with the latest security software and set it to update automatically, so it is always kept up to date with the latest patches.
- Prevent users from accidentally installing malware from a phishing attempt by limiting administrator accounts through privileged access management.
- Improve identity and access management (IAM) through multi-factor authentication. This will make it harder for scammers to log into your accounts if they do get your username and password.
- It is also worth seeking the help of a cybersecurity consultant or IAM specialist to help implement appropriate technology and processes within your company.
Richard Menear, management, Burning Tree
What do you think of when you hear "hacker"? The stereotype that comes to mind is a guy in a hoodie hunched over a computer, or a computer genius. In the movie "The Social Network," for example, you see the Mark Zuckerberg character just whizzing through and hacking networks because he's a super genius.
In the real world, not all cybercriminals are computer masterminds, but most are experts at social engineering. They’re good at pretending to be someone or something they’re not, and exploiting your vulnerabilities to manipulate you into trusting them and handing over personal information. They create “targeted lies designed to get you to let your guard down.”
What to do if you become a victim of a cyber attack:
- Immediately notify your supervisors and IT department.
- Notify your local authorities to file a complaint.
- Keep a record of all evidence of the incident and the suspected source of the attack.
Cybercrime has become an international affair in which multiple agencies all over the world join in the fight against it. However, the lack of reporting, coupled with the fact that cybersecurity agencies are significantly understaffed, makes it extremely difficult for cyber criminals to be caught, apprehended, and charged. This is why it is important to report cyber attacks.
Report Cybercrime Resource
The faster a cyber attack is reported, the better chance there is for it to be controlled. Not reporting a cyber attack is like not calling 911 after witnessing a car accident or robbery. If authorities don’t know what’s happened, they can’t help.
If you worry about cybersecurity, you’re not alone. Did you know 68% of business managers and owners feel their cybersecurity risks are increasing too? A managed service provider can help you relax by securing your network, helping you construct a cyber security protocol, provide data back up and recovery solutions, as well as equip your employees with security training. | <urn:uuid:34cf50cc-3cea-45e5-81d7-c550fedb4fce> | CC-MAIN-2022-40 | https://www.ctgmanagedit.com/what-is-a-cyber-crime/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335365.63/warc/CC-MAIN-20220929194230-20220929224230-00152.warc.gz | en | 0.947494 | 408 | 2.78125 | 3 |
SSL (Secure Sockets Layer) and TLS (Transport Layer Security) are popular cryptographic protocols that are used to imbue web communications with integrity, security, and resilience against unauthorized tampering. PKI works together with the TLS protocol to establish secure connections between clients and servers over the internet, ensuring that the information relayed is encrypted and cannot be read by an external third party.
Note: SSL was the predecessor of TLS, and the world began moving away from SSL once TLS was introduced in 1999, thanks to the improved security features of the latter. The current version of the protocol is TLS 1.3. However, SSL continues to be used as a metonym for both protocols in general (for example, the term ‘SSL certificate’ is widely used, even though SSL has been completely deprecated and no modern systems support SSL anymore).
Connections that are secured by TLS will indicate their secure status by displaying HTTPS (Hypertext Transfer Protocol Secure) in the address bar of web browsers, as opposed to just HTTP.
While TLS is primarily used to secure client-server connection, it is also used to protect emails, VoIP calls, and other connections.
In theory, web connections are completely possible without TLS to secure them. However, without a security protocol in place, the communication would be rendered completely open to external access. If a browser connected to the website of an online store, and a user had to enter their credentials to log in, those credentials could easily be lifted by an observing party.
TLS, at its core, serves to provide end-to-end encryption for all data transmitted from one point to another, and uses cryptography to ensure that only the two transacting bodies are capable of reading this information. Every service in the world now mandates that connections be secured by TLS – leading browsers do not allow users to access websites without a valid TLS connection.
TLS has the following benefits:
- Confidentiality: the data exchanged is encrypted and cannot be read in transit.
- Authentication: certificates allow each party to verify who it is communicating with.
- Integrity: any tampering with the data in transit can be detected.
When two systems that leverage TLS attempt to connect, each system will make an effort to verify that the other supports TLS. This process is called the TLS handshake, and it is here that both parties decide upon the TLS version, encryption algorithm, cipher suite etc. that will be used in the procedure. Once a TLS handshake has been successfully executed, both systems start exchanging data on a secure line.
Note: A working knowledge of PKI and its constituents, such as keys, may prove to be useful prior to understanding TLS. You can read more about it in the link above. For now, all you need to know is that encryption and decryption are carried out with the help of cryptographic devices called keys. In public key cryptography, Public keys are used to encrypt information, while secret Private keys can be used to decrypt that information. Since two different keys are involved, this technique is called ‘asymmetric cryptography’, as opposed to ‘symmetric cryptography’, where a single key can perform both encryption and decryption. More on this below.
Every TLS handshake follows the same basic steps. For the sake of simplicity, let’s assume a browser (a client) is attempting to connect to a server, which hosts a website:
- The browser sends a ‘hello’ message listing the TLS versions and cipher suites it supports.
- The server replies with the parameters it has chosen and presents its TLS certificate, which contains its public key.
- The browser verifies the certificate against the trusted CAs built into its trust store.
- Using the server’s public key, the two sides perform a key exchange and derive a shared session key.
- The handshake completes, and all further traffic is encrypted symmetrically with the session key.
The above example was an instance where both asymmetric and symmetric encryption was used to secure a connection.
Asymmetric encryption was used to create the session key. But from that point onwards, the session key was used to bilaterally encrypt and decrypt the entire information flow between both parties. Here’s why:
Asymmetric encryption is mathematically resource-intensive. The encryption-decryption operations involving two keys from both parties takes a heavy toll on the processing unit powering this process. If a system were configured to handle an entire connection this way – by using a public key to encrypt and a private key to decrypt it – it would probably give out within the first few minutes. It’s much more secure than its symmetric counterpart, but cannot be facilitated in an efficient manner.
However, symmetric encryption, which makes use of a single, shared key to encrypt and decrypt, is not that resource-intensive. This is precisely why asymmetric encryption is used to establish a secure link between two parties, and used to generate the session key which, in theory, only those two parties can possibly know about. Now, symmetric encryption can be used to secure the connection, given the additional layer of security added to it by the first step.
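To see the outcome of a handshake from the client side, here is a minimal Python sketch using only the standard library's socket and ssl modules. The host name is just an example; any HTTPS-enabled server would do.

```python
import socket
import ssl

host = "www.example.com"  # example host; any HTTPS-enabled site works
context = ssl.create_default_context()  # loads the system's trusted CA store

with socket.create_connection((host, 443)) as sock:
    # wrap_socket performs the TLS handshake described above
    with context.wrap_socket(sock, server_hostname=host) as tls:
        print("Negotiated protocol:", tls.version())  # e.g. 'TLSv1.3'
        print("Cipher suite:", tls.cipher())          # (name, protocol, secret bits)
```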
Digital certificates were mentioned earlier in this article. In general, digital certificates are digital documents that are ‘signed’ by trusted authorities, and act as documents of ownership of a public key. By extension, they serve to validate the legitimacy of a server or a client. However, there are several types of digital certificates available. The term ‘X.509 certificates’ is used to differentiate SSL/TLS certificates from other kinds of digital certificates (code-signing certificates, for instance).
Let’s take a look at why they’re necessary.
As the previous section made clear, digital certificates are important pieces in the public key cryptography domain. They’re attached to public keys, and are proof that the holder of the public key is actually the legitimate owner. This is because digital certificates are signed, sold, and issued by bodies called ‘Certificate Authorities’ (or CAs, for short), which are trusted bodies responsible for verifying the authenticity of anyone who requests a certificate. Since major operating systems and browsers have ‘trust stores’ consisting of these CAs built into them, browsers will automatically trust certificates issued by major CAs.
Certificates are key to making websites easily recognizable to users as a trusted, secure page. Webpages with valid SSL/TLS certificates installed on them will have ‘https’ preceding the name of the website in the search bar, given that the certificate has been installed correctly. In some browsers, the use of a valid Extended Validation certificate (a type of TLS certificate) will cause the HTTPS padlock to turn green, providing visitors with additional assurance that they are on a legitimate website.
Note: The certificates issued by CAs which are used to secure web connections that use TLS are called TLS certificates – this term is interchangeably used with ‘SSL certificates’ due to the term being coined back when SSL was in mainstream use. | <urn:uuid:fc4cf6af-7fb3-49c9-8b83-0d38c91f8f9e> | CC-MAIN-2022-40 | https://www.appviewx.com/education-center/what-is-tls-ssl-protocol/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335530.56/warc/CC-MAIN-20221001035148-20221001065148-00152.warc.gz | en | 0.957187 | 1,302 | 4.09375 | 4 |
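Extending the earlier connection sketch, the certificate that a server presents during the handshake can also be inspected programmatically. The fields printed below (subject, issuer and validity dates) are among those a browser checks when deciding whether to trust a site; again, the host name is only an example.

```python
import socket
import ssl

host = "www.example.com"
context = ssl.create_default_context()

with socket.create_connection((host, 443)) as sock:
    with context.wrap_socket(sock, server_hostname=host) as tls:
        cert = tls.getpeercert()  # parsed X.509 fields as a dict

# 'subject' and 'issuer' are tuples of relative distinguished names.
print("Subject:", dict(item[0] for item in cert["subject"]))
print("Issuer :", dict(item[0] for item in cert["issuer"]))
print("Valid  :", cert["notBefore"], "->", cert["notAfter"])
```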
OpenSSH is one of the most common mechanisms in use for providing secure remote access to servers. A flaw in a key part of how Debian-based Linux distributions like Ubuntu secure OpenSSH has put potentially millions of servers at risk from a brute force attack. The attack could have major implications for the Internet.
The Internet Storm Center (ISC) at SANS is raising the alarm on the issue with a yellow alert on the flaw. According to ISC handler Bojan Zdrnja, the development of automated scripts exploiting key based SSH authentication looks like a real threat to SSH servers around the world. In a blog post, Zdrnja argued that public keys generated on any Debian based machine between September 2006 and 13th of May 2008 are vulnerable.
“It is obvious that this is highly critical — if you are running a Debian or Ubuntu system, and you are using keys for SSH authentication (ironically, that’s something we’ve been recommending for a long time),” Zdrnja wrote. “In other words, those secure systems can be very easily brute forced.”
Security researcher HD Moore, leader of the Metasploit security effort, has gone a step further, explaining in a public post how he was able to brute force 1024, 2048 and 4096-bit keys. The flaw itself exists in a Debian-specific version of the OpenSSL package, which generates the keys that are used in OpenSSH. Even though OpenSSL is widely used by other Linux distributions, it is not necessarily at risk, according to Moore.
“The flaw in question was introduced by a Debian-specific patch,” Moore told InternetNews.com. “This patch was not pushed upstream to the OpenSSL folks, so only distributions based on Debian have this issue.”
“It’s obviously a very significant issue being a remote exploit,” Canonical CEO Mark Shuttleworth told InternetNews.com.
Shuttleworth added that folks who have applied the update are in a good position, and that Ubuntu is very responsive on security, with a primary focus on being able to respond to any issue that may arise.
“We do have a substantial amount of pro-active security in the system,” Shuttleworth explained. “Where we design the configuration of the system so services are isolated form one another. So a compromise in one service doesn’t affect the rest of the system.”
Moore noted that even systems that do not use the Debian software need to be audited in case any key is being used that was created on a Debian system. Tools and patches have been released by Debian and Ubuntu to fix the issue and identify any potentially vulnerable keys.
“Any SSH server that uses a host key generated by a flawed system is subject to traffic decryption and a man-in-the-middle attack would be invisible to the users,” Moore explained in his post.
Though Moore was able to crack the keys, the brute force methods used require a certain degree of computing power. Moore noted in an FAQ about the keys he was able to brute force that he used a 31-core Xeon cluster clocked at 2.33 GHz. Using that large cluster, it took two hours to generate the 1024-bit and 2048-bit RSA keys for x86. Other keys, including 4096-, 8192- and even 16384-bit keys, could also be generated given enough time.
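The reason this brute forcing is feasible at all is that the flawed generator drew its randomness from a tiny seed space (essentially the process ID), so an attacker can simply enumerate every possible key. The Python sketch below illustrates the principle with a deliberately simplified stand-in for key generation; it does not reproduce the real OpenSSL code paths.

```python
import hashlib


def toy_keygen(seed: int) -> str:
    """Stand-in for key generation: derives a 'fingerprint' from the seed.

    In the flawed Debian package the effective seed was roughly the process
    ID, i.e. only about 32,768 possibilities per key type and size.
    """
    return hashlib.sha1(seed.to_bytes(2, "big")).hexdigest()


# Fingerprint observed on a target system (here: generated from PID 1234).
target_fingerprint = toy_keygen(1234)

# Brute force: walk the entire seed space and compare fingerprints.
for pid in range(2 ** 15):
    if toy_keygen(pid) == target_fingerprint:
        print(f"Key recovered: it was generated with seed/PID {pid}")
        break
```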
Rohit Dhamankar, senior manager of security research at TippingPoint, noted that it's not uncommon to see a large volume of brute force attempts against servers. While OpenSSH is now being targeted, brute force crackers also typically target Microsoft SQL servers, trying to guess username and password combinations.
Dhamankar noted that time isn’t the only defense a user might have.
Intrusion Prevention Systems (IPS) can be set to deny IP addresses after a certain number of tries.
Article courtesy of InternetNews.com | <urn:uuid:5bb15f32-b4fe-4905-9c52-b252e23920d2> | CC-MAIN-2022-40 | https://www.enterprisenetworkingplanet.com/security/debian-ubuntu-ssh-under-attack/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335530.56/warc/CC-MAIN-20221001035148-20221001065148-00152.warc.gz | en | 0.954778 | 832 | 2.796875 | 3 |
The pace of development in AI has taken off recently. While it is now just over two decades since the world's first robot chess champion, Deep Blue, AI is breaking new ground technologically and, in time, will inevitably test what it means to be human.
“We are making great strides in enabling computers to perceive things, so we can build amazing applications that can mimic human behaviour, but it is not intelligence in the way of a human,” says Andrew Herbert, chair of The National Museum of Computing.
One of the most recent breakthroughs came in June, when Facebook published research introducing dialog agents with the ability to negotiate. Similar to how people have differing goals, run into conflicts, and then negotiate to come to an agreed-upon compromise, the researchers demonstrated that it is possible for dialog agents with differing goals (implemented as end-to-end-trained neural networks) to engage in start-to-finish negotiations with other bots or people while arriving at common decisions or outcomes, according to Facebook's blog.
The bots were switched off after they developed their own language for communicating.
In this series we explore several breakthroughs in AI and machine intelligence.
To start with, let's look at the first big AI breakthrough >>
Click here to read how game play has helped advance AI >> | <urn:uuid:9bf828f8-f59f-414b-ab93-1ca689bf39f4> | CC-MAIN-2022-40 | https://www.computerweekly.com/photostory/450423799/AI-A-brief-history-of-man-versus-machine-intelliegnce/1/Bot-versus-bot | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337322.29/warc/CC-MAIN-20221002115028-20221002145028-00152.warc.gz | en | 0.960445 | 271 | 3.328125 | 3 |
Product Liability Issues Can be Prevented
Product Liability and Testing are areas that are of increased concern for product developers and manufacturers. As products become more complex, properly designed test and evaluation programs must verify designs to prevent product liability issues that can greatly damage a company’s reputation.
Product Liability arises from incidents where a product’s performance departs from its intended design. These incidents often involve serious injury or wrongful deaths. These incidents can be caused by design defects, manufacturing defects, and failure to warn or marketing defects.
Penalties imposed for cases of product liability vary from nation to nation and vary between states in the United States. As a trend however, manufacturers are being held to more stringent standards around the world.
The Role of Test and Evaluation
Increased dependency on electronic products in every sector of our lives has created a greater potential for vulnerabilities that may cause serious injury or even death. Product safety standards can and do address a good many of these vulnerabilities. However, because a standard cannot address the wide range of environmental, electrical, and EMI/EMC effects that may be present in a product’s real world applications, it is often beneficial to conduct evaluations over and above those required for compliance to a given marketplace.
Environmental Causes of Product Failures
To assess testing requirements it can be helpful to conduct climatic and dynamic evaluations to examine the environmental stresses likely to occur in the lifetime of a product. A useful tool for this analysis is a Life Cycle Environmental Profile (LCEP). The LCEP is the method employed by MIL-STD-810 to identify stressors in all phases of a product lifetime, from leaving the shipping dock to final disposal.
Although MIL-STD-810 is a DoD standard, it is often used in commercial products where safety-critical performance is a necessity. Once an LCEP has been performed, realistic environmental issues and criteria can be established that will provide guidance for a test matrix that will be able to identify design deficiencies before production.
EMI/EMC and Electrical Product Failures
Because of the high volume of electronic devices in today’s world, the radio frequency environment is much denser across the spectrum than it ever has been. This also causes abnormalities and disturbances on power distribution systems that these devices share. Susceptibilities to electromagnetic interference are a major cause of operational anomalies.
The causes of these anomalies can be very difficult to predict and reproduce in the lab. Additionally, extreme care must be given to the design of monitoring equipment to catch intermittent failures in an EMI/EMC chamber while the equipment is under test. Furthermore, selection of appropriate methodologies for evaluation can be challenging. A susceptibility analysis can often assist in selection of relevant methods of evaluation.
CVG Strategy Can Help
CVG Strategy has decades of experience in product test and evaluation for equipment with safety critical requirements in a wide array of industries. We have the expertise in both Environmental and EMI/EMC to provide thorough analysis of your product’s potential vulnerabilities. We can then offer a wide array of services to verify a design before release to manufacture so that product liability concerns can be minimized. | <urn:uuid:5be30665-b963-4438-912b-6e7a90b76310> | CC-MAIN-2022-40 | https://cvgstrategy.com/product-liability-and-testing/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337432.78/warc/CC-MAIN-20221003200326-20221003230326-00152.warc.gz | en | 0.942124 | 639 | 2.515625 | 3 |
What Is a Threat Model?
Every day seems to bring news of a new threat to the security of your information technology: hackers, denial-of-service attacks, ransomware, unauthorized information disclosure. It’s hard to know where to start to address them all. It’s just as hard to know when to stop. Threat modeling can help.
A threat model identifies risks and prioritizes them. Although often associated with information technology, a threat model may be used to identify many types of risk. For instance, a threat model may identify hurricanes as a risk for property owners in the southeastern United States. Once risks have been identified, the threat model helps to prioritize identified risks and weigh the costs and benefits of addressing them. For example, a threat model weighing better windows versus storm shutters may prioritize storm shutters as the better response.
When it comes to information technology, a threat model is used to profile probable attackers and hackers and to identify both the most likely avenues of attack and the hardware and software most likely to be targeted. Defenders can then determine the security controls needed to protect the system from those threats and decide which to implement based on the costs and benefits of each.
Goals of Threat Modeling
Threat modeling evaluates threats and risks to information systems, identifies the likelihood that each threat will succeed and assesses the organization’s ability to respond to each identified threat.
1. Identifying Security Requirements and Vulnerabilities
The threat modeling process requires identifying security requirements and security vulnerabilities. Security vulnerabilities are often best identified by an outside expert. Using an outside expert may actually be the most cost-effective way to assess security controls.
Start by diagramming how data moves through the system, where it enters the system, how it is accessed and who can access it. List all software and other applications in the system and identify the system architecture.
Then use threat modeling to identify potential threats to the system. For example, are there terminals in public spaces that are not password protected? Is the server in an unlocked room? Has sensitive data been encrypted?
2. Quantifying the Criticality of Threats and Vulnerabilities
The average IT system may be vulnerable to thousands, even millions, of potential threats. No organization can afford to treat all threats alike or ignore them all. No organization can afford to treat every potential threat as critical to its survival. Because budgets and time are both limited, more severe threats must be given priority over lesser threats.
The Common Vulnerability Scoring System (CVSS) ranks potential threats from one to 10 according to their inherent severity and whether the vulnerability has been exploited since it was first discovered. A CVSS score of 10 indicates the most severe threat. A CVSS score of one indicates the least severe threat. The CVSS threat scoring system allows security professionals to access a reliable source of threat intelligence developed by others.
A raw CVSS score does not consider the context of a vulnerability or its place within the information technology system. Some vulnerabilities will be more critical to some organizations than to others.
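One common way to add that context is to pair the published score with a weight that reflects how critical the affected asset is to the business. The sketch below maps a score to the familiar qualitative bands and then applies a purely hypothetical asset weight; the band thresholds follow common CVSS practice, while the weighting scheme is an illustration only.

```python
def severity_band(base_score: float) -> str:
    """Map a CVSS base score to the usual qualitative rating."""
    if base_score >= 9.0:
        return "Critical"
    if base_score >= 7.0:
        return "High"
    if base_score >= 4.0:
        return "Medium"
    if base_score >= 0.1:
        return "Low"
    return "None"


def contextual_priority(base_score: float, asset_weight: float) -> float:
    """Scale the raw score by how critical the affected asset is (0.0-1.0).

    The weighting scheme is a made-up illustration, not part of CVSS itself.
    """
    return round(base_score * asset_weight, 1)


print(severity_band(9.8), contextual_priority(9.8, asset_weight=0.3))
print(severity_band(6.5), contextual_priority(6.5, asset_weight=1.0))
```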
3. Prioritizing Remediation Methods
Once you know how critical each vulnerability is to your organization, you can decide which are the most important to correct, a process called threat analysis. Threat analysis identifies the weak spots in the system and the potential threat posed by attacks using each one. The most critical vulnerabilities may need immediate attention to add security controls. The least critical vulnerabilities may need no attention at all because there is little chance they will be exploited or they pose little danger if they are.
How Should You Approach Threat Modeling?
There are several approaches to threat modeling. Choosing the right methodology begins with a deeper understanding of the process of threat modeling.
Understanding the Process of Threat Modeling
Threat modeling identifies the types of threats to a software application or computer system. It’s best to do threat modeling during the design of the software or system, so that vulnerabilities can be addressed before the system goes live. Changes in software, infrastructure and the threat environment are also important opportunities to revisit threat models.
Threat modeling generally follows these five steps:
- Set objectives for the analysis.
- Create a visual model of the system to be analyzed.
- Use the visual model to identify the threats to the system.
- Take steps to mitigate the threats.
- Validate that the threats have been mitigated.
Identifying the Differences in Threat Modeling Methodologies
Threat modeling identifies threats by focusing on potential attacks, system assets or the software itself. Asset-centric threat modeling focuses on system assets and the business impact of the loss of each targeted asset. For example, asset-centric threat modeling might ask what the impact on the business would be if a hacker denied access to the online order management system. The answer may be that there is a grave impact. On the other hand, a virus that infects a software program that is used only to track fixed assets may have little business impact because the fixed assets are also tracked on paper.
Attack-centric threat modeling identifies the threats against the system with the greatest chance of success. For example, attack-centric threat modeling asks how likely it is that a hacker could successfully tie up the online order management system in a denial-of-service attack. The answer may be that it is very likely because the system has an inherent and well-known vulnerability.
Finally, system-centric threat modeling focuses on understanding the system being modeled before evaluating the threats against it. For example, system-centric threat modeling begins by asking where the data in the online ordering system reside and how and where the system is accessed.
Choosing the Best Threat Modeling Methodologies
Which threat modeling methodology is best for your system? The right methodology for your system depends on the types of threats you are trying to model. You’ll want to consider the following:
- The types of threats and risks commonly faced by other companies in the industry
- The size and competence of your staff
- The available resources, financial and otherwise
- Your tolerance for risk
Examples of Threat Modeling Frameworks
Here are some examples of the most popular threat modeling methodologies:
Attack trees are based on decision tree diagrams. The “root” or base of the tree represents the attacker’s goal. The branches and “leaves” of the attack tree represent the ways of reaching that goal. Attack trees demonstrate that attackers often have multiple ways to reach their target.
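One way to make this concrete is to capture the tree as nested data, with the attacker's goal at the root and OR/AND nodes combining the feasibility of the branches. The example below is entirely hypothetical; the structure, not the numbers, is the point.

```python
# Each node is either a leaf with an estimated feasibility (0.0-1.0) or an
# OR/AND node whose children are alternative / required sub-steps.
attack_tree = {
    "goal": "Read customer database",
    "type": "OR",
    "children": [
        {"goal": "Steal admin credentials", "type": "OR", "children": [
            {"goal": "Phish an administrator", "feasibility": 0.4},
            {"goal": "Brute force the admin password", "feasibility": 0.1},
        ]},
        {"goal": "Exploit SQL injection", "type": "AND", "children": [
            {"goal": "Find an injectable parameter", "feasibility": 0.3},
            {"goal": "Bypass the web application firewall", "feasibility": 0.2},
        ]},
    ],
}


def feasibility(node):
    """OR nodes take the easiest branch; AND nodes need every step to succeed."""
    if "feasibility" in node:
        return node["feasibility"]
    scores = [feasibility(child) for child in node["children"]]
    if node["type"] == "OR":
        return max(scores)
    product = 1.0
    for s in scores:
        product *= s
    return product


print(f"Overall feasibility of '{attack_tree['goal']}': {feasibility(attack_tree):.2f}")
```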
STRIDE was developed by Microsoft to systematically identify a broad range of potential threats to its products. STRIDE is an acronym for six potential threats:
- Spoofing identity: an attacker may gain access to the system by pretending to be an authorized system user.
- Tampering with data: an attacker may modify data in the system without authorization.
- Repudiation: the attacker claims no responsibility for an action, which may be either true or false.
- Information disclosure: the attacker provides information to someone not authorized to access it.
- Denial of service: the attacker exhausts the resources needed to provide services to legitimate users.
- Elevation of privilege: the attacker does something (such as access confidential data) they are not authorized to do.
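In practice, the six categories are often used as a checklist, tagging each element of the system diagram with the STRIDE threats that apply to it. A minimal, hypothetical sketch of that bookkeeping:

```python
from enum import Enum


class Stride(Enum):
    SPOOFING = "Spoofing identity"
    TAMPERING = "Tampering with data"
    REPUDIATION = "Repudiation"
    INFO_DISCLOSURE = "Information disclosure"
    DENIAL_OF_SERVICE = "Denial of service"
    ELEVATION_OF_PRIVILEGE = "Elevation of privilege"


# Hypothetical mapping of system elements to the STRIDE threats that apply.
threats_by_element = {
    "login endpoint": [Stride.SPOOFING, Stride.ELEVATION_OF_PRIVILEGE],
    "orders database": [Stride.TAMPERING, Stride.INFO_DISCLOSURE],
    "audit log": [Stride.REPUDIATION, Stride.TAMPERING],
    "public API": [Stride.DENIAL_OF_SERVICE],
}

for element, threats in threats_by_element.items():
    print(f"{element}: {', '.join(t.value for t in threats)}")
```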
Process for Attack Simulation and Threat Analysis (PASTA) views the application as an attacker would. PASTA follows seven steps:
- Define the business objectives, system security requirements and the impact on the business of various threats
- Define the technical scope of the environment and the dependencies between the infrastructure and the software
- Diagram the flow of data within the application
- Run attack simulations against the system
- Map threats to existing vulnerabilities
- Develop attack trees
- Analyze the resulting risks and develop cost-effective measures to counter them
Trike uses threat models to manage, rather than eliminate, risk by defining acceptable levels of risk for various types of assets. For each system asset and each system user, Trike indicates the user’s level of access to each asset (create, read, update and delete) and whether the user has permission to take each action always, sometimes or never.
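A Trike-style actor-versus-asset view can be captured directly as data. The sketch below records, for a hypothetical ordering system, whether each CRUD action is always, sometimes or never acceptable for a given actor and asset; the roles and assets are invented for the example.

```python
# Allowed values per (actor, asset, action): "always", "sometimes", "never".
ACTIONS = ("create", "read", "update", "delete")

risk_matrix = {
    ("customer", "own orders"): {"create": "always", "read": "always",
                                 "update": "sometimes", "delete": "sometimes"},
    ("customer", "other orders"): {action: "never" for action in ACTIONS},
    ("support agent", "customer orders"): {"create": "never", "read": "always",
                                           "update": "sometimes", "delete": "never"},
}


def is_acceptable(actor, asset, action):
    """Look up whether an action falls within the defined acceptable level of risk."""
    return risk_matrix.get((actor, asset), {}).get(action, "never")


print(is_acceptable("customer", "other orders", "read"))    # -> never
print(is_acceptable("support agent", "customer orders", "read"))  # -> always
```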
Visual, Agile and Simple Threat (VAST) is an automated threat modeling process applied to either application threats or operational threats. To model application threats, VAST diagrams the threat to the architecture of the system. To model operational threats, VAST diagrams the threat from the attacker’s perspective.
The Common Vulnerability Scoring System (CVSS) assigns a severity score to each vulnerability. This score combines the vulnerability's intrinsic characteristics, how the vulnerability has evolved over time and the security level of the organization's environment.
NASA is set to award $400,000 each to three proposals that describe mission concepts to help researchers examine various parts of the space weather system. Proposers were chosen based on the feasibility and possible scientific application of their development plans for the nine-month concept studies, the agency said Wednesday.
The Extreme Ultraviolet High-Throughput Spectroscopic Telescope Epsilon Mission hopes to determine the role of hot plasma and magnetic field in solar activity and eruptions. The Aeronomy at Earth: Tools for Heliophysics Exploration and Research mission looks to observe the response of the ionosphere-thermosphere system to geomagnetic storms.
The Electrojet Zeeman Imaging Explorer mission seeks to examine the structure of an electric current called the auroral electrojet using three small satellites.
Principal investigators for the projects are:
- EUVST: Clarence Korendyke, U.S. Naval Research Laboratory
- AETHER: James Clemmons, University of New Hampshire
- EZIE: Jeng-Hwa Yee, Johns Hopkins University Applied Physics Laboratory
NASA said it will allocate up to $55M to one mission through the Heliophysics Explorers program. | <urn:uuid:cc2dc75a-f902-4680-9082-0e567c0288a2> | CC-MAIN-2022-40 | https://executivegov.com/2019/09/nasa-chooses-three-space-weather-research-mission-proposals/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337537.25/warc/CC-MAIN-20221005042446-20221005072446-00152.warc.gz | en | 0.870355 | 245 | 2.640625 | 3 |
If you have a laptop, you can get some work done on the go. That is if you have an internet connection. The thing is, we’re not always near a trustworthy network. Tap or click here for four ways to get internet in your RV: Antenna, Wi-Fi extender, booster and hotspot.
You can either pack up your laptop and find a coffee shop or create a hotspot with your iPhone when you need internet. By enabling a couple of settings on your iPhone, you’ll be able to share your mobile data with your computer or any other device.
Setting up a mobile hotspot isn’t as daunting of a task as you might be thinking. Read on to find out how to turn your iPhone into a hotspot quickly.
Create a Wi-Fi hotspot
It only takes a few taps to create a sharable internet connection on your iOS device. Tap Settings, and in the first group of options, tap Personal Hotspot. To make it a secure connection, enter a password in the Wi-Fi Password box. When that is done, toggle the slider next to Allow Others to Join to the right to enable it.
The hotspot is now ready to go and anybody with the password can connect to it. On the other device, the process to join is the same as any other Wi-Fi connection. On the second device, open the Settings app, and tap Wi-Fi.
Look for your created Wi-Fi hotspot and tap on it to connect. You’ll be asked to enter the password, and after that is done, it will connect.
Create a Bluetooth hotspot
The actions for creating a Bluetooth hotspot are almost the same, except that the connection method is slightly different. After following the steps in creating a hotspot with a secure password, you’ll need to pair your iPhone with the device that needs internet.
Go to the Settings app and tap Bluetooth. Enable Bluetooth by sliding the toggle next to it to the right. Now your device is discoverable. When the other device’s Bluetooth is also turned on, it will be displayed in the list of gadgets that you can connect to. Tap the name of the second device.
Usually, you will be asked to verify the passcode that is displayed on both screens. If it is the same code, you know that you are connecting to the right device. Note: For iOS-to-iOS connections, Apple suggests using Wi-Fi instead of Bluetooth.
Using USB to create a hotspot
In cases where your computer or laptop doesn’t have a Bluetooth receiver, the next best option is to connect through a USB cable. Again, go through setting up a hotspot by toggling the Allow Others to Join slider.
For the next step, you’ll need the latest version of iTunes installed on your machine. If you don’t, a notification will pop up when you open it, prompting you to update.
With the created hotspot active, connect your iPhone with a Lightning-to-USB cable to your computer. On your device, you’ll see a popup asking you if you Trust This Computer. Tap the Trust button and iTunes will start to sync data.
The connected computer will automatically use your iPhone’s internet connection, and there is nothing else that you need to do. On a Windows-based computer, look on the right side of the taskbar. You’ll see your iPhone listed as a wired internet connection.
Things to keep in mind
At any point, you can close the hotspot by disabling the slider next to Allow Others to Join in the Personal Hotspot section of Settings. If you are sharing a connection through a USB cable, you can simply unplug the iPhone and the connection will be canceled.
Android fans: How to turn your Android into a mobile hotspot
If you have Family Sharing enabled, family members can join your Personal Hotspot automatically. While it is not advisable, it can be set up without a password. For Family Sharing, go to Settings, tap Personal Hotspot > Family Sharing.
A list of family members will be displayed and you have the option of them joining automatically or only with your approval. For the latter, you need to tap on Approve whenever they want to make a connection. | <urn:uuid:baecbfb9-1da4-47e6-99f7-9d4ea7f4a8fb> | CC-MAIN-2022-40 | https://www.komando.com/tech-tips/iphone-mobile-hotspot/810290/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337537.25/warc/CC-MAIN-20221005042446-20221005072446-00152.warc.gz | en | 0.899463 | 896 | 2.53125 | 3 |
Difference between LED and Laser Light Source
As fiber optic systems have come into wide use, the optical light source has come to play an increasingly important part in them. We know that a basic optical fiber system consists of a transmitter, an optical fiber and a receiver. The fiber optic light source, an important component of the transmitter, is modulated by a suitable drive circuit in accordance with the signals to be transmitted. An optical light source is also needed for performing fiber optic network testing to measure the fiber optic loss in the cable plant. Light sources are offered in a variety of types, including LED, halogen and laser. Among these, LED and laser light sources are two types of semiconductor light sources. The following article will discuss laser vs. LED and list the differences between laser and LED light sources.
Basically, both kinds of light source must be able to turn on and off millions to billions of times per second while projecting a near microscopic beam of light into an optical fiber. To carry optical signals properly, they must be switched on and off rapidly and accurately enough to transmit the signals.
The general difference between them is that the LED is the standard light source, while laser light sources such as gas lasers are mainly used in some special cases. Lasers are more powerful and operate at faster speeds than LEDs, and they can also transmit light farther with fewer errors. Lasers are also much more expensive than LEDs.
LED fiber optic light sources are usually made of materials that influence the wavelengths of light that are emitted. A basic LED light source is a semiconductor diode with a p region and an n region. When the LED is forward biased, current flows through the LED. As current flows through the LED, the junction where the p and n regions meet emits random photons. LEDs emitting in the window of 820 to 870 nm are usually gallium aluminum arsenide (GaAlAs). A laser is also a semiconductor diode with a p and an n region like an LED, but it provides stimulated emission rather than the simple spontaneous emission of LEDs. The main difference between an LED and a laser is that the laser has an optical cavity required for lasing. The cavity is formed by cleaving the opposite ends of the chip to form highly parallel, reflective, mirror-like finishes.
The VCSEL, or vertical-cavity surface-emitting laser, is a popular laser source for high-speed networking. It consists of two oppositely doped Distributed Bragg Reflectors (DBRs) with a cavity layer between them. It combines high bandwidth with low cost and is an ideal choice for gigabit networking options. The idea for a vertically emitting laser originated between 1975 and 1977, to satisfy the planarization constraints of integrated photonics given the microelectronic technology available at the time. Nowadays, apart from its application in optical fiber data transmission, it is also widely used in other applications such as analog broadband signal transmission, absorption spectroscopy (TDLAS), laser printers, computer mice, biological tissue analysis and chip-scale atomic clocks.
Different wavelengths travel through a fiber at different velocities as a result of material dispersion. What you should always keep in mind is that neither a laser nor an LED emits a single wavelength; each emits a range of wavelengths known as the spectral width of the source. In testing, a fiber optic light source always works together with a fiber optic power meter. During operation, the source collimates its beam, aims it right down the center of the narrow single mode core, and propagates in essentially a single transmission mode. For more questions about fiber optic test equipment, such as visual fault locators, optical power meters and OTDR testers, please turn to FS.COM.
Related Article: Single Mode vs Multimode Fiber: What’s the Difference? | <urn:uuid:01978f66-3a9d-4f60-9e6c-744d82b6878e> | CC-MAIN-2022-40 | https://community.fs.com/blog/difference-between-laser-light-source-and-led-light-source.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338280.51/warc/CC-MAIN-20221007210452-20221008000452-00152.warc.gz | en | 0.942864 | 789 | 3.5 | 4 |
An important goal of commercial drone operations has always been long distance use. While there have been plenty of cases in which a drone can be put to work within a mile of the operator’s location, moving beyond that limit makes these small, remote-controlled aircraft much more useful to enterprises.
For example, inspecting a transmission line is useful, but inspecting several miles of transmission line all at once is dramatically more useful. The same is true for a number of business applications whether it’s inspecting real estate or taking photos of events during a disaster. Until now, however, companies normally couldn’t get permission to operate commercial drones beyond a mile, or what the Federal Aviation Administration calls line of sight (LOS).
More recently, a limited number of waivers have been available for ranges up to four miles, but those waivers were hard to get because they required specialized knowledge and expertise to operate such drones. Longer flights weren’t permitted until now.
The recent progress is the result of the FAA’s Pathfinder initiative, which involved industry partners working with the agency to develop guidelines and best practices for operating drones in ways that were previously prohibited, such as over crowds of people and beyond visual line of sight (BVLOS). To accomplish this the industry partners had to find ways to detect and avoid other aircraft while maintaining safe operations.
Drone maker PrecisionHawk and partners Mitre Corp. and the FAA have put together a report that makes a safety case for operations at distances of as much as 50 miles from the operator of the drone. According to PrecisionHawk CEO Michael Chasen, the guidance that users need to follow to operate BVLOS drones has to include an operational checklist, qualified operators with adequate hours operating the specific type of drone being used and a means of detecting aircraft.
Detecting most aircraft is fairly straightforward. An aircraft avoidance system called Automatic Dependent Surveillance—Broadcast (ADS-B) allows an aircraft to know the location of other aircraft with the same system. Chasen calls these “cooperative aircraft.”
But there are thousands of aircraft, especially general aviation and private aircraft that aren’t outfitted with the avoidance technology. To detect those aircraft, PrecisionHawk researchers located remote sensing company SARA (Scientific Applications & Research Associates) that had a technology capable of using sound to locate an aircraft up to ten miles away. The SARA detection system, according to Chasen, is about the size of an iPhone and is light enough to be mounted on a drone.
The drones also use access to a real-time database of aircraft locations provided by Harris Corporation. “We have put together an example of a drone that can fly 50 miles,” Chasen said. “This is a blueprint for companies about how to fly beyond line of sight.”
The Pathfinder Report includes details of how drones can avoid collisions with other aircraft, including handling avoidance maneuvers and near-misses. It also sets requirements for assistive technology (meaning how the drone’s location is visualized by the operator and how it’s controlled), crew training and experience requirements, along with the requirements for the aircraft itself.
The Pathfinder Report was released on May 1, so there aren’t any drone operators that have received the required FAA waivers to begin long-distance commercial drone operations. However PrecisionHawk is already operating in what’s called extended line of sight modes that allow the drone to travel up to four miles beyond the operator’s position.
“The longer range creates greater efficiency,” Chasen said. “People were paying money for aerial intelligence and land intelligence.” That intelligence includes gathering data for surveying, environmental monitoring and even news gathering. A number of news organizations are already using commercial drones in line of sight and extended line of sight operations.
Chasen said that PrecisionHawk has seen a great deal of interest among enterprise customers for BVLOS operations. To supply drones to meet that level of interest the company has developed a BVLOS-enabled multi-rotor drone platform. This platform can automatically identify all cooperative and non-cooperative aircraft in a 10 kilometer radius. Its design was based on the blueprint outlined in the Pathfinder Report.
“Flying drones over long distances—an imperative for inspecting miles of oil and gas pipeline in remote areas or hundreds of acres of crops—has been all but impossible to-date as the FAA requires very high safety standards from drone operators seeking to fly beyond line of sight,” Chasen said in a prepared statement.
The Pathfinder Report is a significant step towards actions such as drone deliveries, as has been envisioned by Amazon and other companies. At this point, long range drones are required to have human pilots, but even that requirement paves the way for a major step in commercial drone operations. While Amazon and others are looking towards future use of autonomous drones, having to use human pilots will still be more efficient than operating a vast fleet of delivery trucks and drivers that have to deal with crazed motorists, rush hour traffic, or remote delivery sites.
It will still be a while before you see routine beyond the visual line of sight drone operations. The FAA is obsessively safety oriented and until drone operators can prove that they can meet safety requirements, their drones won’t be allowed to fly. However, at least some operators such as PrecisionHawk will be able to provide flight services, teach others through their consulting arm and share their expertise in partnership with the FAA.
So your Amazon order won’t be coming by drone next week. But next year? Well, maybe. | <urn:uuid:2db4bed2-71a2-47c7-979d-f3cc9a5e90cb> | CC-MAIN-2022-40 | https://www.eweek.com/mobile/drone-makers-work-with-faa-on-beyond-line-of-sight-flights/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030331677.90/warc/CC-MAIN-20220924151538-20220924181538-00353.warc.gz | en | 0.953566 | 1,161 | 2.796875 | 3 |
Continuous testing (CT) is an approach to software development, where applications are tested continuously throughout their entire life cycle
What is continuous testing?
Continuous testing (CT) is an approach to software development, where applications are tested continuously throughout their entire life cycle. The goal of CT is to evaluate the software’s quality across the life cycle, providing important feedback earlier and permitting higher-quality and faster deliveries.
In the past, testing and development teams had been working in silos resulting in tests being extremely time exhaustive. Extensive testing required great number of human hours and with a huge spike in delivery speed and consumer expectations, the traditional system was causing huge bottle necks. It brings the development testing and operations teams together enhancing synergy among the teams and knowledge of individuals.
While Q/A focuses on standardization and adherence to it, testing involves finding and ironing out the bugs. It also helps in the QA team improve their understanding of the deployment environment. This enhances the quality of their replication of the environment.
What are the key factors in continuous testing?
• Minimizing the waiting time by simplification of processes.
• The replica of the deployment environment must be as accurate as possible so that the development teams can get all the important feedbacks.
• It requires different tools with varied capabilities. Hence access to these tools is crucial for continuous testing.
How is continuous testing carried out?
It is carried out through the following steps
1. Generating test automation suite for the requirements.
2. Setting up a test environment.
3. Creating test data bed from production data.
4. Testing API.
5. Performance testing.
What are the advantages :
1. Increase in delivery speed.
2. Increase in quality.
3. Increase in synergy between development operations and testing teams.
4. Better user experience due to faster feature releases.
5. Reduce risk as the code is checked at multiple stages.
6. Time saving as testing happens continuously and the waiting time for developers is drastically reduced.
What is the role of continuous testing in DevOps?
Initially, it was used to reduce the time taken to collect feedback from testing. Tests are conducted each time the code is modified. Each code branch is separately tested, and the integrated code base is tested for the proper functioning of all the branches put together.
It is an essential part of DevOps, identifying and resolving issues as early as possible, thus saving cost and time for resolution. | <urn:uuid:8ea78dcd-f604-448a-9ea5-e493a26ff820> | CC-MAIN-2022-40 | https://alertops.com/articles/continuous-testing/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334992.20/warc/CC-MAIN-20220927064738-20220927094738-00353.warc.gz | en | 0.954514 | 511 | 3.09375 | 3 |
Researchers analyzing the safety of legitimate device drivers have found that more than 40 of at least 20 hardware suppliers can be abused to increase privilege.
Hardware is the building blocks of a computer that contains software. The drivers allow the operating system to identify and interact with hardware components.
The driver code enables communication between the OS kernel and the hardware and enables a higher level of permission than the user and system administrator.
Therefore, driver vulnerabilities are a serious problem, as a malicious actor can use them to access the kernel and obtain the highest operating system (OS) privileges.
Since drivers are used for upgrading hardware firmware too, they can reach even deeper components that are free of OS limitations and change their functioning or bricking.
For example, BIOS and UEFI firmware are low-level software, which starts before the operating system when the computer is activated. Malware that is plantted in this component can not be removed by reinstalling the OS and is invisible to most security solutions.
Drivers are trusted
Researchers in the Eclypsium firm of firmware and hardware found more than 40 drivers that could be abused to increase user privileges to kernel permissions.
Every major BIOS vendor and major names in the computer hardware business such as ASUS, Toshiba, Intel, Gigabyte, Nvidia, and Huawei are included in the list (list below).
“All these vulnerabilities allow the driver to act as a proxy to perform highly privileged access to the hardware resources, such as read and write access to processor and chipset I/O space, Model Specific Registers (MSR), Control Registers (CR), Debug Registers (DR), physical memory and kernel virtual memory.” – Eclypsium
An attacker can move from the kernel to firmware and hardware interfaces that can compromise the target host over and above the detection capacity of normal OS-level threat protection products.
Installing Windows drivers requires the privileges of administrator and must be Microsoft certified trusted parties. In order to demonstrate authenticity, the code is also signed by valid certificate authorities. In the absence of a signature, Windows gives the user a warning.
Eclypsium research, however, refers to legitimate drivers with valid Windows-approved signatures. These drivers are not designed for malicious purposes but contain vulnerabilities that malicious programs and actors can abuse.
The researchers say some drivers interacting with graphic cards, network adapters, hard drives and other devices have been found among the vulnerable drivers.
In those components, malware “can read, write or redirect data saved, displayed or sent via the network.” In addition, components can be disabled, causing a system Denial-of-Service condition.
Vulnerable drivers ‘ attacks are not theoretical. They have been identified by well-financed hackers in cyber-espionage operations.
In the Slingshot APT group old vulnerable drivers have been used to increase the privileges on infected computers. The APT28 lojax rootkit (such as Sednit, Fancy Bear, Strontium Sofacy) was more insidious when it was lodged with a signed driver in the UEFI firmware.
All modern Windows versions are affected by this problem and there is no wider mechanism to prevent vulnerable drivers from being loaded.
A scenario of attack is not confined to systems with a vulnerable driver already installed. Threat actors can add them for privileges and persistence purposes in particular.
To mitigate this risk, regular scans of outdated system and parts firmware are included and the latest driver fixes are used from device manufacturers to solve vulnerabilities.
Below is a partial list of vendors affected as some are still subject to embargo.
American Megatrends International (AMI)
ATI Technologies (AMD)
Micro-Star International (MSI) | <urn:uuid:17d70a9c-645b-4381-ad70-e0bfdde3c552> | CC-MAIN-2022-40 | https://cybersguards.com/40-windows-hardware-drivers-vulnerable-to-privilege-escalation/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334992.20/warc/CC-MAIN-20220927064738-20220927094738-00353.warc.gz | en | 0.918057 | 808 | 2.84375 | 3 |
Data mining, analytics and Web dashboards can help fill gaps in education by letting educators study how students learn, a Brookings report says.
Big data techniques can improve education by making it possible to mine information for insights regarding student performance and learning approaches, according to a new report from the Brookings Institution.
There is a real potential for improved research, evaluation and accountability through the use of data mining, data analytics and Web dashboards, according to Darrel West, director of Brookings’ Center for Technology Innovation and author of the report, “Big Data for Education: Data Mining, Data Analytics, and Web Dashboards.”
“Data-driven approaches make it possible to study learning in real-time and offer systematic feedback to students and teachers,” West writes in the report. “By focusing on data analytics, teachers can study learning in far more nuanced ways,” he said.
Online tools let educators evaluate a much wider range of student actions, such as how long they devote to readings, where they get electronic resources and how quickly they master key concepts, West noted.
For example, an online high school curriculum known as Connected Chemistry helps students learn key concepts in molecular theory and gasses. However, it also allows teachers to mine learning patterns to see how students master chemistry, statistics, experimental designs and key mathematical principles.
Other ways that technology enables learning is through predictive and diagnostic assessments, the report states. McGraw-Hill has an Acuity Predictive Assessment tool that provides an early indication of how students will likely perform on state assessments tests. It assesses the gap between what students know and what they are expected to know on standardized tests and suggests where students should focus their time in order to improve exam performance.
“Armed with statistical information compiled from various digital systems, a number of schools have developed dashboard software and data warehouses that allow them to monitor learning, performance, and behavioral issues for individual students as well as the school as a whole,” according to the Brookings report.
Dashboards compile key metrics in a simple and easy to interpret interface so that school officials can quickly and visually see how the organization is doing.
The Education Department has a national dashboard that compiles public school information for the country as a whole. The dashboard measures such items as percentage of 25 to 34 year-olds who completed an associate’s or higher degree (and whether this number was up or down from earlier periods), 4th grade reading and math proficiency in National Assessment of Educational Progress, and 18 to 24 year olds enrolled in colleges and universities. It also measures the number of states using teacher evaluation systems that include student achievement outcomes.
Michigan has a dashboard at that ranks performance as improving, staying the same, or declining in various fields. The dashboard focuses on 14 indicators for student outcomes, school accountability, culture of learning, value for money (the number of districts with ongoing deficits) and post-secondary education.
Higher education dashboards often feature a wider array of material, the report states. The University of California at San Diego has dashboards that are relevant to specific parts of the organization. There is a financial dashboard that focuses on financial and capital resources. There is a faculty one that keeps tabs on sponsored research. Each draws on data from university systems and displays and updates the information as desired by the user.
Recently, the university added an energy dashboard that measures consumption and ways the campus is saving energy.
There are many opportunities to advance learning through data mining, data analytics, and Web dashboards and visual displays, the report states. Yet operational and policy barriers complicate the achievement of these benefits.
The biggest obstacles are building data sharing networks. Many schools have information systems that do not connect with one another. There is one system for academic performance, another for student discipline, and still another for attendance, the report notes. The fragmented nature of technology inhibits the integration of school information and mining for useful trends. In addition, educational institutions need to format data in similar ways so that results can be compared. | <urn:uuid:42e0bbe7-7ea4-4eee-b33d-4e1d359ab696> | CC-MAIN-2022-40 | https://gcn.com/data-analytics/2012/09/how-analytics-can-make-education-a-learning-experience-for-teachers/280933/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335257.60/warc/CC-MAIN-20220928145118-20220928175118-00353.warc.gz | en | 0.946884 | 822 | 3.671875 | 4 |
We’re being warned for years that the world is heading toward a food shortage crisis. The UN predicts that food production must double by 2050 to meet the needs of the planet’s steadily increasing population. Meanwhile, resources and land are becoming more scarce, and global warming is making things worse.
In fact, this is nothing new. Humanity has been dealing with food shortage since its dawn, and at every turn, it has found a solution to overcome the challenges. Agriculture was humankind’s first response to food shortage in an era where it relied on hunting for sustenance.
In the 20th century, advances in fertilizers, irrigation and mechanized farming helped feed a fast-growing population.
But at this stage, something else is needed to make sure the same amount of land, water and resources can feed the 9-10 billion people who will be living on Earth in the next 40 years.
Humanity’s way out from the next phase might be paved by fast growing technology trends such as Internet of Things, robotics and artificial intelligence, all of which are opening up new possibilities in numerous fields and industries.
Here’s some of the ways technology can help deal deal with the food shortage problem before it turns into a crisis.
Internet of Things
IoT, one of the fastest-growing sectors of the tech industry, has enabled us to expand the internet beyond our desktop, laptop and mobile devices to the physical world that surrounds us. By 2020, there will be more than 20 billion connected devices across the world.
Smart sensors will account for a large part of these devices, and they will gather data about the physical state and quality of things, including soil, plants, seeds, etc.
The gathered data can be used to glean insights and perform precision farming, such as applying water to areas where the moisture of soil has dropped instead of wasting water on huge patches of land that don’t need it. An IoT-managed watering system can considerably decrease consumption while at the same time increasing yields.
Added to that are the remote control capabilities of IoT and industrial IoT systems, which will enable minute changes and automated changes to be applied to agricultural machinery and equipment.
Drones and robotics
As humans become more urbanized, farmfield workers will become more scarce. However, demand for food will not lessen and will only increase. The void left from humans leaving rural areas for cities can be filled with drones and droids that are much more capable, flexible and affordable than heavy machinery—and much more hardworking than humans.
While being able to replace humans in fields, a combination of IoT and laser equipped drones can also perform tasks with higher precision, and apply energy and resources to the exact spots and locations that are needed.
For instance, using IoT sensors and weed detection software, farmer drones will be able to apply herbicide or laser exterminators to the exact location where it is required instead of spreading huge amounts of chemical substance on wide areas where it will go to waste or do more damage than good.
Machine learning and analytics
When combined with machine learning and analytics, data collected from IoT sensors can open up totally new possibilities. For instance, in the livestock business, by collecting the huge amounts of data collected by IoT sensors and feeding it to cloud-powered machine learning algorithms, livestock farmers will be able to glean actionable insights that will enable them to improve production.
By collecting sensor-generated data from cows and ingesting it, farmers can analyze and improve the quality, mixture and timing of the feed and increase milk yield without increasing the amount of feed.
Furthermore, machine learning and deep learning can be used to detect problems faster than humans. An image analysis algorithm fed with photos of diseased and healthy plant leaves, from which it learns to automatically scan image updates of fields and detect the health status of leaves and areas that are problematic and need attention.
Other uses of machine learning in agriculture include algorithms that consolidate weather forecasts and environmental data for different areas and help make predictions such as the emergence of pest or loss of nutrients, which can help farmers take action before damage is done. Predictive maintenance is one of the strongest uses of the convergence of IoT and machine learning.
This is just scratching the surface. The possibilities are a lot more, and I’m sure many of you out there have some great ideas to share. I’ll be writing about this again very soon, and I’m eager to hear what your innovations and ideas are in the field.
The truth is, we’re heading toward some very critical conditions regarding the availability of food for every human being on earth. But fortunately, we have a lot of tools and technologies that can help us overcome this challenge, just as we’ve done throughout the history of mankind. Now’s the time to act to make sure our children and children’s children will live in a world of abundance and comfort instead of one wrought with scarcity and strife. | <urn:uuid:5ab03bc8-1339-416b-88da-1e08940a386f> | CC-MAIN-2022-40 | https://bdtechtalks.com/2016/11/09/how-technology-can-prevent-food-shortage-crisis/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337446.8/warc/CC-MAIN-20221003231906-20221004021906-00353.warc.gz | en | 0.938097 | 1,023 | 3.40625 | 3 |
Policy enforcement is the process of managing network and application connectivity, access, and use according to one or more policies defining the conditions under which access is allowed.
In a computing context, policy enforcement typically refers to the creation, categorization, management, monitoring, and automated execution of a specific set of requirements for use of a computer or communications network—that is, not only the enforcement of policies but policy definition, application, and management more generally.
Policies may address virtually any “who, what, where, when, how, or why” parameters, including who can access resources, when, from where, using what devices or software, what the user can and cannot do once access is granted, for how long, and with what auditing or monitoring. Policies may also address more technical interactions or requirements such as protocols to accept, ports to use, or connection timeouts.
Organizations create policies to control, manage, and sometimes monetize their assets and services. Network policy enforcement helps to automate data and asset security measures, including BYOD requirements. It can enable a service provider, for instance, to create differential rates for specific services or times of use. It also can be used to help enforce enterprise ethical standards (such as use of company equipment and time for personal ends) and to better understand and manage network use.
Policy enforcement is typically handled by software or hardware serving as a gateway, proxy, firewall, or other centralized point of control in the network. Policies must first be defined, along with one or more actions that will be taken if a violation occurs. Once policies are defined, the software or hardware becomes a policy enforcement point in the network—a nexus where policy enforcement occurs in three parts:
For instance, a policy may identify known malicious IP addresses and specify that any and all traffic from those addresses be rejected. More complex policies may allow a specific user to connect to some applications but not others, or to perform some actions once connected that will incur a higher fee than others (such as using a streaming service on a high-resolution device). In this way, authentication, authorization, and accounting (AAA) systems are a form of policy enforcement.
Network policy enforcement may require compliance with more sophisticated and granular parameters such as the presence of unexpired certificates, the type or version of a device or browser being used to connect, or an absence of patterns of behavior associated with attacks.
Monitoring or documentation of the entire enforcement process, particularly incidents of noncompliance, is often part of a policy enforcement solution.
Multiple F5 products can serve as gateways or full proxies that enable granular control over policy creation and enforcement from a single, centralized point of control. In particular, BIG-IP Policy Enforcement Manager (PEM) provides sophisticated controls for service providers looking to monetize services and improve network performance. For enterprises, BIG-IP Access Policy Manager (APM) delivers context-based management of access to applications with a graphical user interface called Visual Policy Editor (VPE) that makes it easy to create, edit, and manage identity aware, context-based policies. | <urn:uuid:e3a24d45-5e71-4f78-a8cc-c928af3b6859> | CC-MAIN-2022-40 | https://www.f5.com/services/resources/glossary/policy-enforcement | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337595.1/warc/CC-MAIN-20221005073953-20221005103953-00353.warc.gz | en | 0.918854 | 631 | 2.8125 | 3 |
Glucose levels are reduced in the brains of individuals with obesity and type 2 diabetes compared to lean individuals, according to a new Yale study.
The finding might explain disordered eating behavior — and even a higher risk of Alzheimer’s disease — among obese and diabetic individuals, the researchers said.
Both obesity and type 2 diabetes are linked to decreased metabolism in the brain.
This hypometabolism is also associated with Alzheimer’s disease, but researchers have not pinpointed why.
To examine the mechanism, the Yale team studied brain glucose levels in three different groups of adults:
individuals who are lean and healthy, and those with either obesity or poorly controlled type 2 diabetes.
After fasting overnight, the study participants received intravenous infusions of glucose for two hours. During the infusions, the researchers used a brain scanning technique — magnetic resonance spectroscopy — to measure levels of glucose in the brain.
While blood glucose levels among the participants were similar, the researchers detected significant differences in brain glucose.
Among the obese and diabetic participants, “we found decreased or blunted entry of glucose into the brain,” said first author and assistant professor of medicine Janice Hwang, M.D.
That blunting could be one mechanism that undermines the ability of the brain to sense glucose, she noted.
The researchers also rated participants’ hunger, satisfaction, and fullness before and after the infusions.
“The lean people who had more glucose entry into the brain also felt more full, even though they hadn’t eaten overnight,” she said.
Hwang explained further: “Glucose is the most primitive signal to the brain that you’ve eaten.
Could it be that obese individuals are not getting sugar into the brain, and not sensing it; thus the feedback loop to stop eating could also be blunted?”
The study points to the importance of sugar transport from the blood into the brain as both a target for further research and possible pharmacological intervention in people with obesity and type 2 diabetes, the researchers noted.
Other study authors are Lihong Jiang, Muhammad Hamza, Elizabeth Sanchez Rangel, Feng Dai, Renata Belfort-DeAguiar, Lisa Parikh, Brian B. Koo, Douglas L. Rothman, Graeme Mason, and Robert S. Sherwin.
Funding: This study was supported in part by grants from the National Institutes of Health, and the Yale Center for Clinical Investigation, supported by the Clinical and Translational Science Award, the Endocrine Fellows Foundation, and the American Diabetes Association. Hwang reports research support from Pfizer and Regeneron.
Source: Ziba Kashef – Yale
Publisher: Content organized by NeuroscienceNews.com.
Image Source: Yale news release.
Original Research: Full open access research for “Blunted rise in brain glucose levels during hyperglycemia in adults with obesity and T2DM” by Janice J. Hwang, Lihong Jiang, Muhammad Hamza, Elizabeth Sanchez Rangel, Feng Dai, Renata Belfort-DeAguiar, Lisa Parikh, Brian B. Koo, Douglas L. Rothman, Graeme Mason, Robert S. Sherwin in JCI Insight. Published online October 19 2017 doi:10.1172/jci.insight.95913 | <urn:uuid:6ce2ec8b-6879-417d-8d06-49d5cb0485c2> | CC-MAIN-2022-40 | https://debuglies.com/2017/10/20/lower-brain-glucose-levels-brains-of-people-with-obesity-and-type-2-diabetes/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337853.66/warc/CC-MAIN-20221006155805-20221006185805-00353.warc.gz | en | 0.910486 | 705 | 2.703125 | 3 |
Serverless computing is a form of cloud computing where resources are provisioned on-demand and in exact units based on precise usage, unlike traditional cloud computing which allocates chunks of resources that are consumed or left unused.
For billing, this entails a very precise payment model based on usage—if the serverless application requests 3.74GB of RAM, then that is what is precisely billed. In this way, resources are dynamically pulled from a massive resource pool that serves many, but ensures resource optimization.
Currently, serverless is the top most layer of the cloud computing stack, following the other 3 major cloud computing models: SaaS, PaaS, and IaaS. Like the other under layers which provide a managed resource, serverless cloud providers provide a layer that abstracts servers away from DevOps. Because cloud providers fully manage a gamut of responsibilities including provisioning, scheduling, scaling, patching, security, etc., developers using serverless are able to focus on their DevOps solely, and free of servers. In fact, this is where “serverless” gets its name, in the idea that developers are emancipated from managing back-end operations.
Serverless is often associated with microservices and Function-as-a-Service (FaaS). Generally, the context of this association encompasses the three serverless characteristics below. Together they support a serverless environment. In one instance, this may look like a platform like WordPress that manages a website backend, while providing a separated website content management dashboard for users.
Technically, the three main characteristics that define serverless environments is what sets it apart from other cloud computing models. While scaling is a fourth common benefit in the cloud.
Stateless environment — Stateless servers retain no information. Any data to be saved must be sent to a database or storage device.
Event-driven compute containers — The application is triggered by events, or change in the system. Upon activation, a container spools up to the exact function to execute the event request.
Ephemeral runtime — Event-driven compute containers often invoke for just the time they are used, and then disappear.
Serverless computing is also considered a cloud-native development model, meaning apps are developed using the technologies native to cloud environments, and eliminating the use of on-premise servers, hence serverless. This requires a different programming paradigm for developers than traditional methods like those used to develop monolithic programs.
Serverless computing pros and cons
Each cloud model overlaps in capabilities, an accounting app can be built atop a PaaS system, or sourced as a SaaS solution. Likewise, an accounting app can be built in a serverless environment. Serverless has many pros that enable teams to operate with more agility, transparency, and cost controls.
Returns Time to Developers — Emancipates time for developers by abstracting back-end server management away from application development.
Cost Controls — Consumers are charged on executions only; compared to other payment models, which may charge per virtual machine.
Multi-Language Support — Serverless platforms can support multiple programming languages, but are well suited to support event-driven languages. However, newer developments, like Google Cloud Run, allows any language that can run within a container. And AWS lambda Layers makes allowances for bringing code written in other languages into a code base.
Reduction of Complexity — Serverless, by its nature, reduces responsibilities in DevOps cycles.
Transparent Usage — Serverless provides transparency into system usage. Typically, a dashboard offers a unified look at usage of all applications and services deployed across the organization.
Not Cost Effective in All Situations — Serverless provides significant cost controls, namely charging only for usage, which is beneficial in peak times. However, predictable workloads, for say long-running processes, may be better served by traditional server environments, when traffic is well understood.
Cold Starts — Serverless architectures do not use long-running processes, instead they offer resource provisions on-demand. This means that sometimes resources will start cold in order to respond to a request. For most systems this latency may not be a problem, yet for time sensitive applications this delay may be unacceptable.
Monitoring and Debugging — Monitoring and debugging require a shift in thinking in a serverless ecosystem because serverless architectures, especially those using microservices, present different operational challenges.
Vendor Lock-in — As with any cloud provider, vendor lock-in is a concern. Serverless providers give consumers an ecosystem of functionality, however, this requires deeper and deeper integrations to create more value for the application. As apps become more integrated they run the risk of deeper lock-in.
Serverless cloud architecture
Serverless cloud architectures are design patterns that break down business logic into functional units according to some abstraction paradigm intent on freeing up server management. Three common paradigms are offered by leading cloud providers like Amazon and Google.
Function-as-a-Service (FaaS) — The Function-as-a-Service model can be put in place between Platform-as-a-Service and Software-as-a-Service. It’s not a bare bones development platform, and it's not a full-fledged software package.
In FaaS, developers have access to ready to implement frameworks of functionality. Application development centers on the idea that when requests are made, a container with the exact necessary function and resources is invoked, then when not needed, cleaned up. In this way, FaaS and Serverless are often used interchangeably, but Serverless refers to a third-party managed cloud environment with form-fitting provisioning, and FaaS refers to the event-driven architecture itself.
Mobile Backend-as-as-Service (mBaaS) — mBaaS, also just Backend-as-a-Service (BaaS), provides APIs for mobile developers to link to cloud services, such as cloud storage, user authentication, push notifications, etc. BaaS uses FaaS concepts, such as function-fitting containers to support a framework specific to cloud services.
Serverless Database — Databases can be abstracted using FaaS concepts, helping to eliminate the operational overhead of deploying and managing databases. They automatically scale database compute and storage resources based on real-time demand.
Serverless vs. microservices
Microservices architecture refers to a design pattern where applications are broken down into smaller services, or microservices. The combination and intercommunication of these microservices constitutes the whole application.
There are many good reasons to choose microservices. First, in response to an alternative design pattern, the monolith, where applications contain all of the code, because a monolith can grow to become unwieldy, development teams are challenged to maintain the code base. Monoliths are not ideal for cloud native applications that are designed to responsively and rapidly expand and contract services to meet demand.
Second, microservices are ideal for the containerized operations of cloud architectures. Containers are smaller than virtual machine runtimes, and require less than they do, in both resources, and application overhead. Containers are well suited to holding microservices. They can come into existence when more of the same service is needed, and they can disappear when they’re not needed, saving resources.
Continuing with the concept of wrapping runtime environments around services, effectively containerizing them, we can further shrink a container to just encompassing and running a single function. In this way, when a function is invoked, a container comes up, runs the function and then closes. These kinds of containers are called stateless containers and support the Serverless event-driven model. This is in contrast to stateful containers which spin up and remain for a longer duration, which does not adhere to the ephemeral characteristic of Serverless.
In sharp contrast, serverless and microservices are only associated because they are within the cloud ecosystems, but as serverless offers developers a way to outsource backend responsibilities, microservices is a development design approach.
Serverless computing use cases
Serverless computing requires a cloud-native approach to developing applications. Serverless applications are decoupled, stateless, and contain the least amount of code necessary. As such, nearly any use case that needs to leverage the power of cloud technologies are open for development using serverless.
The short list here will demonstrate the varieties of use cases that serverless features enable.
IoT Sensor Messaging
Scaling Streaming Processes
Multiplying Chat Bots
Batch Jobs / Batch Scheduling
Continuous Integration/Continuous Development (CI/CD) pipelines
Multi Language Applications
Kubernetes for serverless environments
Because serverless environments utilize containers, Kubernetes is a common choice for running serverless environments. Kubernetes is not ready out of the box to run serverless environments. Instead, Red Hat Knative, an open-source project can be used to deploy code to a Kubernetes environment.
Build - A flexible approach to building source code into containers.
Serving - Enables rapid deployment and automatic scaling of containers through a request-driven model for serving workloads based on demand.
Eventing - An infrastructure for consuming and producing events to stimulate applications. Applications can be triggered by a variety of sources, such as events from your own applications, cloud services from multiple providers, Software-as-a-Service (SaaS) systems, and Red Hat AMQ streams.
Knative is evolved over earlier serverless frameworks by allowing the deployment of any workload—monoliths, microservices, or functions.
Business Email Address
Thank you. We will contact you shortly.
Note: Since you opted to receive updates about solutions and news from us, you will receive an email shortly where you need to confirm your data via clicking on the link. Only after positive confirmation you are registered with us.
If you are already subscribed with us you will not receive any email from us where you need to confirm your data.
"FirstName": "First Name",
"LastName": "Last Name",
"Email": "Business Email",
"Title": "Job Title",
"Company": "Company Name",
"Phone": "Business Telephone",
"LeadCommentsExtended": "Additional Information(optional)",
"LblCustomField1": "What solution area are you wanting to discuss?",
"ApplicationModern": "Application Modernization",
"InfrastructureModern": "Infrastructure Modernization",
"DataModern": "Data Modernization",
"GlobalOption": "If you select 'Yes' below, you consent to receive commercial communications by email in relation to Hitachi Vantara's products and services.",
"EmailError": "Must be valid email.",
"RequiredFieldError": "This field is required." | <urn:uuid:136b466c-9b68-4a9a-b71c-a388a2481c9a> | CC-MAIN-2022-40 | https://www.hitachivantara.com/en-anz/insights/faq/what-is-serverless-computing.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030333455.97/warc/CC-MAIN-20220924182740-20220924212740-00553.warc.gz | en | 0.899112 | 2,405 | 2.84375 | 3 |
Every device connected to the World Wide Web has an IP address. Since each address is unique, online activity can be traced to a specific user. However, some users require anonymity. The dark web refers to a section of the internet where individuals operate anonymously. While the content in the deep web is not indexed by search engines, it can be accessed through the surface web. Conversely, the data in the dark web is deliberately concealed. Payments for products and/or services sold on the dark web are made in crypto currencies such as Bitcoin. Therefore, the dark web is the part of the internet where people can communicate, sell/buy, and perform other online activities without revealing their identity.
The dark web, as the name suggests, is mostly used by criminals. Terrorists use it to communicate without being detected by security agents; traffickers of drugs and weapons also use the dark web to escape government authorities; counterfeit currency dealers also use it to conduct illicit activities. For instance, before the Silk Road website was shut down and its owner arrested by the FBI, the site had made more than $1.2 billion in Bitcoin. Generally, criminals have more reasons to conceal their identity, and perhaps, that is why the dark web is largely used for extreme illegal activities.
Besides the cons of the dark web, there are benefits as well. The tool that supports the dark web was created by the U.S. Naval Research Laboratory to allow anonymous online communication among the U.S. military personnel serving abroad. User activity goes through multiple layers of encryption in a way that the source of the traffic cannot be traced. Social networks have emerged in the dark web to allow people to communicate anonymously. In closed societies where governments limit the freedom of speech, people can use these alternatives to connect with the outside world without exposing themselves to authorities. The dark web also provides a platform where whistleblowers can reveal crucial information while hiding their identity. Therefore, contrary to perceptions, the dark web is not only used for criminal activities.
The dark web can be accessed by any internet user. However, it is not accessible through the surface web. Getting to this section of the web requires a browser that supports anonymous communication. One such application is the Tor (The Onion Router) browser, which is an open-source product. To get into the dark web, users must input the unique Tor address of the websites they want to visit. If a website requires authentication, users must provide the correct password. Therefore, to access the dark web, internet users only need to download Tor browser for free, install it, and then perform online activities anonymously.
The dark web exists to enable anonymous web activity. People are able to communicate while avoiding surveillance by third party entities such as internet service providers. Although the dark web provides a platform for criminals to advance their activities, it also allows vulnerable groups to communicate anonymously. Moreover, before the Tor project was made available to the public, any Tor connection would be associated with the U.S. military and other government officials. The exclusive use of this technology increased cyber threats against the government. With an open-source tool for accessing the dark web, website owners may not know when a Tor connection is from a U.S. government official. Due to these benefits, as well as the technical complexity, the dark web cannot be totally destroyed. Government authorities can only target specific marketplaces that deal with illegal trade as they did with Silk Road. Also, destabilizing crypto currencies can deny cyber criminals an anonymous method of payment.
Criminals operating in the dark web sell credit card details among other illicit products. Individuals can protect their data from getting into the dark web by using strong passwords and storing information in secure locations. Checking credit reports regularly can also minimize the damage in case of identity theft. Moreover, deleting spam emails, using information security defense systems, and avoiding suspicious websites can minimize exposure to malicious programs. Therefore, observing best safety practices can help to protect personal information from getting into the hands of criminals.
Overall, the dark web allows internet users to perform online activities anonymously. It has pros and cons. Having been created by the U.S. government, the tool enhances communication among military personnel and allows people living under oppressive regimes to disguise their identity while communicating via the internet. However, criminals have also used the anonymity provided by the dark web to advance their activities. With an open-source browser, any internet user can access the dark web. However, users should not disclose any personal information that would expose them to identity theft. Illegal trade on the web is supported largely by crypto currencies. Therefore, destabilizing these anonymous payment methods can help in the fight against crime. | <urn:uuid:04fc016f-52f1-41a9-9ae3-8e4dfdce4fc2> | CC-MAIN-2022-40 | https://cbisecure.com/insights/dark-web-revealed-and-explained/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334644.42/warc/CC-MAIN-20220926020051-20220926050051-00553.warc.gz | en | 0.946689 | 940 | 3.453125 | 3 |
Zombies! What does this make you think of? The living dead prowling around cemeteries at night? The apocalyptic mood of “The Walking Dead”? What all zombies on the TV have in common is that they can only think of one thing – human flesh. Viewers anxiously follow as masses of submissive undead wander across their TV screens, considering themselves safe on their sofas at home – while digital zombies are roaming freely on the Internet and carry out remotely-controlled attacks. We will show you what zombies from films and the television have in common with cyber zombies.
But even though cyber zombies don’t feed on a compulsory brain-based diet, they are still dangerous. A zombie PC is a computer that carries out actions under remote control, without the actual user intending this to happen. This manipulation can be the result of a drive-by download, where the user unwittingly downloads malware. If a backdoor gets onto the computer in this way, criminals can use it to infiltrate the system and remotely control the PC. Because of the uncanny parallels between the undead in Hollywood films who have no will of their own and remotely-controlled computers, security experts call these infected PCs “zombies” as well.
A zombie PC is also called a bot – and a collection of individual bots is a botnet. The network of computers can reach enormous dimensions – sometimes thousands or even millions of zombies are combined into a network. BredoLab, one of the biggest botnets, comprises over 30 million separate devices. This network alone includes ten times as many cyber zombies as people who live in Berlin.
The so-called botmaster is, metaphorically speaking, the puppet master pulling the strings of the PC puppets. He controls individual zombies from his computer and tells them what to do. Some are programmed to send out large volumes of spam. Other bots spy on the users and become “sniffers”. They send the data, credit card details or passwords they have captured to a target specified by the botmaster. The data is offered for sale on the Internet black market – or used directly to plunder victims’ bank accounts. A Brazilian gang managed to extract almost five million dollars from other people’s accounts in this way. But spying on data is just one way a botnet is used. Criminals use zombies for numerous different activities, for example DDoS attacks that deliberately overload servers or computers. Such bot attacks are offered as a service in relevant forums. | <urn:uuid:0dfe476b-fc08-4614-bf2c-659a59bd086e> | CC-MAIN-2022-40 | https://www.gdatasoftware.com/guidebook/what-actually-is-a-zombie-pc | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334644.42/warc/CC-MAIN-20220926020051-20220926050051-00553.warc.gz | en | 0.946123 | 509 | 2.515625 | 3 |
Enterprise AI traditionally views all data as good data. But that’s not always true. As investors think through IPOs and strategy when it comes to tech, we need to take injustice embedded in artificial intelligence seriously.
Artificial intelligence has benefitted enormously from the mass of data accessible via social media, smartphones, and other online technologies. Our ability to extract, store and compute data -- specifically unstructured data -- is a game changer. Searches, clicks, photos, videos, and other data train machines to learn how humans devote their attention, acquire knowledge, spend and invest money, play video games, and otherwise express themselves.
Every aspect of the technology experience has a bias component. Communities take for granted the exclusion of others due to traditions and local history. The legacy of structural racism is not far below the surface of politics, finance, and real estate. Never experiencing or observing bias, if that is even possible, is itself a form of privilege. Such bias, let’s call it racism, is inescapable.
Laws have been in place for well over 70 years to remove apparent bias. The Equal Credit Opportunity Act of 1974 and Fair Housing Act of 1968 were foundational to ensure equal access and opportunity for all Americans. In theory, technology should have reinforced equality because the program and the algorithms are color blind.
Nearly 7 million 30-year mortgages analyzed by University of California at Berkeley researchers found that Latinx and African-American borrowers pay 7.9 and 3.6 basis points more in interest for home-purchase and refinance mortgages, respectively, because of discrimination. Lending discrimination currently costs African American and Latinx borrowers $765 million in extra interest per year.
FinTech algorithms discriminate 40% less than face-to-face lenders; Latinx and African Americans pay 5.3 basis points more in interest for purchase mortgages and 2.0 basis points for refinance mortgages originated on FinTech platforms. Despite the reduction in discrimination, the finding that even FinTechs discriminate is important
The data and the predictions and recommendations that AI makes are prejudiced by the human that is using sophisticated mathematical models to query the data. Nicol Turner Lee, from the Brookings Institute, through her research found the lack of racial and sexual diversity in the programmers designing the training sample leads to bias.
The AI apple does not fall far from the tree
AI models in financial services are largely auto-decisioning, where the training data is used in the context of a managed decision algorithm. Using past data to make future decisions often perpetuates an existing bias.
In 2016, Microsoft chatbot Tay promised to act like a hip teenage girl but quickly learned to spew vile racist rhetoric. Trolls from the hatemongering website 4chan inundated Tay with hateful racist, misogynistic, and Anti-Semitic messages shortly after the chatbot’s launch. The influx skewed the chatbot’s view of the world.
Racist labeling and tags have been found in massive AI photo databases, for example. The Bulletin of Atomic Scientists recently warned of malicious actors poisoning more datasets in the future. Racist algorithms have discredited facial recognition systems that were supposed to identify criminals. Even the Internet of Things is not immune. A digital bathroom hand soap dispenser reportedly only squirted onto white hands. Its sensors were never calibrated for dark skin.
The good news is that humans can try to stop other humans from inputting too much inappropriate material into AI. It’s now unrealistic to develop AI without erecting barriers to prevent malicious actors -- racists, hackers, or anyone -- from manipulating the technology. We can do more, however. Proactively, AI developers can speak to academics, urban planners, community activists, and leaders of marginalized groups to incorporate social justice into their technologies.
Review the data
Using both an interdisciplinary approach to reviewing data using social justice criteria and the common sense of a more open mind to audit data sets might reveal subtly racist elements of AI datasets. Changing this data can have significant impact: improving education, healthcare, income levels, policing, homeownership, employment opportunities, and other benefits of an economy with a level playing field. These elements might be subconscious to AI developers but evident to anyone from communities outside the developers’ backgrounds.
Members of the Black and other minority communities, including those working in AI, are now eager to discuss such issues. The even better news is that among the people we engage in those communities are potential customers who represent growth.
Bias is human. But we can do better
Trying to vanquish bias in AI is a fool’s errand, as humans are and have always been biased in some way. Bias can be a survival tool, a form of learning, and making snap judgments based on precedent. Biases against certain insects, animals, and locations can reflect deep communal knowledge. Unfortunately, biases can also strengthen racist narratives that dehumanize people at the expense of their human rights. Those we can root out.
We will never rid ourselves of all our biases overnight. But we can pass on a legacy in AI that is sufficiently aware of the past to foster a more just and equitable society.
Ishan Manaktala is a partner at private equity fund and operating company SymphonyAI whose portfolio includes Symphony MediaAI, Symphony AyasdiAI and Symphony RetailAI. He is the former COO of Markit and CoreOne Technologies, and at Deutsche Bank Ishan was the global head of analytics for the electronic trading platform. | <urn:uuid:135cf074-e6dc-4eec-a74a-3c319674616a> | CC-MAIN-2022-40 | https://www.informationweek.com/ai-or-machine-learning/what-do-we-do-about-racist-machines- | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334644.42/warc/CC-MAIN-20220926020051-20220926050051-00553.warc.gz | en | 0.936653 | 1,125 | 2.53125 | 3 |
According to the Ditch the Label’s UK 2014 Survey, 45% of young people experience cyberbullying before the age of 18. It is therefore very important that everyone is aware of how to deal with cyberbullying.
Here are our 10 steps for dealing with cyberbullying.
STEP 1: Stay Calm
The first thing you must do is breathe. It might be good to walk away from your device or the social networking site where the cyberbullying is happening. Take time out. Remember that bullies usually have their own problems and are trying to make themselves feel better by attacking you. Nothing that they say is true.
STEP 2: Don’t Reply
It is very important that you do not engage with the cyberbully. They are looking to infuriate and hurt you, but more than that they want you to respond to them. So NEVER give a cyberbully what they want. Ignore the comments and/or messages.
STEP 3: Take Screenshots
Do not delete any of the abuse you receive online. If you need to share your story with a trusted adult, your school or the police, they may need proof of the cyberbullying in order to act. To take a screenshot of a message on a laptop or PC, press Shift and Print Screen; on a Mac, press Command, Shift and the number 3. On an Apple phone or iPad, press the lock button and the home button at the same time. On an Android phone or tablet, hold the power and volume button at the same time.
STEP 4: Tell a Trusted Adult
There are many people you can tell if you are being cyberbullied. Talk to the adult you feel most comfortable with. They will be able to help you through this difficult time. Remember that you do not have to and shouldn’t go through this alone. Open up and you will be provided with the necessary support and care to overcome this.
STEP 5: Block the Bully
Make sure you block the bully from the relevant social networking sites. This will mean that they can’t contact you or engage with you. Even if you are only being bullied on one social networking site, it is important to block the cyberbully from all of them. This will also send a message to them that you are not going to accept what they are doing to you and how they are making you feel.
STEP 6: Report Abuse
After the bully is blocked you have to report the person and the messages you have been receiving to the relevant social networking site. They will investigate what has been happening and then take relevant action against the cyberbully.
STEP 7: Confront the Bully
If the bully is someone you know from your school, club, team or through mutual friends, it might be a good idea to confront them. Some cyberbullies aren’t aware of what they are doing and how it is making you feel. Don’t presume that they know. Ask them to stop what they are doing to you.
STEP 8: Take it Further
If you have followed steps 1-7 and the bullying has continued or if the messages you have been receiving are posing a real and current threat to your safety, it is time to do something else about it. You may need to contact the school, club or the police. Don’t hesitate to take further action.
STEP 9: Change Privacy Settings
Because you have been a victim of cyberbullying, it means the privacy settings on your social networking accounts are not as secure as they should be. Make sure you change them so that only your friends can see your profile and everything you do.
STEP 10: Review Friends
Once you have reviewed your privacy settings it’s important to filter through your list of ‘friends’ and remove any that you don’t know, haven’t met or don’t like. This will protect you from another incident like this happening again. In the future do not accept friend requests from people you don’t know or haven’t met in person.
It’s very important to deal with cyberbullying. Do not ignore it, or the problem will just get worse. Don’t be afraid to speak up to stamp it out. | <urn:uuid:4e0fc65f-6c9f-4d5d-85b3-ff135fde1c38> | CC-MAIN-2022-40 | https://kids.kaspersky.com/how-to-deal-with-cyberbullying/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335276.85/warc/CC-MAIN-20220928180732-20220928210732-00553.warc.gz | en | 0.960002 | 885 | 2.96875 | 3 |
With distance learning trending in today’s classrooms, colleges need to invest in technology that captures lessons in a clear, crisp light.
The best way to start is to do some homework on distance learning cameras, the lens through which teachers in one location teach, and students in another location learn.
But what do you need in a distance learning camera?
Think with your eyes.
Good cameras will produce high definition images in bold colors and detail. Some cameras even go as far as to stitch together a complete, seamless image of a classroom, so that professors can see all of his or her students in a single frame.
For students who learn in the flipped-classroom setting, professors can utilize cameras that enable streaming options, such as Sonic Foundry‘s Mediasite system. Some cameras stream a lesson live as it happens, while others provide on-demand options so a class can be watched later.
Other cameras will provide pan-tilt-zoom features, so that professors are able to get close-ups of students’ faces and name tags. That way, professors and students feel like they are together in a classroom, rather than feeling the miles between them.
Marci Powell, Global Director for Education and Training for Polycom, says that zooming in on a classroom helps students and professors get a better idea of what’s going on in the classroom.
“The ability to zoom in is amazing,” she says. “It’s like a micro-eye. You can see the tiniest detail you can’t see just standing.”
Finally, good cameras will support the ability for students and professors to collaborate and share content, such as with Mersive‘s Solstice software. Features like this will erase the seams of separation between long distance students and professors, and create a unified learning atmosphere. | <urn:uuid:b64f4632-4165-45ec-b0ae-f63e58c13d8c> | CC-MAIN-2022-40 | https://mytechdecisions.com/mobility/distance-learning-through-the-lens/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335424.32/warc/CC-MAIN-20220930020521-20220930050521-00553.warc.gz | en | 0.942431 | 390 | 2.6875 | 3 |
Placing Wi-Fi access points on the map
Before you can start working with simulated access points, you need to define the environment, which includes adding the necessary maps, defining the scale, and drawing walls. How to draw walls is explained in more detail in the chapter Defining the Environment by Drawing Walls. After the environment has been defined, you can start simulating your network to find out the optimal network configuration.
To start predicting the network performance, place one or more simulated access points on the map:
- Select the Simulated Access Point Tool from the Planning Tab
- Choose the access point you want to simulate. You can use the search field to find the right AP quicker from the list, if you already know what you're searching for.
- Place the access point on the map with a left-click. If auto-refresh for visualizations is enabled, you should instantly see a visualization for the placed access point.
- You can edit the simulated AP properties and settings in the following ways:
- Left-click the "three dots" button on the AP list
- Left-click the colored technology/channel button on the AP list
- Right-click an access point on the map to edit its properties
- You can adjust the Wi-Fi technology, primary channel, channel range (channel bandwidth), AP transmission power, AP height, and antenna downtilt. The Elevation Pattern presentation of the antenna displays the beam towards the floor. You can also change the antennas for the access point in this menu - but only within the same band (2.4 GHz or 5 GHz)
- Repeat steps 2-4 to place more access points
- Click to edit simulated access point
You can also adjust a directional AP's orientation directly on the map. Click and hold the small triangle(s) on top of the AP icon until you find the correct orientation.
You can move a simulated access point by simply clicking and dragging it on the map when using the Edit tool.
To delete a simulated access point, right-click on the access point on the map and select Delete - or just press Delete (Win) or Backspace (macOS) on your keyboard.
After placing the APs on the map, the properties of the APs can be changed using the AP list. Refer to the chapter User Interface Overview to read more about changing the properties of the access points that have already been placed.
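For readers who like to see the parameter set in one place, the sketch below gathers the simulated-AP settings described above into a small Python structure. This is purely a conceptual illustration; the class and field names are assumptions made for this example and are not part of Ekahau's product, project format, or API.

```python
from dataclasses import dataclass

# Conceptual model of a simulated AP; names and defaults are illustrative only,
# not Ekahau's actual data model.
@dataclass
class SimulatedAccessPoint:
    name: str
    x_m: float                      # position on the map, in metres
    y_m: float
    technology: str = "802.11ax"    # Wi-Fi technology
    primary_channel: int = 36       # primary channel
    channel_width_mhz: int = 40     # channel range (channel bandwidth)
    tx_power_dbm: float = 17.0      # AP transmission power
    height_m: float = 2.7           # AP mounting height
    antenna_downtilt_deg: float = 0.0
    azimuth_deg: float = 0.0        # orientation of a directional antenna

ap = SimulatedAccessPoint(name="AP-01", x_m=12.5, y_m=8.0)
print(ap)
```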
Tip: Starting from ESS 9.2.0, you can also move/select access points while you have Access Point tool selected - the selection and moving works the same way as with the normal EDIT-tool.
Placing Bluetooth devices on the map
Starting from version 9.2.0, Ekahau Pro now offers users opportunity to create simulated network plans with Bluetooth devices.
There are two different types of Bluetooth devices in ESS: 1. hybrid APs with both Wi-Fi and Bluetooth radios and 2. stand-alone Bluetooth beacons. Hybrid APs can be found from the Simulated Access Point Tool (see above). You can filter in the APs with Bluetooth radios by writing "BLE" in the search field.
Examples of hybrid Wi-Fi/Bluetooth access point and stand-alone Bluetooth beacon
Stand-alone Bluetooth beacons can be found in their own sub-menu called "Bluetooth Beacon" tool. You can find this tool by left-clicking on the arrow next to the Access Point tool or by right-clicking on the tool.
Bluetooth devices are currently only visualized in the Bluetooth Coverage visualization (learn more from the visualization section) and they don't affect any other visualizations in any way. Currently, only simulated planning is supported with Bluetooth devices.
Otherwise, Bluetooth devices are handled and manipulated just like regular APs - like explained above.
When creating network plans manually, use the Network Health visualization and appropriate requirement profile at the same time. The Network Health visualization will immediately show you when your plan meets the performance requirements. | <urn:uuid:88a6b9f9-db40-4e63-86e4-28514146f9f0> | CC-MAIN-2022-40 | https://support.ekahau.com/hc/en-us/articles/115004916007-Creating-the-Network-Plan-Manually-Using-Simulated-Access-Points | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335424.32/warc/CC-MAIN-20220930020521-20220930050521-00553.warc.gz | en | 0.888099 | 827 | 2.984375 | 3 |
Thomas Jefferson took a couple of sleep-deprived days to craft the Declaration of Independence, spelling out a new nation’s rationale for breaking with the British crown. If only preserving the calfskin parchment on which he and others pledged their lives, fortunes and sacred honors were that simple.
This Independence Day marks the first July 4th celebration since the federal government installed new encasements last September for the document, signed 228 years ago. It’s on display at the National Archives in Washington, D.C., along with the Constitution and the Bill of Rights.
It took more than two years for engineers and scientists from the National Institute of Standards and Technology and NASA to complete their work on new gold-plated, titanium-framed encasements for the documents. The parchments inside the cases rest on cellulose paper, set beneath laminated tempered glass, in a bath of inert Argon gas. An optical system in the encasement base detects infiltrations of water or oxygen.
The encasements replace 1951 models and are designed to last a century. Which makes us wonder what archives will look like in 2104. | <urn:uuid:1411aca9-3190-4625-8220-bd74435bf3c7> | CC-MAIN-2022-40 | https://www.cio.com/article/264616/infrastructure-newly-preserved-declaration-of-independence-on-display.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337339.70/warc/CC-MAIN-20221002181356-20221002211356-00553.warc.gz | en | 0.928197 | 233 | 3.21875 | 3 |
A layout is what you see when viewing a workspace; a collection of forms, tabs, lists, and splitters that displays a parent record and its child records. The layout shows you a complete view of a parent record. The parent record appears as a form with its child records displayed under a series of tabs. Layouts use forms to present the user with fields in which they can enter data for a specific record. Layouts generally show a list of records at the top part of the window, and a preview section of a selected record at the bottom.
Layouts enable you to define which business objects and fields are available to specific users. Layouts are designed with a role in mind, and contain the lists and forms that the role can use. Multiple roles can use a layout. A layout can contain tabs, which in turn hold forms.
After defining a role, create the layout, then assign the layout to the role.
Layouts appear in an application workspace and include two main areas:
•Parent Record: This area shows the active parent record (for example, an employee record). A parent record is a form, usually at the top of the work area. Each parent record has a toolbar to help you navigate between parent records and access operations.
•Child Records: This area shows the child records (for example, attachments and notes) of the active parent. Child records usually appear in a series of tabs grouped with a splitter. Each tab represents a child business object, and individual records are displayed on each tab. View the list of records in a list or view an individual record as a form or a form summary. The child record toolbar helps you navigate between child records and perform operations.
Neurons for ITSM comes with several default layouts that are designed for your business needs. As an administrator, you can use these layouts, edit them, delete them, or create your own. | <urn:uuid:aae806cc-59da-44af-bb7b-87be7d1c3d7c> | CC-MAIN-2022-40 | https://help.ivanti.com/ht/help/en_US/ISM/2021/admin/Content/Configure/Layouts/Layouts.htm?Highlight=using%20layouts | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337473.26/warc/CC-MAIN-20221004023206-20221004053206-00553.warc.gz | en | 0.924249 | 392 | 2.5625 | 3 |
Business process mapping is an essential tool used to design, document, communicate and improve the flow of work within an organization. The goal of process mapping is to improve efficiency, lower cost and improve customer satisfaction.
This article provides an overview of why process mapping is important. It also outlines various types of business process mapping techniques that can get you on the path to process improvement.
Why Business Process Mapping
You would never consider heading out on an expedition without a map or building a house without a blueprint.
This analogy holds for process maps. A process map, and the associated process documentation, is an essential tool to guide a company towards a successful outcome.
There are numerous use cases for business process mapping. These include:
- Identification of the end-to-end flow of work within a company
- Showing the interaction between the company and its suppliers, customers, and other stakeholders
- Enabling process improvement and optimization
- Driving digital transformation by helping to identify automation opportunities
- Supporting process audit, compliance, and governance activities
What is a Business Process?
No discussion on processes would be complete without first defining what a business process is.
A business process is a set of tasks that transform inputs into value-added outputs.
A business process lays out the sequence of steps and decision points to perform a common and repeatable piece of work. Employee onboarding is a good example. The onboarding process defines:
- the sequence of work
- who is responsible for each step
- what they need to do
- the inputs, outputs, suppliers, and customers
- any supporting technology or tools
Processes can be an input or an output of other processes. For example, the HR Onboarding process may trigger processes in the finance, administration, or facilities department.
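As a minimal sketch of the idea, the snippet below captures an onboarding-style process as data: what is done, who owns it, and which inputs and outputs connect the steps. The task names, owners, and artifacts are entirely hypothetical and only illustrate the structure described above.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Task:
    name: str                       # WHAT is done
    owner: str                      # WHO does it
    inputs: List[str] = field(default_factory=list)
    outputs: List[str] = field(default_factory=list)

# Hypothetical employee-onboarding process; step names are illustrative only.
onboarding = [
    Task("Create offer letter", "HR", ["candidate record"], ["signed offer"]),
    Task("Provision accounts", "IT", ["signed offer"], ["user account"]),
    Task("Order equipment", "Facilities", ["signed offer"], ["laptop request"]),
    Task("Schedule orientation", "HR", ["user account"], ["orientation booking"]),
]

for step, task in enumerate(onboarding, start=1):
    print(f"{step}. {task.name} (owner: {task.owner})")
```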
Understanding how processes interact with each other provides an end-to-end view of work in the company. Understanding this view is an essential step in reducing waste and improving business outcomes.
It’s important to note that a process is not a procedure.
A process guides the reader on “WHAT” to do, and “WHO” does it. Processes are cross-functional, and focused on business goals.
A procedure guides the reader on HOW to do it. It documents what tools to use along with step by step work instructions.
Process mapping is not to be confused with creating procedures!
Process Mapping Techniques
You cannot create a process map in isolation. You need to work with team members familiar with the execution of the process.
You need to bring together a subset of knowledgeable stakeholders involved in the process. That is because a substantial amount of work in an organization is never documented.
It is also important to get cross-functional input into designing or improving a process to foster buy-in and adoption.
Workshops are a typical way to begin the mapping of a business process.
These workshops should be led by a facilitator or project manager with excellent communication skills and experience in business process modeling. For example, someone with Six Sigma training.
The process modeling workshop can start with reviewing the overall workflow of the process. This can be done using a SIPOC Diagram, or high-level process flows to guide the discussion. The use of a process mapping template, such as those available in the Navvia Process Designer, can also help accelerate the design process.
It's important to start at a high level and then drill down to deeper levels of detail in subsequent sessions. Always start with the big picture. Getting into the details too early causes you to lose sight of the overall objective.
The facilitator captures the process details. There are various tools that can help. These include a whiteboard, yellow stickies, or one of the many available process mapping tools, such as the Navvia Process Designer.
While mapping the process, the team should look for opportunities to improve process efficiency by reducing complexity or identifying automation opportunities.
Comprehensive business process mapping may require several workshops. Once mapped, the facilitator should validate the process with a broad cross-section of stakeholders.
Process Mapping Examples
There are several different types of process maps. These include:
- Business Process Modelling Notation (BPMN)
- Line of Visibility Enterprise Modeling (LOVEM)
- Swimlane Diagrams
- Standard process flowcharting techniques
- Flow Process Chart
The following is an example of a simple swim lane process map for an expense approval process.
In a swimlane diagram, we create a lane for each role in the process. We place each task (or step) in the corresponding lane. This technique makes it very easy to see who is responsible for the task.
There may also be a dedicated lane that shows any data, systems used, or other processes referenced.
Line of Visibility Enterprise Modeling (LOVEM) is a type of swimlane diagram. The top swim lane is always used to represent the customer or end-user. This technique ensures that the focus is on supporting business outcomes, not on internal requirements.
Business Process Modeling Notation is another prevalent form of creating process maps. It is a potent tool that allows analysts to capture processes consistently.
Here is a link to a BPMN 2.0 poster created by the “Berliner BPM-Offensive”.
Regardless of the process mapping method you choose, the goal should be to create concise, complete, and informative documents.
The Navvia Process Designer is a popular tool for process mapping. It offers a powerful design tool with out-of-the-box process templates for ITIL, COBIT, and ServiceNow® processes.
What is Process Capability?
What is meant by process capability? Technically, it is a statistical tool used to measure whether your processes are deviating from desired outcomes. This method is critical when attempting to control heavily automated manufacturing processes.
In IT Service Management, we look at process capability as having all the right controls in place.
For example; is the process defined, does it have ownership, and is it being measured and improved?
ITSM Maturity Assessment Model
An ITSM Process Maturity Assessment can be used to assess process capability.
There are two popular methods for performing an ITSM process maturity assessment. The first is the Capability Maturity Model (CMMi) developed by Carnegie Mellon University and administered by ISACA.
The second is the international standard ISO/IEC 15504 (which has been superseded by ISO/IEC 33001).
Both measure capability on a scale of 0-5, with five being the most capable/mature. Here is a definition of the various levels.
Process Capability Levels
An experienced assessor should conduct the process capability assessment.
Assessments consist of:
- Interviewing stakeholders
- Observing the processes in action
- Evaluating process tools and documentation
- Collecting data via standardized assessment questionnaires.
The assessor should validate their observations with stakeholders before compiling their findings and recommendations.
Assessors can use an ISO15505 heat map to share observations with stakeholders. Here is an example of a process capability heat map.
Sample ISO 15504 Heat Map
The heat map identifies areas where the process is deficient.
Think of a heat map like a series of hurdles. To move up a capability level, you need to be “largely” or “fully” compliant in the preceding level.
The assessor derives the score from a set of standardized questions.
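To show how questionnaire answers can roll up into a rating, here is a small sketch. The 15/50/85 percent bands follow the commonly cited N/P/L/F convention from ISO/IEC 15504, but the example scores and the equal weighting are assumptions made for illustration and do not represent the Navvia or ISACA scoring models.

```python
# Roll per-question scores (0..1) up into an ISO/IEC 15504-style rating.
RATING_BANDS = [
    (0.85, "Fully achieved"),
    (0.50, "Largely achieved"),
    (0.15, "Partially achieved"),
    (0.00, "Not achieved"),
]

def achievement(scores):
    """scores: iterable of values in [0, 1], one per assessment question."""
    scores = list(scores)
    pct = sum(scores) / len(scores) if scores else 0.0
    for threshold, label in RATING_BANDS:
        if pct >= threshold:
            return pct, label
    return pct, "Not achieved"

# Hypothetical answers for one process at one capability level.
pct, label = achievement([1.0, 0.5, 1.0, 0.0, 1.0])
print(f"{pct:.0%} -> {label}")   # 70% -> Largely achieved
```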
The Navvia Process Designer offers a powerful assessment tool that utilizes the CMMi and ISO/IEC 15504 capability assessment models.
The Navvia Process Designer includes hundreds of out-of-the-box process assessment questionnaires for ITIL, COBIT, and ISO20000 processes.
Defining and implementing a process is not a one-time affair.
Business requirements change over time, and business processes need to reflect those changes.
It is also essential to validate the business process is practical, efficient, and adds value to the organization.
Process improvement is a discipline focused on optimizing business processes through incremental improvements to process performance and end-user experience.
Process Improvement Techniques
There are various process improvement techniques available to the organization.
- Process capability assessments, discussed in the previous section, can help identify deficiencies and gaps in the processes.
- Process mapping is essential as you cannot improve a business process unless you understand it.
- SIPOC Diagrams are a great starting point for process improvement. A SIPOC Diagram identifies the suppliers, inputs, process, outputs, and customers of a business process, all on a single page. Here is an example of a SIPOC Diagram.
SIPOC Diagram Example
- Value Stream Mapping is a tool used to identify and eliminate waste in a process. Starting with a SIPOC Diagram, you decompose the process into its constituent steps. You then quantify the processing time and wait time for each step. The goal is to determine the cycle time for each step, then look for ways to improve (see the short worked example after this list).
- The Plan, Do, Check, Act (PDCA) cycle is a technique that calls for continual process improvement in every area of the business.
- Six Sigma is a set of process improvement techniques made famous by Jack Welch during his time at General Electric. A key component of Six Sigma is the DMAIC (Define, Measure, Analyze, Improve, and Control) cycle.
- LEAN Methodology is an approach designed to maximize customer value while minimizing waste. Simply, lean means creating more value for customers with fewer resources.
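Here is the short worked example promised in the Value Stream Mapping bullet above. The step names and minutes are invented for illustration; the point is the arithmetic: cycle time is the sum of processing and wait time for every step, and the ratio of processing time to cycle time shows how little of the elapsed time is usually value-adding.

```python
# Hypothetical steps of an expense-approval process: (name, processing_min, wait_min)
steps = [
    ("Submit expense report",  5,   0),
    ("Manager review",        10, 480),   # most of the delay is waiting, not working
    ("Finance validation",    15, 960),
    ("Payment run",            5, 120),
]

processing = sum(p for _, p, _ in steps)
waiting = sum(w for _, _, w in steps)
cycle_time = processing + waiting

print(f"Cycle time: {cycle_time} min "
      f"({processing} min of work, {waiting} min of waiting)")
print(f"Value-add ratio: {processing / cycle_time:.1%}")
for name, p, w in steps:
    print(f"  {name}: {p + w} min total, {w} min waiting")
```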
There are many tools and techniques available for process improvement, and it is perfectly acceptable to combine approaches from different disciplines.
Process improvement boils down to evaluating your processes' effectiveness and implementing improvements based on changing business needs. Don't wait until your process fails; monitor and evaluate performance continuously. You don't want to learn that your disaster recovery process is broken in the middle of a disaster.
Business Process Management
This article discussed process mapping and the various techniques to assess, design, and improve processes.
Business Process Management (BPM) is the discipline that brings it all together into a repeatable practice.
Like any discipline, Business Process Management requires focus and governance. One way to achieve this is by creating a Business Process Office (BPO).
Effective BPM requires specific skill sets, skills not typically embedded in the business units.
Think of a Business Process Office as a “Swat Team” helping the business units design, improve, and implement their processes.
Business Process Management Skills
Skills can include:
- Business process analysts (process discovery and process mapping)
- Technical analysts (process implementation)
- Process assessment and governance
- Organizational change
- Program management
Do you have a Service Management Office? Many of the SMO skills are directly transferable to a Business Process Office. Learn more about how an SMO can drive digital transformation.
Process mapping and the overarching practice of Business Process Management are more important than ever before. To deliver superior customer experiences, organizations are embarking on Digital Transformation initiatives. Processes are analogous to blueprints and are essential to building any successful Digital Transformation.
Chatbots, as the name suggests, are AI-powered bots that manage real-time queries and questions from visitors on a website on the basis of a knowledge base. Surprisingly, their roots go back to the 1960s, when the first chatbot, ELIZA, was created at MIT. Today, conversational AI has evolved a lot, and most of the chatbots we see on different websites are backed by strong technologies such as artificial intelligence, machine learning, and natural language processing (NLP). By using chatbots, any business can become customer-service-oriented and win loyal customers easily. Because of this, small and large-scale businesses alike are focusing on chatbots, which is why they have become much more popular in the last five years.
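To make the knowledge-base idea concrete, here is a deliberately tiny sketch of a rule-based responder. Real chatbot platforms use NLP models rather than keyword overlap, and every question, keyword, and answer below is invented purely for illustration.

```python
import re

# Toy knowledge base: (keywords, canned answer). Content is invented for the example.
KNOWLEDGE_BASE = [
    ({"price", "cost", "plan"},       "Our plans start at $24/month. See the pricing page."),
    ({"refund", "cancel"},            "You can cancel anytime; refunds take 3-5 business days."),
    ({"hours", "support", "contact"}, "Support is available 24/7 via this chat."),
]

FALLBACK = "I'm not sure about that. Let me connect you with a human agent."

def reply(message: str) -> str:
    words = set(re.findall(r"[a-z]+", message.lower()))
    best, overlap = FALLBACK, 0
    for keywords, answer in KNOWLEDGE_BASE:
        hits = len(words & keywords)
        if hits > overlap:                # pick the entry with the most keyword hits
            best, overlap = answer, hits
    return best

print(reply("How much does the basic plan cost?"))      # matches the pricing entry
print(reply("Can I talk to someone about my invoice?")) # falls back to a human
```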
Benefits and drawbacks of using Chatbots:
Here, we will be discussing the advantages and disadvantages of using chatbots.
1. Instant availability:
It is obvious that you cannot answer hundreds of thousands of questions simultaneously, and even if you could, it would be very hectic and stressful. One of the main benefits of using chatbots is that customers can communicate with bots and clear up any confusion without being ignored. Their questions are answered immediately, which increases the chances of a sale.
2. Multilingual support:
A person cannot talk in hundreds of languages, but a chatbot can, and this is another area where chatbots are very helpful. You will increase your sales as well as customer loyalty by communicating with customers in their own language.
3. It will save your time:
Answering tons of questions is not only stressful, but it also wastes your time, as this is a repetitive task and most of the questions asked will probably be the same. Using chatbots will help you focus more on underlying problems, and you can use the time saved to grow your business. Otherwise, entire days would be spent on communication.
4. Increase in profits:
Buying a chatbot is definitely cheaper than hiring an employee for communication purposes. By using a chatbot, you can save a huge percentage of your profits. Not only this, but a human cannot compete with a bot, and chatbots will usually be more efficient than an employee.
5. Data collection:
It would not be wrong to say that data is the largest asset in today's world, and the importance of data is not going anywhere in the coming decades. By using chatbots, you will collect tons of useful data, which can be processed into relevant information and then used for analysis. The best thing about data collection is that you can learn, improve, and grow immediately.
1. Maintenance cost:
Although chatbots cost much less than human employees, full functionality and all the tools may still cost a few hundred dollars per month. For small businesses, it may be difficult to cover this amount every month, but chatbots usually earn you more than you have to spend on them.
2. Emotional connection:
Having an emotional connection with your customers is also very important in business, but unfortunately a chatbot cannot build the same rapport or emotional connection that a person can.
3. Chances of misunderstanding:
Although modern bots are very intelligent and are able to handle difficult tasks efficiently, they are still at a developmental stage, and there is a slight possibility of misunderstanding. This does not happen often, but it is not pleasant for your business when it does.
Six best chatbot platforms:
In this section, we are going to discuss the best conversational AI chatbot platforms in 2021.
Collect.chat is a popular AI-based chatbot service provider, and you do not have to be an expert to install the chatbot on your website; it can be done with just a few clicks. As it is a very user-friendly platform, you should not worry about anything. Also, you can manage the chat widgets the way you like without any problem.
Collect.chat is also very well known for its lead generation and is often recommended by professionals if lead generation is your main interest. They claim that their bots are completely automated and work proactively, so you do not even have to supervise their work. Collect.chat also makes it easy to schedule and manage meetings with customers.
Another plus point that makes Collect.chat worth trying is that the basic package is free. Although this package does not include advanced features, it can be very helpful for beginners, and of course you can switch to a bigger plan in the future. The following are the plans provided by Collect.chat.
|Plan|Price|
|---|---|
|Free|$0/month, billed monthly|
|Lite|$24/month, billed monthly|
|Standard|$49/month, billed monthly|
|Plus|$99/month, billed monthly|

You can get the details of the services provided in these packages by visiting their official website.
Hellotars.com is also an amazing chatbot service provider and is specifically geared toward marketing campaigns. Typically, chatbots by Hellotars.com are used by large companies and organizations, but you can also create powerful chatbots for your personal website or for any social media platform. According to Hellotars.com, their services can help you get a 2x-3x higher click-through rate and more conversions on your website. Because of this, Hellotars.com can be a great choice if you are looking for a platform to build a chatbot that operates completely on artificial intelligence.
The bots created on Hellotars.com support up to 25 languages, which helps you a lot in engaging with customers all around the globe. Hellotars.com can be a perfect option for anyone who is looking for a chatbot to manage international marketing campaigns and client support programs, because it is more convenient to relate to customers when there is no language barrier. Like Collect.chat, Hellotars.com provides an interactive and user-friendly interface, and within a matter of minutes you can install or modify your chatbot.
For the plans provided by Hellotars.com and further details, you can visit their official website.
If you are searching for a professional and cheap chatbot platform, then Virtualspirits.com can be a suitable option. Compared to other chatbots, Virtualspirits.com provides low-priced services, and their paid plans start from as low as $9/month. This chatbot service provider is commonly used by small businesses, as the service is not expensive and the functionality provided is more than enough for them. The website claims that their services can help you achieve four times more conversions compared to any other service. Chatbots provided by Virtualspirits.com are commonly used for customer service but can also be used for lead generation, marketing, and sales.
Coding is also not required to create a chatbot on Virtualspirits.com, so it is convenient for anyone to make, install, or modify chatbots. Also, to provide comfort to its users, Virtualspirits.com offers a 30-day free trial with full access to the features, so you can even start earning from their services within the trial period. In this way, the credibility of Virtualspirits.com can be attested. According to users of Virtualspirits.com, the services provided by the platform are unmatched.
For the plans of Virtualspirits.com and further details, you can visit their official website.
Chatbots.systems is another great chatbot service provider and offers a wide range of features. Usually, chatbots.systems is used for sales and for generating leads, so it could be beneficial for you if you want to generate leads. Chatbots.systems is known for its capability of responding to multiple people at the same time without any delays. This feature is very unique and is not provided by most chatbot service providers.
Also, chatbots.systems is popular because of their simplicity and easy-to-use interface. You would not have to deal with a confusing dashboard which will save a lot of your time. Another benefit that you will get if you use chatbots.systems is that all the history of the conversations with your clients will be saved for a lifetime and you would have access to it.
Even though chatbots.systems does not provide as many tools and features as it should, its services are very helpful for small businesses; for medium or large-scale businesses, however, chatbots.systems is not suitable. If you are interested in chatbots.systems, you should visit their website by clicking on this link.
Purechat.com is also a reliable website for creating a live chatbot for your website. Unlike most chatbot service providers, purechat.com provides a 30-day free trial in which you can get an idea of whether their service is beneficial for you or not. Purechat.com was specifically created to help website owners communicate with visitors easily. One of the main things widely appreciated by its users is the simplicity of the website.
One thing that is often criticized in chatbots is customization, but in the case of purechat.com you would not have to worry about that. Purechat.com provides complete control over how you want to set up your chatbot. You can choose the positions of the widgets and their functions, and you can also adjust them for the mobile screen.
The customer support of purechat.com is outstanding; they will go out of their way to help you. Purechat.com also provides real-time analytics of user activity on your website so you can take action accordingly. Further, you can get an analysis of marketing campaigns, user experience, and traffic trends. These analytics can help you grow your website rapidly. These are the plans provided by purechat.com.
|Plan|Price|
|---|---|
|Growth|$39/month, paid annually|
|Pro|$79/month, paid annually|

You can visit their website to get more details about these plans.
Livechat.com is one of the largest and most powerful chatbot service providers currently available on the market, and it also has a mobile app. Lots of large companies use the services of livechat.com, including McDonald's, Mercedes, Adobe, and PayPal. Unlike Collect.chat, livechat.com does not provide fully conversational artificial-intelligence-based chatbots. Livechat.com keeps a balance of AI and human touch so that visitors get the best possible experience. Also, livechat.com provides a wide range of functionalities and tools to integrate within the bots, and these tools can lead to more benefits like an increase in sales. The following are the packages of livechat.com.
|Plan|Price|
|---|---|
|Starter|$16 per agent/month, billed annually|
|Team|$33 per agent/month, billed annually|
|Business|$50 per agent/month, billed annually|
|Enterprise|Request a call|
Whether it is your personal website, e-commerce store, or social media store, chatbots have numerous benefits and can help your business grow by leaps and bounds. Choosing a suitable chatbot platform for your business type is very important; done wrong, it could lead to a big loss. So you should study chatbots in detail and make the decision deliberately, as it is very crucial. The platforms listed in this article are proven to be very helpful and are suitable for almost every type of business, so you can feel confident purchasing a chatbot from any of the platforms listed above.
Comprehensive Understanding of FTTx Network
What Is FTTx Network?
FTTx, or fiber to the “x”, is a collective term describing a wide range of broadband network architecture options. Those architectures utilize optical fiber for some or all of their last-mile connectivity. FTTx is a key method used to drive next-generation access (NGA), describing a significant upgrade to the available broadband by making a step change in the speed and quality of the service. FTTx networks bring the combined advantages of higher transmission rates and lower energy consumption. The “x” stands for the fiber termination point, such as home, antenna, building, etc. Therefore, an FTTx network moves optical fiber closer to the user, which allows the latest construction, connection, and transmission techniques to be leveraged to their fullest extent and reduces the bottleneck of conventional coax.
Figure 1: What Is FTTx Network
Understanding FTTx Network Architecture and Applications
According to different termination places, the FTTx network architectures or FTTx network types include FTTH, FTTA, FTTB & FTTP, FTTN, FTTC, etc. Listed below are the most common ones.
FTTH, or fiber to the home, is certainly one of the fastest growing applications worldwide. In an FTTH deployment, optical cabling terminates at the boundary of the living space so as to reach the individual home or business office, where families and office workers alike can use the network more easily.
There are three main types of FTTH network structures, namely home run, active star networks and passive optical networks (PON).
FTTH - Home Run: A home run architecture uses a direct fiber run from the central office (CO) to the home/customer. Each is a full duplex optical link, making this generally more expensive considering fiber and electronics requirements. It is usually used in some small systems like gated communities with 2 fibers, one digital for Internet and VoIP, the other for analog CATV. Some people refer to this as a point-to-point or P2P network.
FTTH - Active Star: An active star network uses fiber from the CO to a local active node carrying multiplexed signals to be distributed to all the customers. It contains a multi-fiber cable leading from the central office to a local network switch. At the active node, uninterruptible local power is needed if services like 911 are required. And this active star network may be a more expensive one due to the electronics and power needed since there is electronic switching for each customer and connectivity to a dedicated optical link to the premises.
FTTH - PON: This FTTH architecture consists of a passive optical network (PON) that allows several customers to share the same connection, without any active components (i.e., components that generate or transform light through optical-electrical-optical conversion). This architecture usually needs a PON splitter. The PON splitter is bidirectional: signals sent downstream from the central office are broadcast to all users, and signals from the users are sent upstream and combined into one fiber to communicate with the central office. The PON splitter is an important passive component used in FTTH networks. Because sharing cuts the cost of the links substantially, this is the architecture people most often prefer when choosing an FTTH architecture.
Figure 2: FTTH - Home Run Architecture
Figure 3: FTTH - Active Star Architecture
Figure 4: FTTH - PON Architecture
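A practical consequence of sharing a splitter in the PON architecture above is the optical power budget: each doubling of the split ratio costs roughly 3 dB, so the split ratio trades directly against reach. The sketch below uses typical textbook numbers purely for illustration; they are not the specification of any particular GPON or EPON deployment.

```python
import math

def pon_reach_km(tx_dbm=3.0, rx_sensitivity_dbm=-28.0, split_ratio=32,
                 fiber_loss_db_per_km=0.35, connector_margin_db=3.0):
    """Rough maximum fiber reach for a single-splitter PON link.

    All defaults are illustrative 'typical' values, not vendor specifications.
    """
    budget = tx_dbm - rx_sensitivity_dbm                 # total allowable loss, dB
    splitter_loss = 10 * math.log10(split_ratio) + 1.0   # ideal split loss + ~1 dB excess
    remaining = budget - splitter_loss - connector_margin_db
    return remaining / fiber_loss_db_per_km              # km of fiber the budget allows

for ratio in (8, 16, 32, 64):
    print(f"1:{ratio:<3} split -> ~{pon_reach_km(split_ratio=ratio):.0f} km reach")
```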
FTTA, or fiber to the antenna, is a network architecture utilizing fiber optics to distribute the signals from a BBU (baseband unit) to a remote radio head (RRH) near the top of a cell tower, which is referred to as “fronthaul” in 5G. FTTA technology is an essential element of 5G, since massive MIMO (multiple-input and multiple-output) translates to more antennas and more cabling.
Figure 5: FTTA in 5G Application
FTTN stands for fiber to the node. It is a network where the optical fiber ends at a street cabinet, with final connections made over existing legacy copper or coaxial cables. FTTN deployments feature optical fiber that terminates at a node that lies only a few miles from the customer. From the node, copper or coax cabling branches out to the end user. Within the overarching FTTN designation, a few sub-categories exist.
Figure 6: FTTN Architecture
As a type of FTTN, FTTC means fiber to the curb. It is a topology in which the fiber runs from a central office to a curb-side distribution point, such as a pole or an enclosure, in the vicinity of the customer premises. An FTTC network consists of fiber optic cabling ending within a short distance of the end user (usually around 300 meters).
FTTB refers to fiber to the building or fiber to the basement deployment. In a FTTB network, optical cabling ends directly at the building. Yet, it is different from the typical Fiber to the Home scenario. FTTB deployments are often used to connect apartment blocks or other large buildings. In these cases, service providers bring a fiber line to a node within a building’s communication room. From there, they leverage existing copper wiring to provide network connectivity to each office or apartment within the overall building. Compared to FTTN and FTTC, FTTB is as close as network operators can get to FTTH while still using a node architecture.
Note: Fiber to the premises (FTTP) is a blanket designation including FTTH and FTTB.
Figure 7: Diagram of FTTx Architecture
How Far Can FTTx Network Go?
Because of the development of cloud computing, smart cities, and 5G, requirements for network speed and bandwidth have increased, and FTTx can meet these needs. The FTTx network provides the basic structure for low-latency, high-bandwidth fiber networks. Through this infrastructure, all current communication modes can achieve sufficient capacity and consistent connectivity. Extending the reach of a fiber network also provides long-distance signal transmission, a lightweight form factor, and immunity to electromagnetic interference. In addition, because the "x" creates so much flexibility, future FTTx options are practically unlimited. FTTx network deployment is expected to continue to accelerate in the next decade.
As more districts are discovering every day, two-way radios in K-12 schools are the centerpiece of any effective safety plan.
Radios outpace cell phones in terms of reliability, durability, and functionality, and there are tailored radio solutions available regardless of the communications challenges schools and districts face.
This safety guide with case studies provides a comprehensive overview of two-way radios in K-12 schools and the options available for overcoming infrastructure, geographic and logistical issues.
Cell Phones Vs. Radios: No Contest
In a survey conducted by Motorola, 99% of administrators, maintenance staff, teachers, transportation directors, and other school staff said their top priority was keeping students safe and secure.
But when those same school personnel consider how to keep in contact during emergencies, many automatically reach for their cell phones without realizing that the top communication tool for school safety is the two-way radio.
Here are some of the main reasons that when it comes to safety in schools, more districts are deciding that two-way radios in K-12 schools are a better choice than cell phones.
Instant One-to-Many Communications
Imagine this scenario: A lone school hall monitor witnesses a fight brewing between students and calls for help. If they’re using a cell phone, they’ll need to locate their device, scroll through their contacts, find the right number, and hope that the appropriate person picks up quickly. Meanwhile, as precious seconds tick by, the conflict has escalated to the point of danger.
Now, imagine the same hall monitor is equipped with a two-way radio. Once they see the incident starting, they can use the Push to Talk (PTT) feature to instantly alert not one person, but everyone on that channel about what's happening. The monitor can not only call for help but secure the location for students in much less time than using a cell phone.
Radios Can be Used While Driving
Two-way radios are an invaluable tool for transportation staff concerned about school bus safety. For those responsible for transporting students to and from locations, it is against U.S. Department of Transportation (DOT) regulations to use a cell phone while the vehicle is in motion.
Yet 21% of bus drivers surveyed use cell phones, and 8% use a combination of cell phones and two-way radios. Two-way radios, unlike cell phones, have been approved by the DOT.
Greater Reception and Reliability
Another concern for the use of cell phones both on and off the school premises is the chance of dead zones, lack of service, dropped calls, and more. And during large-scale emergencies, cell phone service can become overloaded and unavailable, just when it’s most needed.
Two-way radios are engineered for strong reception, and radio traffic can also be prioritized to protect the most vital communications.
Radio applications and features are also available to extend their range, ensuring campus communication across greater distances.
Longer Battery Life
Cell phones are wireless devices, but the brief battery life that makes frequent charging necessary can make it feel as if users are tethered to their chargers.
Two-way radios have an exponentially longer battery life than cell phones. In the aforementioned study, 74% of respondents regarded “using a communication device that is reliable and has long battery life” as their most important concern, making it the second highest priority.
Lower Long-Run Cost
With all of these school security benefits of using two-way radios rather than cell phones, how come schools haven’t implemented the system? Budget cuts.
In the Motorola survey, 66% of respondents attributed their lack of a two-way radio system to not having sufficient funds, yet 40% say the school districts need an updated communication system to better meet their school’s needs – particularly for emergencies.
What school officials don’t realize is that after the upfront equipment cost, two-way radios don’t carry user fees. After the initial setup, two-way radios do not run on monthly charges or handling fees like cell phone plans or school WiFi do.
Two-Way Radios and the DHS School Security Checklist
For schools looking for regulatory guidance on safety, the U.S. Department of Homeland Security has a School Security Checklist with guidelines on everything from controlling access to campus buildings to how to develop a sound communication system.
Two-way radios address many of the department’s communication system requirements for school safety.
The K-12 School Security Checklist was issued by the U.S. Department of Homeland Security Office of Infrastructure Protection as a guide and framework for administrators and security officers.
The list has five items under the Communication System section, and calls for things such as “two-way communication between faculty, staff, administrators, and security personnel,” and “regular communication with local law enforcement and emergency responders.”
Two-way radios are, by definition, “two-way communication.” Radios allow school personnel to communicate quickly and efficiently across departments and campuses either individually or in groups.
And thanks to mobile radios and accessories, the devices can be used for both campus communication and school bus safety.
Don’t Forget Training
Several other sections of the school security checklist emphasize the importance of training, and that goes for communications as well.
As important as two-way radios in K-12 schools are in strengthening safety, they must be used correctly to do their jobs. School staff, particularly those who don’t use radios regularly, should be periodically trained on their proper use so they’re ready to communicate quickly when seconds matter.
Below are two brief case studies of how K-12 schools put solutions in place to address safety and security challenges, working with Chicago Communications as two-way radio service providers.
Case Study: Fremont School District 79
When Fremont School District 79 (SD 79) wanted to improve school safety at its three Chicagoland campuses, administrators were focused on communications, both among staff and with public safety agencies, and we knew two-way radios were the answer.
We’re proud that we were able to provide a solution that blended Motorola two-way radios and a Teldio application to make the district’s 2,200 students safer.
Fremont SD 79 covers 36 square miles in the northwestern Chicago suburb of Mundelein in a part of Lake County that’s semi-rural. Forest preserve and rolling hills make it pretty isolated, and cell coverage can be spotty.
Cell Phones Weren’t Making the Grade
When the district came to us, they were using cell phones to keep in touch, and several things weren’t going well.
“Our cell signal was weak,” says Mike Tanner, the district’s director of business services. “It didn’t reach into the lower levels of our schools where we really needed it, such as the cafeteria, storage, and mechanical rooms. Our cell coverage was not only unreliable, but Nextel stopped supporting the push-to-talk network, which eliminated the redundancy we wanted to have in a communication system.”
From a school safety standpoint, staff or public safety agencies couldn’t reliably reach each other. Even when they did make contact, the communication was public, not private, and confined to one-on-one conversations.
Staff Swap Out Cell Phones for Motorola Two-Way Radios
Tanner is a military veteran who’s familiar with using two-way radios. He understands why radios are better equipped than cell phones for what the district needs.
“We recognized the need for a reliable, always available communication system,” he says. “Particularly after the Sandy Hook Elementary School shooting, we were looking for a more robust security solution for our schools, and that was a two-way radio.”
The first step was to transition staff from cell phones to lightweight MOTOTRBO SL 7550 portable radios.
The 40 SL 7550 radios were provided to front-line administrators, the business office, the transportation director, custodians, technology staff, playground aides, and PE instructors who often teach students off campus.
“The SL 7550 radios are lightweight, unobtrusive, and easily worn with an outfit,” Tanner says. “We intentionally selected these devices to fit the people using them. Our personnel is much more likely to carry the radios, and they are really embracing them.”
Teldio App Ensures Immediate Contact with 9-1-1 Dispatchers
In addition to the two-way radios, we recommended that the district use an advanced telephone interconnect app from Teldio, a certified Motorola applications developer.
The app is designed to allow radio users to receive and make phone calls directly on their MOTOTRBO digital radios, including 9-1-1.
“This capability is important for all our personnel, but particularly for those who are outside monitoring playgrounds and sports activities and don’t have access to a landline in the building in an emergency,” Tanner says.
Radios Deliver Benefits
The savvy switch to two-way radios has transformed school safety at the district’s campuses and given staff more peace of mind.
“Our people are very confident having MOTOTRBO radios,” Tanner says.
Case Study: Evanston Township High School DAS
Chicago Communications was proud to be a partner on a high school Distributed Antenna System (DAS) project that’s improving two-way radio and cell communications for staff, students, and public safety officials.
Evanston Township High School, located about 14 miles north of Chicago, is a large, well-known, and unique school with an equally unique set of communications challenges.
Partners on the project designed and installed the high school DAS, or distributed antenna system, at no cost to the school, and they’ll cover maintenance for the first five years.
In addition to Chicomm, the partners were: Cobham Wireless Radvisory 5G, RFS, Galtronics, Graybar, and Fullerton Engineering. AT&T gave testing and engineering time.
By the Numbers
With a whopping 1.3 million square feet of space over 62 acres, Evanston Township High School (ETHS) is the largest high school under one roof in the country.
Founded in 1893, it has nearly 3,400 students and close to 900 teachers and staff. The school is served by the Evanston Police Department.
Challenges and Solutions
Project partners overcame challenges related to the school’s size, age, and sprawl, as well as the fact that classes were in session at the time, according to Dennis Ondriska, Chicomm DAS engineer on the project.
Challenge: The historic building had no pathways for cable.
Solution: We had to create new pathways for the cable where there weren’t any cable trays or we used installation hardware to put in the cable.
Challenge: Concrete walls that are 1-2 feet thick.
Solution: For the wall thickness, we either had to bypass or drill through the walls to run additional cable. We also increased the number of antennas to augment the coverage.
Challenge: Large, sprawling campus.
Solution: We added more antennas. DAS antennas are not like a radio or car antenna – they are mounted in or on the ceiling. Some look like a metal plate with an antenna and others look like a smoke detector.
Challenge: School was in session.
Solution: We worked after hours and on weekends to avoid disrupting students.
The new system has improved two-way radio and cell connectivity for ETHS security officers and staff as well as first responders, increasing everyone’s safety. We’re glad to have been part of such a rewarding project.
Better Coverage, Safer Students and Staff
Working with qualified two-way radio service providers guarantees you quality customer service and maintenance on your radios. By switching over to two-way radios from cell phones, school districts can save not only time and effort but also money. | <urn:uuid:c94649de-4df7-489c-a2a4-66077a9b5d86> | CC-MAIN-2022-40 | https://www.chicomm.com/blog/two-way-radios-in-k-12-schools-safety-guide-with-case-studies | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334802.16/warc/CC-MAIN-20220926051040-20220926081040-00753.warc.gz | en | 0.948141 | 2,535 | 2.953125 | 3 |
The term “information warfare” might call to mind Russian trolls exploiting social media, but there has always been a lot more to it than disinformation campaigns on Twitter, or for that matter the airborne propaganda leaflets or Tokyo Roses of wars gone by. Information warfare, or IW, is a key element of every military operation. It spans cyberspace and the electromagnetic spectrum, involving communications of all kinds such as Global Positioning System readings and satellite operations, as well as economic transactions, every level of surveillance, and old-school radio and TV.
Keeping pace with growing cyber threats is an uphill battle for Federal agencies as network complexity increases and the boundaries of networks extend to systems and devices not always under the control of their IT organizations.
The potential of artificial intelligence opens up the abundant future of game-changing machine-based applications in science, medicine, national defense, business, and just about every other area. But getting there while maintaining the U.S. lead in AI research and development will hinge on two old-school constants of innovation: money and people.
Amid growing fears of large-scale cyberattacks–ranging from attacks on infrastructure, to cyber espionage that threatens national security, to a “terabyte of death”–Congressional lawmakers are calling for a more clearly defined strategy for responding to such attacks.
The latest edition of the Army’s annual Cyber X-Games exercise is designed to let Reserve and other cyber warriors team up to train in dealing with real-world situations. It is focused on protecting U.S. infrastructure, an area somewhat outside the norm for the exercises, but one that reflects an emerging potential battleground on the cyber landscape.
The Defense Information Systems Agency (DISA) is considering limiting the network damage that can result from Web browsing by having employees take it outside.
The possibilities of quantum computing have been floating on the horizon for a while now, at least since renowned physicist Richard Feynman dreamed up the idea in 1982. But like the horizon itself (at least in a world that isn’t flat), it always seems to recede despite all efforts to close in on it. Until now.
With the Department of Veterans Affairs (VA) formally signing on last month to adopt the same electronic health records system as the Department of Defense (DoD), the two agencies are putting a lot of chips on a solution to a problem that history suggests is pretty risky. | <urn:uuid:04827a53-1f17-4c90-bddb-1ddd8198968b> | CC-MAIN-2022-40 | https://origin.meritalk.com/author/kmccaney/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335286.15/warc/CC-MAIN-20220928212030-20220929002030-00753.warc.gz | en | 0.939209 | 492 | 2.78125 | 3 |
Researchers at the General Atomics-operated DIII-D National Fusion Facility in San Diego have released a report on plasma turbulence and electron behavior that could support the development of nuclear fusion power plants.
The article, published in the academic journal Nuclear Fusion, provides insight into electron density and transmission through plasma that may be used to predict fusion plasma performance and improve power generation, General Atomics said Tuesday.
DIII-D scientists conducted experiments on plasma collisionality and found that low collisionality results in greater peak electron densities and an internal barrier that alters plasma turbulence.
According to the team's findings, the relationship between particle collisions and peak densities impacts other plasma characteristics and potentially results in better nuclear fusion capacities.
"This work substantially improves the understanding of electron behavior in the plasma core, which is an area of great importance for increasing fusion gain," said David Hill, director of DIII-D. "This is another important step toward practical fusion energy in future commercial reactors."
General Atomics partnered with various academic institutions in the U.S., Finland and Sweden as part of the research effort. | <urn:uuid:07422d6b-24f3-48a8-8ec6-492a53d5bbe8> | CC-MAIN-2022-40 | https://www.executivebiz.com/2020/05/general-atomics-led-team-at-diii-d-natl-fusion-facility-unveils-new-nuclear-energy-research/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335444.58/warc/CC-MAIN-20220930051717-20220930081717-00753.warc.gz | en | 0.913731 | 232 | 2.734375 | 3 |
What Is a Data Breach
A data breach or data leak is the release of sensitive, confidential or protected data to an untrusted environment. Data breaches can occur as a result of a hacker attack, an inside job by individuals currently or previously employed by an organization, or unintentional loss or exposure of data.
Data breaches can involve information leakage, also known as exfiltration—unauthorized copying or transmission of data, without affecting the source data. In other cases, breaches incur complete loss of data—as in ransomware attacks, which involve hackers encrypting data to deny access by the data owner.
In other words, in a data breach, hackers or employees release or leak sensitive data. As a result, the data might be lost, or used by perpetrators for various malicious purposes.
Types of Information Leaked in a Data Breach
A data breach can result in the leak of several types of information:
- Financial data—such as credit card numbers, bank details, tax forms, invoices, financial statements
- Medical or Personal Health Information (PHI)—as defined in the US HIPAA standard, “information that is created by a health care provider [and] relates to the past, present, or future physical or mental health or condition of any individual”
- Personally Identifiable Information (PII)—information that can be used to identify, contact or locate a person
- Intellectual property—such as patents, trade secrets, blueprints, customer lists, contracts
- Vulnerable and sensitive information (usually of military or political nature)—such as meeting recordings or protocols, agreements, classified documents
Data Breach Costs
The cost of a data breach can be devastating for organizations—in 2017, the average data breach cost its victim $3.5 million.
The immediate business costs of a breach include:
- Customer breach notifications
- Government fines
- Public relations costs
- Attorney fees
- Cyber security investigations
- Operational disruption
- Drop in stock price
Data breaches also have indirect, long term costs:
- Damage to brand and reputation
- Reduced trust by customers and partners
- Loss of customer relationships
- Loss of intellectual property
- Insurance premium increases
Causes of Information Leakage
The following are common causes of information leaks at organizations.
Insider threats include disgruntled employees, former employees who still retain credentials to sensitive systems, or business partners. They might be motivated by financial gain, commercially valuable information, or a desire for revenge.
Payment fraud is an attempt to create false or illegal transactions. Common scenarios are credit card breaches resulting in fraud, fake returns, and triangulation frauds, in which attackers open fake online stores with extremely low prices and use the payment details they obtain to buy from real stores.
Loss or theft
Organizations store sensitive information on devices such as mobile phones, laptop computers, thumb drives, portable hard drives, or even desktop computers and servers. Any of these devices could be physically stolen by an attacker, or unwittingly lost by organization staff, resulting in a breach.
Many data breaches are not caused by an attack, but rather by unintentional exposure of sensitive information. For example, employees might view sensitive data and save it to a non-secure location, or IT staff might mistakenly expose a sensitive internal server to the Internet.
Data Breach Cycle
An attacker planning a data breach will typically go through the following steps, until they successfully obtain the sensitive data:
- Reconnaissance—an attacker starts by identifying potential targets. These could be IT systems, ports or protocols that are accessible and easy to penetrate or compromise. The attacker can also plan a social engineering attack against individuals in the company who have privileged access to systems.
- Intrusion and presence—the attacker breaches the organization’s security perimeter and gains a foothold in the organization’s network.
- Lateral movement and privilege escalation—the attacker’s entry point may not allow them to immediately obtain the sensitive data. The attacker will attempt to improve their position by moving to other systems and user accounts, and compromising them, until they provide access to the desired data.
- Exfiltration—the attacker transfers the sensitive data outside the organization’s network, and either uses the data for personal gain, resells it on the black market, or contacts the organization to demand ransom.
Data Leakage Prevention
In 2017, the average data breach in the United States took 206 days to detect. A data leak frequently occurs without an organization's knowledge, and security experts agree that data leaks are not completely preventable. Therefore, sound practices must be in place to detect, contain and remediate data breaches.
In addition, here are best practices organizations can use to prevent data breaches:
- Vulnerability assessments—systematic review of security weaknesses in organizational systems, with continuous action to remediate high priority security gaps.
- Penetration testing—simulated cyber attacks against IT systems to check exploitable vulnerabilities.
- Training and awareness—many breaches occur via unintentional or negligent exposure of data, or social engineering attacks such as Phishing. Preventive measures include training staff on security procedures, helping them avoid social engineering attacks, and clearly labeling sensitive data.
- Mitigation and recovery plans—security staff must document known threats to sensitive systems, and maintain plans for responding, containing, mitigating and recovering from security incidents.
- Defending the network perimeter—security tools can be used to deny unauthorized access and prevent many types of attacks against information systems. For example, Imperva’s Web Application Firewall protects from all common web application security threats such as SQL injection, Cross Site Scripting (XSS) and remote file inclusion (RFI).
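To make the SQL injection point concrete, here is a minimal sketch of the parameterized-query pattern that well-written application code uses alongside a WAF. It relies only on Python's built-in sqlite3 module, and the users table and data are hypothetical; the point is that the driver treats input strictly as data, so a classic injection payload returns nothing instead of dumping the table.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT, email TEXT)")
conn.execute("INSERT INTO users (name, email) VALUES (?, ?)", ("alice", "alice@example.com"))

def find_user(connection, username):
    # The ? placeholder keeps `username` as data, so it can never rewrite the SQL itself.
    cur = connection.execute("SELECT id, email FROM users WHERE name = ?", (username,))
    return cur.fetchone()

print(find_user(conn, "alice"))               # (1, 'alice@example.com')
print(find_user(conn, "alice' OR '1'='1"))    # None: the payload is treated as a literal name
```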
Imperva Data Protection Solutions
Imperva’s industry-leading data security solution protects your data against breaches wherever it lives—on premises, in the cloud and in hybrid environments. It also provides security and IT teams with full visibility into how the data is being accessed, used, and moved around the organization.
Our comprehensive approach relies on multiple layers of protection, including:
- Database firewall—blocks SQL injection and other threats, while evaluating for known vulnerabilities.
- User rights management—monitors data access and activities of privileged users to identify excessive, inappropriate, and unused privileges.
- Data masking and encryption—obfuscates sensitive data so it would be useless to the bad actor, even if somehow extracted.
- Data loss prevention (DLP)—inspects data in motion, at rest on servers, in cloud storage, or on endpoint devices.
- User behavior analytics—establishes baselines of data access behavior, uses machine learning to detect and alert on abnormal and potentially risky activity.
- Data discovery and classification—reveals the location, volume, and context of data on premises and in the cloud.
- Database activity monitoring—monitors relational databases, data warehouses, big data and mainframes to generate real-time alerts on policy violations.
- Alert prioritization—Imperva uses AI and machine learning technology to look across the stream of security events and prioritize the ones that matter most. | <urn:uuid:4be2c199-16a1-459b-a8c9-6ea933bac5d3> | CC-MAIN-2022-40 | https://www.imperva.com/learn/data-security/data-breach/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030336674.94/warc/CC-MAIN-20221001132802-20221001162802-00753.warc.gz | en | 0.903524 | 1,483 | 3.375 | 3 |
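As a generic illustration of the masking and pseudonymization idea (not a description of Imperva's actual implementation), the sketch below shows two common techniques: partial masking that preserves some analytic value, and salted hashing that replaces an identifier with a stable but irreversible token. The sample values and salt are made up.

```python
import hashlib

def mask_email(email: str) -> str:
    """Keep the domain for analytics, hide the mailbox name."""
    local, _, domain = email.partition("@")
    return f"{local[0]}***@{domain}" if local else email

def pseudonymize(value: str, salt: str) -> str:
    """Replace an identifier with a stable, irreversible token."""
    return hashlib.sha256((salt + value).encode()).hexdigest()[:16]

print(mask_email("jane.doe@example.com"))                          # j***@example.com
print(pseudonymize("4111-1111-1111-1111", salt="per-dataset-secret"))
```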
Many organizations, irrespective of their size and the industry they belong to, are exposed to cybersecurity threats. With the ongoing digitization in the business world and dependence on customer data, the chances of a cyber security threat increase. Cybercriminals look out for opportunities to use the high volume of information that big businesses are churning out to their advantage.
Today firms store data in multi-cloud environments, on-premise and on-data storage repositories that may or may not adhere to various security standards. Hackers use the loopholes in the workflows within the system to gain unauthorized access and steal data.
So, what’s the solution, and how can businesses prevent these attacks? The answer lies in combining big data analytics with cybersecurity. As far as the latter is concerned, Infosec4TC provides cyber security courses to deal with cyber-attacks. But we need to merge both of these domains to achieve security analytics in real time.
What’s the Synergy Between Big Data Analytics And Cyber Security?
With the increase in the volume and variety of cyber-attacks, the need for a well-driven, innovative real-time cyber security defense strategy increases. No longer can businesses rely on outdated threat detection tools, intrusion response tools, and firewalls to defend against modern cybersecurity threats.
In the era of big data and the Internet of Things, it’s essential for businesses to incorporate big data analytics into cybersecurity for fast detection of security attacks.
Big data analytics helps in the fast processing of large volumes of data, identifying anomalies and attack patterns, reducing vulnerabilities, and improving overall security. Through data analysis, firms will know about potential cyber threats resulting in a superior cyber defense strategy.
How Big Data Analytics Helps in Preventing Cybersecurity Threats?
Future Predictions are Possible through Machine Learning
By combining machine learning algorithms with data, firms can evaluate historical as well as current data to study and predict future threat patterns. Big data analytics helps identify attackers' touchpoints before an attack takes place, so a prevention strategy can be prepared in advance.
Big data analytics is key in developing a real-time response to data breaches. Anomaly detection, malware detection, and look-alike predictions are popular use cases.
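A minimal sketch of the anomaly-detection use case is shown below, assuming scikit-learn and NumPy are available. The feature set, simulated sessions and thresholds are illustrative only; a production deployment would train on real telemetry.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Hypothetical features per user session: [MB transferred, login hour, failed logins]
normal = np.column_stack([
    rng.normal(20, 5, 1000),        # typical transfer sizes
    rng.normal(14, 3, 1000),        # mostly business hours
    rng.poisson(0.2, 1000),         # the occasional typo
])
suspicious = np.array([[900, 3, 7]])  # huge transfer at 3 a.m. after several failed logins

model = IsolationForest(contamination=0.01, random_state=42).fit(normal)
print(model.predict(suspicious))     # [-1] -> flagged as an anomaly
print(model.predict(normal[:3]))     # mostly [1 1 1] -> treated as normal
```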
Automate Workflow with Big Data Analytics
If employees within the organization are ignorant about security loopholes or the types of cyber-attacks, cyber-attacks will continue to increase. Inability to react in a threat situation will result in security breaches causing monetary as well as non-monetary losses for the organization. This is where online cyber security courses will help businesses provide the necessary information to employees about the threat environment and how to deal with it.
Through big data analytics, businesses can automate the process of monitoring the activities of systems to keep threats away. Automated controls for cyber security and fraud detection play a key role in uplifting the security of the firm.
Know About Intrusion Attack
Attackers look out for ways to bring the network down. To monitor and hunt down vulnerabilities in real time, firms rely on big data analytics; through real-time analytics, businesses can enhance their intrusion detection systems to easily detect any malicious activity in the network.
Automated systems prevent attacks before hackers gain unauthorized access to the system. For instance, firms can use data from good/safe domains to track the overall security.
Risk Management Becomes Easy
Through big data analytics, firms get to know about the potential dangers to the network, resulting in a strong defense. Big data analytics gives accurate information about system activities, vulnerabilities in the system, and other cybersecurity data. This information helps in finding out the root cause of the problem and how the loophole originated.
In the end, we can say, big data analytics plays a key role in preventing and dealing with cyber security attacks. It’s the best way forward when it comes to detecting threats at the earliest. To improve the security of the network, firms have to employ the best cyber security practices, including giving training to employees through top cyber security courses.
Are you ready to employ big data analytics? If so, hopefully this blog will help you out! | <urn:uuid:d8c37187-a6fb-40cf-bbd3-9aa8bff4e475> | CC-MAIN-2022-40 | https://www.infosec4tc.com/2021/12/17/threat-or-opportunity-big-data-analytics-and-cyber-security/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337360.41/warc/CC-MAIN-20221002212623-20221003002623-00753.warc.gz | en | 0.906331 | 841 | 2.515625 | 3 |
Artificial intelligence (AI) refers to the ability of machines to understand the world around them, learn and make decisions, in a similar way to the human brain. Thanks to AI, machines are getting smarter every day.
Contrary to popular depictions of AI, this doesn’t mean that machines will become our evil overlords (not yet, anyway!). When you strip away the sci-fi predictions and “be afraid” hype, it’s clear that AI is making a very real, very positive contribution to the world – particularly when it comes to AI in business.
You’ll already be familiar with some of the ways in which organisations are harnessing AI:
- Smart assistants, including Siri and Alexa
- Customer service or helpdesk chatbots
- Facial recognition technology, like that used by Facebook
- Personalised recommendations on platforms such as Amazon and Netflix
In this article, I’m going to explore some other examples of AI in business – examples that you might not have come across before. In general, all of the following examples fall into two main categories:
- Delighting customers with smart products and services
- Improving business operations
Examples of smart, AI-enabled products and services
- Roomba robot vacuums. You know those cute little vacuum cleaners that look like a giant hockey puck? They use AI to scan the room, pinpoint obstacles and work out how much hoovering is needed based on the size of the room. They also learn and remember the most efficient routes around the room.
- Twitter uses AI to identify hate speech, fake news and illegal content. In one six-month period, the platform removed nearly 300,000 terrorist accounts that had been identified by AI.
- Likewise, Instagram is using AI to fight cyberbullying and take down offensive comments.
- Betterment robo-advisors. There are lots of fintech companies offering robo-advice these days, but Betterment are the biggest and one of the pioneers in the field. Robo-advisors are online financial advisors that use AI to deliver personalised financial advice in an accessible, cost-effective way. This financial revolution promises to open up financial planning to the masses.
- Nest smart thermostats. If you’ve ever railed at the cost of your energy bills, this product might be for you. The smart thermostat monitors activity in your home and begins to understand the occupants’ behaviour patterns. Then, based on what it knows about how you and your loved ones use the home, it dynamically adjusts the temperature to keep the home comfortable, without wasting energy.
Examples of smarter business operations
- Predictive maintenance is helping companies repair, replace or service parts and machinery at the optimum time – before they break down. Siemens AG, one of the biggest railway infrastructure providers in the world, is one example of this in action. The company uses IoT and AI technology to improve the reliability of trains, repair assets before they break down, and provide rail operators with uptime guarantees.
- KenSci’s risk prediction platform uses AI techniques to help identify fraudulent healthcare claims, which make healthcare more expensive for everyone. The system was able to identify more than $1 million in fraudulent claims from just one dataset.
- Domino's is trialling Starship Technologies' automated delivery robots to deliver pizzas in Germany. These little delivery vehicles, which have a top speed of 10 mph, are proving more cost-effective and efficient for short-distance deliveries around town compared to delivery trucks and cars. Just Eat has begun using the same technology to deliver takeaways in London.
- IBM’s Chef Watson tool uses AI to help chefs and restaurants develop recipes and suggest innovative flavour combinations.
- Burberry is using AI to combat counterfeit products and improve the customer experience. For the latter, the company’s reward and loyalty programmes capture customer data which is then analysed to provide a more personalised shopping experience for each customer.
These are just a few exciting examples of how businesses are harnessing AI to delight their customers and improve operations. No doubt, we’ll see many more examples and much wider application of AI in business over the coming years.
Taking a strategic approach to AI in business
Whether you want to use AI to delight your customers, improve your business operations, or both, it’s vital you take a strategic approach.
What do I mean by that? For one thing, I mean creating a dedicated AI strategy (separate to your data strategy) that sets out how you want to use AI and how you’ll put that plan into action. What’s more, applying AI in a strategic way means linking it to your company-wide strategy. In other words, what is the organisation trying to achieve and how can AI help deliver those strategic objectives?
When you take a strategic approach like this, you can focus your AI efforts in the areas that will deliver the greatest value for the business. If you need help with any aspect of AI in your business then get in touch. I’ve worked with some of the world’s most prominent companies to create their AI strategies, and I’m here to help your business approach AI in a strategic way.
Where to go from here
If you would like to know more about AI in business, check out my articles on: | <urn:uuid:08307514-72df-4b85-82d1-b101ef7bfe95> | CC-MAIN-2022-40 | https://bernardmarr.com/what-is-artificial-intelligence-ai-in-business-10-practical-examples/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337480.10/warc/CC-MAIN-20221004054641-20221004084641-00753.warc.gz | en | 0.933445 | 1,115 | 2.90625 | 3 |
Autonomous vehicles are a natural component of any smart city/smart transportation strategy. City traffic congestion is a well-known problem that can make or break the reputation of a city as being a liveable place. Any major initiative that makes moving the public from one point to another in the most efficient manner is good for everyone.
Singapore has always taken steps to improve public transport and provide a viable alternative to owning a vehicle. These initiatives are renowned worldwide, and Singapore's public transport and road management are held up as best practice globally. To enhance the commuter experience, Singapore introduced smart bus stops a year ago, and in the future we may see a complete ecosystem of smart bus stops.
Following a similar theme, a number of projects have been announced in the last few years to take public transport to the next level. Singapore is also embracing autonomous vehicles, with a number of initiatives launched over the last few years. Singapore’s Nanyang Technological University (NTU), and Volvo Buses in partnership with Singapore’s Land Transport Authority (LTA) have launched the world’s first full-size autonomous electric bus stretching 12 metres long with a capacity of around 80 passengers. As a part of public trials, the bus is being tested on fixed routes and services will subsequently extend to the public roads.
The buses are equipped with autonomous driving functionality and provide quiet, emission-free operation, saving up to 80% energy compared to an equivalent sized diesel bus. The bus has advanced features such as light detection and ranging sensors (LIDARs), 3D stereo-vision cameras, and an advanced GPS system that uses real-time kinematics, connected to an inertial measurement unit (IMU) to measure lateral and angular motion and help with navigation over varied terrain.
Real-world concerns for autonomous buses
The bus has undergone preliminary rounds of rigorous testing at the Centre of Excellence for Testing and Research of Autonomous vehicles at NTU (CETRAN). Confirming maximum safety and reliability, the AI system in the bus is protected with industry-leading cybersecurity measures. Speaking on the subject, Ecosystm’s Executive Analyst, Vernon Turner says that “While safety will always be the leading concern, software and hardware security and reliability will be the underpinning forces that make passengers comfortable with autonomous vehicles. The autonomous vehicle’s ecosystem is complex because the reliability of the vehicle is as much an IT and telecom function as it is an industrial manufacturing process.”
What do autonomous buses mean for the industry and how will they benefit it?
In most cities, public transportation is conducted in ‘restricted’ lanes (especially for buses), and therefore the routes are often consistent, and the operating environments can be continually monitored and matched for exceptions. The legislation for autonomous vehicles has to be carefully crafted to ensure the highest level of public safety while not stifling innovation.
“The digital impact of autonomous buses opens up a host of new services both for the transportation companies as well as the passengers. I wouldn’t be surprised to see transportation companies being sold public transportation vehicles such as buses as ‘buses as a service’ whereby the vehicles are managed in a 100% OPEX manner and have no CAPEX value! There will be a rich source of operational data from IoT-based sensors that the suppliers and the transportation companies will agree to pay for multiple usage metrics,” Says Turner. “Innovation will also appear in the transportation workflow – thus creating investment in real-time mapping, high-speed telecom networks, and in the case of an ‘EV’ or electric bus, the charging/recharging energy network. As the IT infrastructure is implemented, I would anticipate efficiencies in bus usage would increase with better route management. Passengers, buses and the routes become integrated into a better passenger and city life experience.”
To that end, the industry is excited to use public transportation for their autonomous vehicle programs.
Environmental Impact of autonomous buses
The governments of various nations are spending enormous amounts on reducing emissions, and buses are inherently inefficient when it comes to diesel consumption, only getting between 1 and 4 kilometres per litre. “Switching them to electric vehicles while at the same time running them as autonomous vehicles in a very efficient manner could have a marked impact on the environment,” says Turner. “While the heavy workload for buses might quickly drain any EV batteries, having them work in a fully autonomous, dedicated bus lane should mitigate that energy cost. This could make it a feasible alternative to combustion engine vehicles while at the same time being highly friendly to the environment.”
By David Weingot, Founder and CEO, DMAC Security
Our modern world is full of various types of physical and cyber-related threats. The war in Ukraine is ramping up Russian attacks on American targets, and the talk of a cyberattack is not out of the realm of possibility. It is essential for businesses to be prepared for any kind of attack, and that includes a combination of both physical and cybersecurity. As the Cybersecurity and Infrastructure Security Agency states “A successful cyber or physical attack on industrial control systems and networks can disrupt operations or even deny critical services to society.”
“Together, cyber and physical assets represent a significant amount of risk to physical security and cybersecurity – each can be targeted, separately or simultaneously, to result in compromised systems and/or infrastructure.”
What is Physical Security?
Physical security refers to personnel who are assigned to keep people, property, and other physical resources safe from danger. Often these professionals are called security guards, officers, or security specialists.
Many organizations use physical security to keep customers, employees, vendors, and guests safe. Examples include schools, hospitals, banks, retail stores, corporations, government facilities, etc. Physical security covers a lot of different responsibilities such as patrolling grounds, monitoring inbound and outgoing traffic, surveillance, locking and unlocking buildings, securing off-limits areas, responding to alarms, dealing with emergencies, first aid, and much more.
Why is Physical Security Needed in a Cyber Attack?
These days physical and cybersecurity go hand-in-hand. Devices, systems, and networked equipment are often targeted to prepare for a more significant cyber-attack. For example, in 2021, 150,000 security cameras were hijacked, allowing criminals to access surveillance feeds from hospitals, jails, police stations, and even schools.
Companies are using more technology than ever before, and a lot of it is vulnerable to hacking. Cybercriminals often use botnets to take over thousands of IoT devices and then use them for attacks. Companies may not even be aware that their devices have been compromised.
It’s essential for physical security personnel to work closely with IT departments to ensure the safety of physical devices and maintain strict access to them to prevent cyber-attacks. Another big area for concern is BYOD (bring your own device). Physical security can use sensors to monitor for and prevent malicious devices from entering the building (e.g., removable devices like USB drives, cell phones with malware, etc.).
Hundreds of data breaches have put companies, vendors, employees, and customers at risk. Security personnel should be stationed wherever data is stored and protect servers, computers, mobile devices, and other networked technology to prevent any unauthorized access. A data breach can devastate a company bankrupting its resources.
Many newer corporate structures use automation to control heat and ventilation. Abusers may gain access and alter the environment to overheat or destroy specific technology. Other targeted areas may include communications, hardware or software vulnerabilities, and weak password management.
Along with the physical aspect of security, IT departments should also enhance cybersecurity measures and network monitoring to cover all angles that a cyber-terrorist might use to gain access.
The Bottom Line
Technology continues to evolve at a rapid pace. Cybercriminals are innovating new attack methods all the time. It’s critical for any business, especially supply chain companies, to keep up with the threats by using both cybersecurity best practices with ample physical security to prevent access that could cause further damage and keep everyone calm and organized in the event of an attack.
About the Author
David Weingot is the founder and CEO of DMAC Security, an established full-service armed and unarmed security firm built upon over 30 years of law enforcement experience and management. | <urn:uuid:5930694b-dcfa-430c-8aef-a3975a339888> | CC-MAIN-2022-40 | https://www.cyberdefensemagazine.com/why-physical/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334871.54/warc/CC-MAIN-20220926113251-20220926143251-00153.warc.gz | en | 0.941763 | 764 | 3.171875 | 3 |
What are the biggest challenges for 5G radio access networks?
The process of defining the next generation of wireless networks is now well underway and the development of 5G technologies have been stepped up a gear by network providers around the globe. But what is the current state of play and what will be the biggest challenges for 5G radio access networks to succeed in their ambition?
Each generation of wireless networks follow a set of universal standards that are set out to ensure that new technologies are produced in a safe and reliable way, but that they also have governmental backing and support.
However, many would argue that the most important thing that the next set of wireless standards addresses is interoperability; the ability for differently manufactured devices to work together.
With a huge range of products from a mix of different industries set to become connected, operators will need to define and manage many new commercial arrangements and pricing structures that ensure support across the board. In many cases, this could include the need for rival operators to come together and look at network sharing services.
As the global vision for 5G begins to be debated, researched and tested (before eventually being standardised), one of the key things that will need to be assessed is the network and spectrum requirements for it to be made possible.
Limited spectrum availability is a big issue in the development of 5G. The bandwidths requirements of 5G mean higher frequency spectrum will be fundamental in delivering high speed, high quality connectivity.
It is widely accepted that 5G will require spectrum in bands higher than 28GHz – also known as millimetre wave. That’s because the physical properties of these higher frequency waves mean the capacity and potential bandwidth is far greater, however, the maximum distance of the wave is far shorter than waves of much lower frequency.
The 28GHz band is considered as the only band that can distribute ultra-wide bandwidth with over 800MHz to two to three mobile network providers, allowing them to provide ultra-fast mobile network with more than 20Gbps which is one of the conditions of 5G technologies.
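A back-of-the-envelope Shannon-capacity calculation illustrates why that much contiguous spectrum matters. The 20 dB SNR and the four spatial streams assumed below are illustrative values, not measurements, but they show how 800 MHz of millimetre-wave bandwidth plus spatial multiplexing gets past the 20 Gbps mark.

```python
import math

def shannon_capacity_gbps(bandwidth_hz: float, snr_db: float) -> float:
    """Upper bound on error-free throughput for a single link (Shannon-Hartley theorem)."""
    snr_linear = 10 ** (snr_db / 10)
    return bandwidth_hz * math.log2(1 + snr_linear) / 1e9

# 800 MHz of millimetre-wave spectrum at an assumed 20 dB SNR
print(round(shannon_capacity_gbps(800e6, 20), 1), "Gbps")      # ~5.3 Gbps per stream
# Spatial multiplexing (e.g., 4 MIMO streams) pushes the aggregate past 20 Gbps
print(round(4 * shannon_capacity_gbps(800e6, 20), 1), "Gbps")  # ~21.3 Gbps
```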
A key difference between 5G and earlier generations of mobile technology is that the focus of research is on finding the best techniques to improve spectrum utilisation, rather than on improving the spectrum efficiency. In other words, the bits per hertz per unit area, rather than just bits per hertz.
Proposed tech to improve spectrum availability includes:
- Massive MIMO
- Super-dense meshed cells
- Macro-assisted small cells (‘phantom cells’)
Transport Network Capacity
5G is set to provide access to information and sharing of data anywhere and at any time to anyone and anything. This will give rise to a truly ‘Networked Society’. However, as 5G radio-access technologies develop, transport networks will need to adapt to a new and challenging network landscape.
Because the expectations for 5G include support for a massive range of services such as IoT (internet of things), industrial applications and highly scalable video-distribution, a new radio access model will need to be developed to manage all of this.
The level of flexibility that is required in the transport network is dependent on how the 5G radio is deployed and it will need to be able to reach very high levels of capacity.
Some of the technologies that are currently being developed and evolved to tackle this significant increase in capacity requirements include:
- Small cells
- MU MIMO (multi-user MIMO)
A long road ahead
These are just a few challenges as we move forward in the development of 5G, and there will be problems that haven’t yet been realised. These will need to be solved before we begin to experience the true benefits of the next generation of wireless networks. But, as with all technologies, without facing challenges and then working hard together to solve them, we wouldn’t have driven the innovations that make up so much of today’s telecoms network infrastructure.
Get all of our latest news sent to your inbox each month. | <urn:uuid:9328e0cc-b9cd-4161-8bbb-6fea00c4d411> | CC-MAIN-2022-40 | https://www.carritech.com/news/biggest-challenges-5g-radio-access-networks/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335304.71/warc/CC-MAIN-20220929034214-20220929064214-00153.warc.gz | en | 0.951299 | 850 | 2.796875 | 3 |
The IoT (Internet of Things) is shaking up the networking space and paving the way for machine-to-machine (M2M) communication and automated processes. From connected cars and smart homes to remote surgery and robotics – the opportunity and potential is endless.
Recent figures indicate that there are an estimated 8.4 billion IoT devices in use, and the number is expected to reach over 20 billion by 2020. Today, IoT encompasses a vast technological umbrella and its deployment comes in many flavors. Chief among them are the managed use cases of Industrial Internet of Things (IIoT), and the unmanaged use cases of Consumer Internet of Things (CIoT).
Although IIoT can appear forbiddingly complex, security management is in fact easily achievable. The key here is a solution that controls the traffic stream between devices and the application(s), guaranteeing best-in-class service and ensuring protocol conformity. It is also crucial to secure communications via cryptography (TLS) and stateful security services (policing and vulnerability protection).
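As a minimal sketch of the "secure communications via TLS" requirement, the snippet below opens a certificate-verified TLS 1.2+ connection from a device or gateway to a hypothetical telemetry endpoint, using only Python's standard library. The hostname, port and payload are placeholders; real deployments would typically also use client certificates and an application protocol such as MQTT.

```python
import socket
import ssl

context = ssl.create_default_context()            # verifies the server certificate chain
context.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse legacy protocol versions

# Hypothetical ingestion endpoint for device telemetry
with socket.create_connection(("telemetry.example.com", 8883)) as sock:
    with context.wrap_socket(sock, server_hostname="telemetry.example.com") as tls:
        tls.sendall(b'{"device_id": "pump-17", "temp_c": 41.3}')
```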
A key IIoT deployment challenge is the changing characteristics of traffic metrics. IIoT devices are massive in number, sessions are long (months or even years), and traffic volume is usually very low. Terminating idling sessions is not always an option. Indeed, the ‘always-on’ nature of some applications may result in a traffic storm within the network.
CIoT devices, which are usually unmanaged, include things like CCTV cameras, intelligent speaker systems, and wearables. When sitting behind a mobile broadband or fixed line subscriber CPE, it can be difficult to identify such devices in the network as communication relationships are not clearly defined.
The problem is accentuated by the fact that many smart devices are built on inexpensive chipsets that provide the networking protocol stack and, occasionally, an application layer. Manufacturers often avoid providing patches and sometimes even wash their hands of all responsibility once the device ships. This can cause significant disruption. According to the latest Threat Intelligence Report by F5 Labs, Europe is already a hotspot for Thingbots, which are built exclusively from IoT devices and are fast becoming the cyberweapon delivery system of choice for ambitious botnet-building attackers.
F5 Labs reported 30.6 million global Thingbot attacks between 1 January and 30 June 2017 harnessing devices using Telnet, a network protocol providing a command line interface for communicating with a device. This represents a 280% increase from the previous reporting period of 1 July to 31 December 2016. Hosting providers represented 44% of the top 50 attacking IP addresses, with 56% stemming from ISP/telecom sources.
Despite the surge, attack activities do not equate to the size of key Thingbot culprits Mirai and Persirai. 93% of attacks during F5’s reporting period occurred in January and February, with activity declining from March to June. This could indicate that new attacks are on the horizon as attackers move from “recon” to “build only” phase.
Unfortunately, we will continue to see massive Thingbots being built until IoT manufacturers are forced to secure these devices, recall products, or bow to pressure from buyers who simply refuse to purchase such vulnerable devices.
Against this backdrop, service providers are challenged with not only identifying infection activities but also mitigating outbound DoS attacks.
Traditional Layer 3 and Layer 4 firewall rules are no longer much help on their own. Robust behavioral analysis of traffic is now essential. This way, security devices learn the “normal” network baseline over time. Once a deviation is detected, a variety of activities are initiated. These could include creating an alert, which would trigger a manual mitigation process after human verification, or creating a dynamic signature for existing mitigation technologies to block detected anomalies.
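A simple way to picture baseline learning is a rolling statistical profile of one traffic metric. The sketch below uses illustrative window and threshold values to flag an interval whose packet rate deviates far from the learned baseline, which is how an outbound DoS burst from compromised devices might first show up; real systems model many metrics per subscriber and device class.

```python
from collections import deque
import statistics

class TrafficBaseline:
    """Rolling baseline of a per-interval metric (e.g., outbound packets per minute)."""

    def __init__(self, window=60, threshold=4.0):
        self.history = deque(maxlen=window)
        self.threshold = threshold  # how many standard deviations counts as an anomaly

    def observe(self, value):
        if len(self.history) >= 10:
            mean = statistics.mean(self.history)
            stdev = statistics.pstdev(self.history) or 1.0
            if (value - mean) / stdev > self.threshold:
                print(f"ALERT: {value} pkts/min vs baseline ~{mean:.0f}")
        self.history.append(value)

monitor = TrafficBaseline()
for v in [120, 130, 118, 125, 122, 119, 127, 124, 121, 126, 123, 9000]:
    monitor.observe(v)   # the final burst (a possible outbound DoS) trips the alert
```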
Self-defending networks are integral to tomorrow's security architecture. In the meantime, responsible organizations can do their best to protect themselves by having a DDoS strategy in place, ensuring redundancy for critical services, and implementing credential stuffing solutions. It is also important to continually educate employees about the potential dangers of IoT devices and how to use them safely. | <urn:uuid:f129c9bb-bfe9-4cc0-b2f7-048070b5d27e> | CC-MAIN-2022-40 | https://www.f5.com/fr_fr/company/blog/how-iot-can-compromise-network-integrity | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337971.74/warc/CC-MAIN-20221007045521-20221007075521-00153.warc.gz | en | 0.948155 | 840 | 2.6875 | 3 |
The RSA key is currently one of the most popular asymmetric cryptographic algorithms with a public key used to securely exchange information on the Internet. The most important feature of this key is providing encryption security based on the difficulty of factoring large complex numbers.
The use of an asymmetric pair of a public and private key is very common, as it is simple to generate. In addition, it is a very safe algorithm, because the decomposition of a very large number n to obtain its prime factors p and q requires enormous computing power.
As I was writing about the potential of quantum computing and the current stage of development of this tech in my recent Medium post, I asked myself many questions about the superpowers that quantum computers would bring to the table.
Can quantum computers change our perception of the security of encrypted information transfer on the Internet? In other words: can one break the RSA key using a quantum computer?
“Hold my beer”, one might say, knowing that quantum computers are actually even closer than around the corner.
“Hold your horses”, is my answer, because even though close, they are not as close as we would like them to be.
Let’s find out
(And be warned — some math’s ahead! However, don’t worry — these are simple examples to illustrate RSA concept (hands-on approach)).
Modern computers are very good at multiplying virtually any numbers. However, if we have a very large number and want to break it down into the product of two prime numbers — this becomes a very complicated and time-consuming task. With commonly used 2048-bit keys, the time it takes to break such a large number into two prime numbers (hundreds of digits) is counted in billions of years.
Here’s the math, step by step:
1. Choose a large prime number p.
2. Choose a second large prime number q.
3. Compute the modulus n = p · q.
4. Compute φ(n) = (p − 1) · (q − 1).
5. Choose a public exponent e that has no common factors with φ(n).
6. Compute the private exponent d as the modular inverse of e modulo φ(n).
The public key is the pair (n, e) and the private key is (n, d). The security of the scheme rests on the fact that recovering p and q from a sufficiently large n is computationally infeasible.
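To make the steps concrete, here is a toy example in Python with deliberately tiny primes. Real keys use primes that are hundreds of digits long, which is exactly what makes factoring n impractical.

```python
# Toy RSA with tiny primes -- for illustration only, never for real security.
p, q = 61, 53                      # steps 1-2: two primes
n = p * q                          # step 3: n = 3233 (the public modulus)
phi = (p - 1) * (q - 1)            # step 4: phi(n) = 3120
e = 17                             # step 5: public exponent, coprime to phi(n)
d = pow(e, -1, phi)                # step 6: private exponent (modular inverse, Python 3.8+)

message = 65
ciphertext = pow(message, e, n)    # encrypt with the public key (n, e)
recovered = pow(ciphertext, d, n)  # decrypt with the private key (n, d)
print(ciphertext, recovered)       # 2790 65
```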
And here’s an example of an RSA-768 key that was broken in 2009:
RSA-768 = n-number from step 3: 1230186684530117755130494958384962720772853569595334792197322452151726400507263657518745202199786469389956474942774063845925192557326303453731548268507917026122142913461670429214311602221240479274737794080665351419597459856902143413
RSA-768 = n-number, as the product of two primes: 33478071698956898786044169848212690817704794983713768568912431388982883793878002287614711652531743087737814467999489 (p) * 36746043666799590428244633799627952632279158164343087642676032283815739666511279233373417143396810270092798736308917 (q)
25 years of calculations
Let’s try to find the key using a quantum computer. We know that a quantum computer’s power increases exponentially with the number of qubits: adding each qubit doubles performance. In addition, the quantum computer performs all calculations in parallel.
According to researchers’ estimates, a 2048-bit RSA key can be cracked using a quantum computer in 25 years.
The number of qubits needed to do this in 8 hours is … about 20 million (including additional qubits for superposition noise corrections — see below). Currently, the largest quantum computers operate (and that with problems) on 70 (yes, only seventy) qubits. Therefore, it is clear that the current level of technology is still nowhere near RSA-breaking point.
A quantum computer’s power doubles after adding each qubit. In theory, this is true, but in practice the matter is more complex. Quantum computers require for their operation an environment that will not generate interference. A qubit stays in superposition until it is measured. If a fault occurs, an involuntary measurement may follow. Such a random measurement will introduce noise into any measurements performed. To limit this level of erroneous readings and interference, quantum computers are cooled to around absolute zero, and very low pressure is maintained.
In addition, programming a quantum computer with microwave-length waves or light beams (i.e., setting the appropriate initial parameters) is not only problematic, but also time consuming. Despite the use of such sophisticated measures, maintaining qubit stability for a long time is not possible. “Coherence time” is the time of a quantum state’s stability, so that it can be read unambiguously. Current values of coherence time are already approaching the millisecond level. This is a significant improvement, because even a decade ago coherence time was measured in nanoseconds.
As if that was not enough, to obtain reliable results, individual tests must be carried out many times, so that the results can be given to statistical treatment. With the above in mind, the quantum computer’s power lies not only in the number of qubits, but also in its interference resistance. If there are a lot of disturbances, many more tests must be performed to obtain reliable results.
Therefore, it is also important to limit the level of interference. Only the optimization of both dimensions can take processing to a new level.
The turning point: still far away
Quantum computers will be creeping into our lives in the coming years. The technology is at its initial stage, and we can currently talk about their potential in the future rather than about current opportunities. However, keep in mind the words of Raymond Kurzweil, one of the currently most respected artificial intelligence and technological development gurus. He is an active promoter of the idea that human development is also based on exponential technological growth (we invent something, then use this invention to build another innovation, which will be used to accelerate development, etc.)
In one of his books, the hypothesis was put forward that humanity will experience more technological changes in 50 years (from 2000 to 2050) than in the previous 14,000 years of human history. In his opinion, we are heading towards the so-called technological singularity, a hypothetical point in the future development of our civilization, at which technological progress will become so rapid that all human predictions will become obsolete.
The turning point in this area would be the creation of an artificial intelligence which is intellectually superior to human intelligence. Such an artificial intelligence could develop even more efficient AIs, triggering an avalanche change in technology. One of the foundations for this development is the progress in the coming new generations of quantum computers, on which the new generation of AIs can be based.
However, as long as we don’t arrive at the technological singularity, it can be said with high probability that in the next 20 years our commonly used asymmetric cryptographic keys will remain secure.
If any readers are interested in directly programming a quantum computer, I recommend using the website provided by IBM: https://quantum-computing.ibm.com. There, free of charge, you can program one of their several-qubit quantum computers. This can be done by writing a program in Python, or — much more conveniently, at least at the beginning — using well-explained block diagrams. | <urn:uuid:5bba7701-c21f-46d0-8d85-0a0ae2fe6763> | CC-MAIN-2022-40 | https://candf.com/articles/rsa-key-can-quantum-computers-break-the-algorithm/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334515.14/warc/CC-MAIN-20220925070216-20220925100216-00353.warc.gz | en | 0.924339 | 1,534 | 3.453125 | 3 |
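For a first experiment, a minimal circuit sketch looks like the following. This assumes the Qiskit library is installed; the exact way you execute the circuit (local simulator or a real IBM backend) varies between Qiskit versions, so only the circuit construction is shown here.

```python
from qiskit import QuantumCircuit

# Two-qubit Bell state: the "hello world" of quantum programming.
qc = QuantumCircuit(2, 2)
qc.h(0)                       # put qubit 0 into superposition
qc.cx(0, 1)                   # entangle qubit 0 with qubit 1
qc.measure([0, 1], [0, 1])    # read both qubits into classical bits

print(qc.draw())  # inspect the circuit, then run it on the simulator or backend of your choice
```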
Using drones to help make our living and working spaces a little safer is not too much of a stretch.
It’s great news that people are finally starting to get vaccinated against the COVID-19 virus. I hope that the current rollout problems get worked out soon so that everyone can get access to this potentially life-saving vaccine. But we also need to temper our excitement some, because it will be a long time before things get back to normal, if they ever do. A recent National Geographic article makes the case that we will never fully defeat COVID-19, just like we never conquered the flu. And the virus is also mutating and continuing to spread, making its full destruction a moving target. Instead, the best we may achieve is to get the situation somewhat under control, and then learn to live with it.
It’s no surprise that learning to live with the virus in the United States will likely mean getting our amazing technology involved. As such, there is a big push to use drones and other manned and unmanned aerial vehicles to help combat infection rates. We have already seen drones working within the military and also embraced in public safety roles. And we have tapped them for planetary and space exploration. So using drones to help make our living and working spaces a little safer is not too much of a stretch.
In government, we are already starting to see some movement on these initiatives even though the idea of using drones to fight COVID-19 is still in its infancy. In Alabama, the state senate recently contracted with a company called Draganfly to use its robotics technology to detect potentially infected people entering government buildings and direct them to rapid COVID-19 testing if needed.
“As the current pandemic continues, we are committed to provide a safe place for our staff and visitors to ensure there is no interruption in the work that needs to be done for the citizens of Alabama,” said Pat Harris, Secretary, Alabama State Senate. “We are confident that the implementation of Draganfly’s Vital Intelligence Technology will help to ensure an important layer to existing protocols that assist us in identifying and mitigating the risk of the spread of COVID-19.”
In addition to monitoring people for signs of infection, the Draganfly drones can actively disinfect areas by flying over them and spraying a disinfectant. It’s been used at stadiums and other large venues to sterilize the area before an event.
Other companies are working on dedicated COVID-19 killing drones. Lucid Drone Technologies has one that includes an expanded battery for longer flight times. It’s able to clean 200,000 square feet per hour, which is at least 20 times faster than having a human walk around trying to wipe everything down. It’s probably a lot more accurate too, because the special nozzles guarantee even coverage over every surface.
Other companies are working on different disinfecting methods, such as using UV radiation to destroy the virus in indoor places like schools where spraying large amounts of liquid is not practical. The Aertos 120-UVC drone from Digital Aerolus has several UV-C light emitters that would give a human a nasty sunburn in just a few minutes, but hopefully would also be enough to kill the COVID-19 virus. With so many electronics onboard the Aertos drone, it’s no wonder that it only has about 10 minutes of flight time, though multiple units could be used to sterilize schools at night when nobody else was in the building. A human crew could also swap out batteries if needed.
While I am impressed with the virus-killing drones we have seen so far, the technology is still being actively developed. All of the drones that I have seen designed for this role so far require human pilots. So although they might save time compared with walking along with a bucket of chemicals and a squeegee, it’s far from an automatic process.
Advanced military drones already have access to a lot of artificial intelligence. It would not take too much effort to add some of those elements to civilian cleaning robots. Things like automatic navigation, pathfinding and the ability for the drone to remember its programmed route would be like force multipliers for a sterilization drone. You would probably also need to add sensors so that the drone can detect humans inside their cleaning zone to make sure that nobody gets accidentally sprayed, or burnt in the case of the UV light drones.
But this is a good start to an impressive effort. I’ve tried to get a hold of one of the sterilization drones for review, but so far the companies that make them tell me that every one of their drones are earmarked and sold at least through the end of the year. So it sounds like everyone should start seeing drones in this role soon. Let’s hope that they can make a difference as we try to figure out what normal life is going to look like and continue to discover ways to keep everyone as safe as possible.
John Breeden II is an award-winning journalist and reviewer with over 20 years of experience covering technology. He is the CEO of the Tech Writers Bureau, a group that creates technological thought leadership content for organizations of all sizes. Twitter: @LabGuys | <urn:uuid:79d1c77c-f475-42a8-98b5-dea17881743c> | CC-MAIN-2022-40 | https://www.nextgov.com/ideas/2021/01/can-robots-and-drones-help-fight-covid-19/171667/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335326.48/warc/CC-MAIN-20220929065206-20220929095206-00353.warc.gz | en | 0.958704 | 1,072 | 2.578125 | 3 |
Keys To Managing Your Data
Data has become a lot more important in our modern society. It is why many people consider data to be the new oil. It is as valuable as oil, and you can get many benefits from using data in that way. There are many software programs available today that can be used to manipulate data to make it useful for a variety of applications.
You can think of it the same way that petroleum is processed into many different products that we use in our modern-day life. One of the most important things we need to consider is how our software applications can keep up with the massive amounts of data being generated every day. You should consider how your software is working in this regard.
Managing Data the Right Way
It is crucial to have the right software to manage your data. Certain considerations need to be made to ensure that you are managing your data to be easy to store and scale. For example, materials management software would need to ensure that the data will be secure and could never be corrupted.
Data corruption is one of the biggest problems in this field, and it is something that software engineers always think about. You also need various software programs that can be used to simplify how you can scale your data. These things will need to happen if we are to have software that keeps up with data.
The way you manage your data can often be a huge problem for your organization. It is something that many people never seem to think about, but it has tremendous repercussions for the people who need to work with this data all the time. The future of this type of software will be effective data management and security.
Keeping Data Secure
Security is another important part of the process that people don’t think about. If you had a materials management software program that was not secure, you would be in a lot of trouble when hackers got your private information. These hackers would then be able to turn your private information against you, which most programs are vulnerable to.
It is crucial to assess your vulnerability to these programs to avoid ending up in a much worse situation. You have to always understand what is going on to not be at their mercy. Data security typically involves setting privacy policies that protect users. You also need to protect that data from hackers that could penetrate your systems.
Outside data security involves using encryption and other obfuscation techniques to ensure that no one can read the data. However, this technology keeps advancing and will eventually need to be overhauled to be safe and effective.
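As one illustration of encryption at rest, the sketch below uses the widely available Python "cryptography" package (an assumption about the environment, not a product recommendation) to show that ciphertext is useless to an attacker who does not hold the key.

```python
from cryptography.fernet import Fernet

key = Fernet.generate_key()          # store this in a secrets manager, never in source code
fernet = Fernet(key)

token = fernet.encrypt(b"customer_id=1842;card=4111111111111111")
print(token)                          # ciphertext is meaningless without the key
print(fernet.decrypt(token))          # original bytes come back for authorized readers
```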
Distributed Software That Prevents Data Corruption
Part of effective software data management is being able to create distributed programs that work on scalable data. The main way to ensure your software can keep up is to have code inside your software that manages data as it scales. Data scalability is becoming a massive component of modern software engineering, which will only grow in the future.
Scalability is so crucial in distributed applications because the risk of data corruption is so great. Data corruption can cause serious issues and can ruin an otherwise working program, so it needs to be considered throughout the design of the system.
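One common, low-tech defence against silent corruption is to record and re-verify checksums as data moves between nodes or storage tiers. The sketch below (the file name is hypothetical) computes a SHA-256 digest in chunks so it also works for large files.

```python
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream a file through SHA-256 so even multi-gigabyte files fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()

# Record the checksum when the file is written...
expected = sha256_of("inventory_snapshot.parquet")
# ...and verify it again after every transfer or on a schedule.
if sha256_of("inventory_snapshot.parquet") != expected:
    raise RuntimeError("inventory_snapshot.parquet failed its integrity check")
```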
Software That Generates Massive Amounts of Data
You should always have a way of working with data when your software generates a lot of it. It is another way to ensure that your software can keep up with the data being generated. It means having a system for allocating storage capacity and ensuring that it all goes the way you plan. When you can structure your software in this way, you are much more likely to get exceptional results.
Some inventory systems are like this, and it will require a lot of crucial information to get it to work correctly. These types of inventory systems will only get more vital as we move toward a digital world. Eventually, most deliveries will be done by computer systems, and it is something for you to think about. These systems will need software that can manage data effectively.
Data Generation in IoT Products
(Infographic Source: Matt Tuck)
Data management is also a massive problem in the new area of IoT. These devices generate massive amounts of data, and you can turn that data into valuable insights that can help your business grow. Most manufacturing companies today have some IoT devices inside their warehouses.
IoT devices are becoming an essential factor in big data and e-commerce. It is one of the many reasons why it is crucial to have software that can keep up with the data that these products are generating. If that can happen, it will be better for the new world where data is king.
Tools to Manage Inventory
Perhaps the biggest requirement for software that can scale with data will be in inventory management. Inventory management is one of the most significant portions of most large shops. It is also the key to companies like Amazon. It is crucial to understand how to make these software programs work with the data being generated, as it can profoundly impact the business.
Inventory management will only grow in complexity. We will start to see more automation in this field as time goes on. It means that every device will now start to generate massive amounts of information. For example, we will see how our materials management software program will start to need cloud storage capacity to work well. All of these things need to be considered in the long term.
Data Cleaning Software
The final frontier in software that can keep up with data will be data cleaning. Data cleaning is one of the most crucial components of modern software, and it will only get more important as time goes on. When it comes to software that can keep up with data, it is crucial that the software can also clean and organize the data to make it more useful.
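A tiny illustration of what cleaning and organizing can mean in practice, assuming the pandas library and an entirely made-up sensor export with the usual problems of duplicates, missing identifiers and gaps:

```python
import pandas as pd

# Hypothetical sensor export
df = pd.DataFrame({
    "device": ["a1", "a1", "a2", "a3", None],
    "reading": [20.5, 20.5, None, 19.8, 21.0],
})

cleaned = (
    df.drop_duplicates()                 # remove exact duplicate rows
      .dropna(subset=["device"])         # discard rows with no device identifier
      .assign(reading=lambda d: d["reading"].fillna(d["reading"].median()))
)
print(cleaned)
```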
By Gary Bernstein
Gary has written for several publications over the last 20 years with his primary focus on technology. He has contributed to sites such as Forbes, Mashable, TechCrunch and several others. | <urn:uuid:0c46de6d-a07a-4d07-b13e-8281cce531f0> | CC-MAIN-2022-40 | https://cloudtweaks.com/2021/07/7-ways-ensure-software-data/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030336978.73/warc/CC-MAIN-20221001230322-20221002020322-00353.warc.gz | en | 0.965526 | 1,204 | 2.515625 | 3 |
What is Exposed Password Screening
Exposed password screening is the process of checking currently used passwords against passwords that have been exposed in a publicly known data breach. Once these passwords are exposed, they are considered to be compromised passwords.
In 2017, the National Institute of Standards and Technology updated the NIST password guidelines, recommending for exposed password screening. Since then, companies and organizations are increasingly implementing compromised password screening as part of their cybersecurity policies.
Why Should We Screen for Exposed Passwords?
Despite their limitations, passwords are still the most common way of protecting our accounts from unauthorized access.
As data breaches have become more prevalent over time, cybersecurity guidelines have been updated in an attempt to make passwords better for protecting accounts and data. While password requirements do vary depending on the organization, the security fundamentals of passwords are largely consistent. For example, complexity rules like the enforcement of lowercase and uppercase letters, along with the inclusion of special characters and numbers are common. Forced password expirations are also common. Users are also regularly instructed to pick a unique password for each account they create.
These attempts to make passwords a more secure method of protecting our accounts can sometimes leave our accounts less secure. The truth is, these rules are great for people with perfect memory who remain alert and vigilant to cybersecurity issues and steadfastly follow security standards. In reality, very few of these people exist and most people have too many passwords that they need to remember.
According to password management software company, LastPass, the average business employee must keep track of 191 passwords.
When users are faced with keeping track of so many passwords and complying with password requirements, they tend to create ways to make their password management easier. They may pick easy-to-remember or common passwords. Or they choose similar passwords by only changing the password slightly for each account or each time they are prompted to make a new password. This is also the reason that password reuse is so alarmingly common and why so many people use a “root password” and just make slight changes to it.
“This behavior is an example of people following the “letter of the law versus the spirit of the law”.
Users are fully compliant with the password requirements and the computer is satisfied that the user has met the security standard. However, the “spirit of the law” has been abandoned in the process because a reused password that is only slightly different from the original password is less secure than a completely new password that doesn’t meet all of these rules.
Cybercriminals know that most people reuse passwords and/or will use a root password with a few variations. They exploit these lax user password habits and will try the password they found online or variations of it to gain unauthorized access to accounts.
When you consider that 81% of company data breaches are due to poor passwords, it’s plain to see how our increasing focus on password complexity simply and forced password resets are not always enough. This doesn’t mean we should do away with all password requirements, but we need to enhance our approach. With the recent advancements, it’s now possible to collect, store, and utilize huge databases worth of bad and exposed passwords to make our personal accounts and workplaces more secure.
This is where exposed password screening comes in.
Bad actors utilize databases of exposed passwords to conduct brute force attacks or credential stuffing attacks. Organizations can use similar databases to screen for exposed passwords and alert users to their leaked passwords. This type of password monitoring is both highly effective and encourages the use of more secure passwords. A user may be able to keep a secure password indefinitely if it is never exposed, and this encourages better password hygiene. | <urn:uuid:c237e634-201d-418b-9850-0ceb0bb27ae0> | CC-MAIN-2022-40 | https://www.enzoic.com/exposed-password-screening/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030336978.73/warc/CC-MAIN-20221001230322-20221002020322-00353.warc.gz | en | 0.951505 | 757 | 3.1875 | 3 |
State & Local Government
Cybercriminals use compromised credentials to
attack cities, municipalities, and states
Protect your community from cybersecurity threats by securing your passwords.
It isn’t easy to maintain public trust and secure assets when users continue to select compromised passwords. The door is open to password attacks that result in data breaches and ransomware. That’s why NIST overhauled password rules to eliminate complexity rules and frequent password resets. NIST’s modern approach to passwords makes it easier for both the IT department and the user community..
Enzoic is a great tool that ensures password security without needing any additional employee training or adding an administrative burden on IT.
The City of Keizer Enhances Cybersecurity by Eliminating Compromised PasswordsDecember 28, 2021
The City of Prescott Utilizes Automated Password Security to Protect Employees from ATONovember 10, 2021
The City of Paso Robles Taps Enzoic for Password Peace of MindSeptember 29, 2021
Cyberattacks on Municipalities & How to Defend Against ThemMarch 29, 2021 | <urn:uuid:d8c4673b-fb0d-4685-861a-fcb636142989> | CC-MAIN-2022-40 | https://www.enzoic.com/government/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030336978.73/warc/CC-MAIN-20221001230322-20221002020322-00353.warc.gz | en | 0.876287 | 222 | 2.515625 | 3 |
The Right Way to Better Appearance
Every day, employees print documents, and in the process they utilize conversions without even noticing it. It’s not uncommon for a document to arrive at the printer with incorrect characters in the text or to otherwise print differently than expected. Users often wonder how and why that happens.
What is a “conversion“ anyway, and why do documents have to be transformed from one format to another? Here’s an example everyone can relate to: an MS-Word document needs to be converted into PDF format so that the recipient cannot make changes to the contents. In this case, it’s obvious why conversions are necessary. Things are less straightforward in the business world, of course. For example, conversions are often needed to transfer large documents to a print outsourcer. As part of the process, the original document may need to be converted to AFP format for printing on high-speed print devices.
So far, so good. But how can this lead to printing errors? Some common problems are missing or incorrect fonts, or use of fonts that are not supported by the target print device(s). This can lead to a modified typeface, for example overly large spaces between words or characters. Sometimes certain characters do not print correctly at all. In their place, you may find strange symbols or markings, such as an unexpected“smiley“ that seems to appear out of nowhere.
By “font“ we mean the electronic counterpart to what used to be a physical typeface consisting of individual letters created from cast metal slugs. A font is therefore an electronic typeface, which is saved in digital form. These fonts can be embedded in the actual print data stream so that, for example, a document is still able to be read after a decade or more. Fonts can also be stored externally; in this case, the data steam contains font references that tell the printer which fonts are needed to correctly render the output. Ideally, these are already stored on the printer or can be quickly loaded. Otherwise, we’re back to square one with missing characters and printing problems.
From my vantage point on the Support hotline, I see a lot of problems or disruptions due to missing fonts in conjunction with LRS Transforms. LRS Transforms are optional modules for that seamlessly integrate with our VPSX software for a more powerful and comprehensive output solution. When utilizing these transforms, you can select the“relaxed“ command line option to force a data conversion. But the “relaxed“ option is really meant for problem analysis and not for production use. You can spot instances of missing fonts in the VPSX log, for example, from messages like the following:
PDI2005W Font 'Courier' (weight 'BOLD' style 'UPRIGHT' replaced with 'Default font')
PDI2005W Font 'Courier' (weight 'MEDIUM' style 'UPRIGHT' replaced with 'Default font')
The log messages usually indicate that the errors are caused by fonts, but do not always indicate which fonts are at fault. Using the command line option “–fontentries fonts.xml“ causes the file “Fonts.xml“ to be created, which contains information about the fonts used by the input file. This file, or rather the contents of this file, can be used for mffXXX-profile data for font definitions in the directory. If the input data contains only font references (and no actual fonts), then fonts can be integrated into the data stream or substituted with other fonts. In the mffxxx profile data, the fonts used in the section must be declared in the path specified by the conversion.
Using this method, it’s relatively easy to solve font problems... assuming you have the correct fonts at your disposal. If not? Aside from the technical considerations, you must consider the legal ones. Simply using fonts found on the Internet or random fonts that were shipped with some other system may not be quite legal. It’s something worth double-checking in any event.
The simplest option would be to utilize a “worry-free solution“ that provides a set of appropriate fonts regardless of the target printer, source application, or system. One that addresses legal usage concerns and that significantly automates the output process.
You’re probably thinking that such a thing is nearly impossible. But it does exist. Give us a call; we have the fonts, the software, and the know-how to provide a “worry free solution“ for your printing challenges. | <urn:uuid:b85c48b6-9371-423e-a336-be98881a4d5c> | CC-MAIN-2022-40 | https://www.lrsoutputmanagement.com/blog/post/the-right-way-to-better-appearance/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030336978.73/warc/CC-MAIN-20221001230322-20221002020322-00353.warc.gz | en | 0.890267 | 954 | 2.71875 | 3 |
Microsoft Edge and Google Chrome, both chromium-based browsers, are set to receive and update, Intel CET, which will prevent a diverse range of vulnerabilities.
Intel first introduced its Control-flow Enforcement Technology (CET) back in 2016. In 2020, Intel introduced the same feature to its Intel’s 11th generation CPU.
The CET feature is developed for protecting software programs from Return Oriented Programming (ROP) and Jump Oriented Programming (JOP) attacks that alter an application’s normal flow. This modification results in successfully executing the malicious code that is placed by cyber attackers. | <urn:uuid:dcc5f597-39d1-45fb-94fa-e8473314e7ea> | CC-MAIN-2022-40 | https://itsecuritywire.com/quick-bytes/microsoft-edge-google-chrome-set-to-receive-this-intel-security-feature/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337516.13/warc/CC-MAIN-20221004152839-20221004182839-00353.warc.gz | en | 0.90266 | 124 | 2.515625 | 3 |
Share Blog Post
- Researchers at MIT have come up with a new chip that is hardwired to perform public-key encryption. The chip is highly energy efficient as it consumes only 1/400 as much power as software execution of same protocols would require. Furthermore, the chip uses about 1/10 as much memory and executes 500 times faster. The researchers have described the technique used in the chip as ‘elliptic-curve encryption’ that relies on a type of mathematical function called an elliptic curve.
- Quantum physics is gaining much importance in cybersecurity because of stronger security features it can help create. An Australian cyber security company is using quantum physics to create stronger data security tools. The entire concept focuses on the concept of quantum tunneling, an intriguing property in diodes, that paves way for the creation of stronger encryption keys. As per classical mechanics, Quantum tunneling is a phenomenon as per which a particle is able to cross a barrier that technically it should not be able to do.
- The National Institute of Standards and Technology (NIST) has published the 'Attribute Metadata: a Proposed Schema for Evaluating Federated Attributes' in order to provide the basis for the evolution of a standardized approach to entity attributes. This is an internal report that can be used by public and private organizations. The report will not be imposed on the federal agencies. The purpose is to allow a system that uses federated IAM to better understand and trust different attributes; to apply more granular and effective access authorizations, and to promote the federation of attributes.
- Tesla fell victim to the hackers this week when it came to be known that their cloud environment was exploited by hackers to mine cryptocurrencies. Security researchers reported the discovery of an unprotected Kubernetes console that belongs to Tesla. The console is used to automate the deployment, and for scaling and operating application containers and virtualized software among others. The researchers discovered that hackers have deployed mining scripts on Tesla’s unsecured Kubernetes instances to perform cryptojacking.
- Hackers stole away the personal details of at least 685,000 registered forum users of Hardware Zone. As per statistics, this breach is the largest breach in Singapore to date. Although the hacking took place in September 2017, it was discovered only this week when security researchers discovered suspicious posting from a senior moderator’s account that was found to be compromised by an unknown hacker.
- California seems to be on hackers radar. Now in a new breach, the hackers have stolen the personal data of thousands of state employees and contractors from the Department of Fish and Wildlife. However, what is different about this breach is that the data was stolen by an insider (former employee) who downloaded it to an unencrypted personal device and took the data outside the department’s perimeter. As of now, the threat actor has not been named by the department.
- The Russian Central Bank came with an astonishing revelation this week when it disclosed that unknown hackers had stolen $6 million from a Russian bank last year. The hackers had compromised SWIFT international payments messaging system. Although the bank did not provide much inside details of the hack but it did mention that the hackers employed a ‘common scheme’ to compromise SWIFT and steal the money.
- The infamous Coldroot Remote Access Trojan is still found to be undetectable by popular antivirus engines. It would be essential to mention that the trojan code was uploaded and made freely available on GitHub for around 2 years. Initially, the trojan was created to target the Mac users and fill in the void of a RAT targeting Macs but since then it has expanded its domain to cover Linux and Windows also.
- Researchers have identified a multi-stage infection attack that deploys malware for stealing passwords from applications installed on the targeted computer. The attack is initiated through spam emails that are delivered via Necurs botnet. The botnet delivers macro-enabled documents including Word, Excel and PowerPoint documents. In the campaign, researchers found out that DOCX attachments containing en embedded OLE objects having external referenced were used.
- The famous peer-to-peer apps BitTorrent and uTorrent have been found vulnerable to hijacking flaws. A security researcher unearthed a number of DNS rebinding exploits in the Windows versions of the software. The bugs allow the hackers to resolve web domains to the user’s computer thereby providing the keys to the kingdom. The hackers are able to execute remote code, download malware to Windows startup folder, take hold of downloaded files and scan your download history. The bug impacts all the unpatched versions of the software.
- Researchers have unearthed new spam campaigns impacting a number of websites including the Bitcoin cryptocurrency. The spam campaign starts with the injection of a malicious script into different Joomla, WordPress and jBoss websites. The purpose is to create a binary file that is achieved by hiding the unwanted script on the embedded site. Once the binary file is created, the hackers misuse the PC’s CPU to access user’s computers to mine Bitcoin.
Posted on: February 23, 2018
More from Cyware
Stay updated on the security threat landscape and technology innovations at Cyware with our threat intelligence briefings and blogs.
Explore Industry Briefs
Cyware for Enterprise
Adopt next-gen security with threat intelligence analysis, security automation...
Cyware for ISACs/ISAOs
Anticipate, prevent, and respond to threats through bi-directional threat in... | <urn:uuid:5490c242-4c78-4ccb-842c-add2f3095a0b> | CC-MAIN-2022-40 | https://cyware.com/weekly-threat-briefing/cyware-weekly-cyber-threat-intelligence-february-19-2018-february-23-2018-7c78/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337680.35/warc/CC-MAIN-20221005234659-20221006024659-00353.warc.gz | en | 0.940378 | 1,133 | 2.828125 | 3 |
Alan Turing is famous for several reasons, one of which is that he cracked the Nazis' seemingly unbreakable Enigma machine code during World War II. Later in life, Turing also devised what would become known as the Turing test for determining whether a computer was "intelligent" — what we would now call artificial intelligence (AI). Turing believed that if a person couldn't tell the difference between a computer and a human in a conversation, then that computer was displaying AI.
AI and information security have been intertwined practically since the birth of the modern computer in the mid-20th century. For today's enterprises, the relationship can generally be broken down into three categories: incident detection, incident response, and situational awareness — i.e., helping a business understand its vulnerabilities before an incident occurs. IT infrastructure has grown so complex since Turing's era that it can be months before personnel notice an intrusion.
Current iterations of computer learning have yielded promising results. Chronicle, which was recently launched by Google's parent company, Alphabet, allows companies to tap its enormous processing power and advanced machine learning capabilities to scan IT infrastructure for unauthorized activity. AI² quickly learns how to differentiate true attacks from merely unusual activity, alleviating a vexing problem for IT security teams: false positives. There are numerous other examples of AI-based solutions, such as Palo Alto Networks' Magnifier, which uses machine learning to automate incident response, utilizing another strength of AI: speed.
These advances arrive at an opportune moment because the risks from cybercrime are rapidly growing; estimates of the cost worldwide is about $600 billion annually. The average cost of a data breach is estimated at $1.3 million for enterprises and $117,000 for small businesses, and companies are taking note. According to ESG research, 12% of enterprise organizations have already deployed AI-based security analytics extensively, and 27% have deployed AI-based security analytics on a limited basis.
Moreover, cybersecurity in the years ahead will be increasingly challenging. Enterprises and computers are relatively static and well-defined at present, but securing information amid the Internet of Things, in which almost every device will be programmable and therefore hackable, is going to be far harder. Soon, we won't just have to safeguard unseen servers anymore but also our cars and household devices.
Unfortunately, AI has also become available to hackers as well. Dark Web developments to date merit serious discussion, such as machine learning that gets better and better at phishing — tricking people into opening imposter messages in order to hack them. Further down the road, machines could take impersonation one step further by learning how to build fake images. Experts are also worried AI-based hacking programs might reroute or even crash self-piloting vehicles, such as delivery drones.
I suspect that in the future, users on the front end will be blissfully unaware that behind the scenes battles between good and bad learning machines rage, with each side continually innovating to outsmart the other. Already, the synthesis of AI and cybersecurity has yielded fascinating results, and there is no doubt we are only at the beginning. I am reminded of a quote by Dr. Turing:"We can only see a short distance ahead, but we can see plenty there that needs to be done."
- 5 Questions to Ask about Machine Learning
- 5 Things Security Pros Need To Know About Machine Learning
- The Double-Edged Sword of Artificial Intelligence in Security
- Why Artificial Intelligence Is Not a Silver Bullet for Cybersecurity
Learn from the industry's most knowledgeable CISOs and IT security experts in a setting that is conducive to interaction and conversation. Early-bird rate ends August 31. Click for more info. | <urn:uuid:f553d1aa-2d45-4989-af8b-d1d658c6069b> | CC-MAIN-2022-40 | https://www.darkreading.com/endpoint/the-enigma-of-ai-cybersecurity | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337680.35/warc/CC-MAIN-20221005234659-20221006024659-00353.warc.gz | en | 0.953645 | 754 | 3.28125 | 3 |
With schools heading back to school, IT and technology leaders across the country are facing some of the most challenging circumstances given the disruptions caused by the COVID-19 outbreak.
“We’re about to go back-to-school, with a choose-your-own attendance model, and it’s only me and one other IT guy. It’s madness,” reported one technology coordinator of a suburban-Chicago school district.
Plus, many parents and school staff are pushing back on reopening plans, leading to a great deal of uncertainty. With so many unknowns, IT leaders are forced to prepare for multiple contingency plans.
We’ll outline the most common reopening education models and the impact it has on technology, as well as some steps IT can take to gain control in a situation steeped in variables and last-minute changes.
Back-to-School in 2020: The COVID-19 Conundrum
The outbreak of coronavirus disrupted the end of the 2019-2020 school year, and technology leaders are still playing catch-up.
Schools across the U.S. moved remote to reduce the spread of COVID-19 beginning in March and April of 2020. Initially, government education and health officials indicated the potential return to school — in-person — in the Fall.
Yet the U.S. continues to see rising case numbers and with the autumn semester about to begin, many education leaders across departments, grade bands, and roles are struggling to make plans. The continuing changes, both in government recommendations, disease data and medical understanding, are challenging for technologists.
Some of the variables impacting reopening plans include:
Disease spread information: Data about how the coronavirus is transmitted has led medical teams to identify specific activities and behaviors that lead to increased transmission. Tight gatherings, close contact, and activities that involve heavy breathing – like singing – greatly increase the method of transmission.
Child & teen vector info: The data around how children are both impacted and involved in the transmission of COVID-19 has changed drastically over the last few months. School reopening plans have been significantly impacted by these minute-by-minute learnings.
Community demands: Parents, teachers, and students have their own views about the current virus crisis. The politicization surrounding the virus has increased the tension and sense of what best practices entail. In some areas, teachers have held sick-strikes, protesting what some view as a risk reopening.
In addition, some schools are offering a choose-your-own-adventure style entrance to the school year – or requiring 1:1 – all without including the IT team in planning.
Government recommendations: Federal and state governments have delivered mixed messages regarding the spread of COVID-19 and the risk of reopening schools, adding yet another layer of complexity for educators and IT leaders looking to make an effective 2020-2021 reopening plan.
The number of stakeholders and factors, as well as the variety of reopening education models, means technology services will be impacted.
Here are the three primary reopening plans, the impact on technology services, and what IT can do.
Education Models & Back-to-School Technology
While the nuances around reopening plans vary tremendously, they fall into three primary categories: full-time virtual or remote, part-time in-person and virtual or a hybrid, blended learning approach, and a full-time in-person model.
100% Virtual Model
The 100% virtual or remote model for reopening has the highest direct impact on IT; however, the decisive and all-in nature of this model also means that once the initial planning is complete, IT will have a consistent, routine approach to technology management.
Description: In a full-time remote or 100% virtual education model, all students and staff operate completely off-site in home or similar environments. They may use a school-issued or personal computing device.
IT Impact: Security, collaboration tools, and network connectivity become top-of-mind.
IT Next Steps: Ensure you have adequate security controls in place, include multi-factor authentication and email security.
Apply the same security diligence to your collaboration platform. We recommend Cisco Webex Teams, which is the most secure elearning platform.
Finally, help families and educators with clear, easy-to-understand support for a connected, speedy experience.
The hybrid model for reopening schools is favored by school districts with smaller populations or multiple stakeholders. While there are some risks, many districts feel this is a safe middle-ground choice for back-to-school in 2020.
Description: Hybrid models for reopening schools in the Fall are the most complex and varied. The specific logistics of a hybrid model may vary tremendously; some schools may have in-person learning half-day or split among segmented groups of students, while others may operate two days in-person and three days at-home.
IT Impact: For IT, this situation poses the most complexity. IT leaders need to ensure the technological performance of both in-person and at-home learning with technology. This might mean addressing device access, network performance, overall security, and simple troubleshooting.
IT Next Steps: Define the exact operational model under which you are working and identify relevant technology needs for each environment: in-school and at-home.
Identify what solutions already exist and map those to each environment, noting gaps in either procedures or applications.
Prioritize gaps and work with other district teams to procure the resources needed.
100% In-Person Model
A 100% in-person, regular back-to-school model for reopening is the easiest from a technological standpoint. Challenges around social distancing, device sanitation, and
Description: In a 100% in-person reopening model, most students return on-site to school for learning. This model most closely resembles pre-pandemic education models; however, the nature of the COVID-19 virus means even in a 100% in-person model, some families may opt-out, instead choosing to remain at home.
IT Impact: IT will be more focused on how technology enables social distancing in a learning environment and helps minimize disease spread.
IT Next Steps: Focus on the in-person environment first, performing your regular assessment and maintenance to ensure operational success.
Then, identify any emerging policies or needs for both the in-person and likely growing at-home learners, determining whether current resources or tools can meet them.
Finally, procure any additional resources needed to support both teachers and students during the ongoing and evolving challenges.
Challenges & Expectations For Back-to-School 2020
Regardless of the back-to-school education model employed by your district, the upcoming changes and shifting expectations will remain a challenge for everyone: staff, teachers, parents, and students.
The best steps IT can take at this juncture is controlling what they can: acquiring new devices, helping families and students navigate with clarity and transparency the multitude of possible scenarios, and adding a decisive, loud voice to the conversation.
Need help managing back-to-school technology for 2020 and the various education models? Reach out to one of our EdTech specialists at Mindsight for advice and recommendations.
See what we can do for you. Contact us today.
Like what you read?
Mindsight is industry recognized for delivering secure IT solutions and thought leadership that address your infrastructure and communications needs. Our engineers are expert level only – and they’re known as the most respected and valued engineering team based in Chicago, serving emerging to enterprise organizations around the globe. That’s why clients trust Mindsight as an extension of their IT team.
Visit us at http://www.gomindsight.com.
About The Authors
Eric White is Chief Technology Officer and VP of Consulting Services at Mindsight. With over ten years of experience in information technology and leadership, Eric excels at implementing network and data center technologies, designing high-yield solutions for the business. Holding professional certifications from Microsoft, VMware, and EMC, as well as the Cisco CCNP, Eric is an expert at solving business realities with a client-centric focus that delivers.
Siobhan Climer writes about technology trends in education, healthcare, and business. With over a decade of experience communicating complex concepts around everything from cybersecurity to neuroscience, Siobhan is an expert at breaking down technical and scientific principles so that everyone takes away valuable insights. When she’s not writing tech, she’s reading and writing fantasy, hiking, and exploring the world with her twin daughters. Find her on twitter @techtalksio. | <urn:uuid:f3071d4b-83e5-4ffc-b009-e1d151e34426> | CC-MAIN-2022-40 | https://gomindsight.com/insights/blog/back-to-school-technology-covid-19/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334528.24/warc/CC-MAIN-20220925101046-20220925131046-00553.warc.gz | en | 0.942036 | 1,796 | 2.625 | 3 |
How Secure Is Your Smartphone from Hackers?
Most have heard about the Central Intelligence Agency (CIA) hacking initiative and that the CIA has a mobile device hacking group who worked on breaching Apple’s iOS devices and Google’s Android phones. The CIA developed an arsenal of tools to infect and extract data from phones. The hacked data included location data, audio recordings, and text messages. They could also access a phone’s camera and microphone and had developed a few “zero day” exploits developed
What is a “Zero Day” Exploit?
A zero-day exploit is a term for coding flaws, unknown to the manufacturer or organization, used by hackers to access and control hardware, apps, or an IT system.
Mobile devices give internet users the ability to stay connected, work, shop, and socialize on the go. But endless connectivity also opens up unprotected devices and risky behaviors to intrusion by hackers and other cyber security threats. Many smartphone owners use a screen lock or other security features, but they don’t necessarily stay current with device updates. Still others are willing to leave a device unattended at a public charging station where anyone could grab it.
It is considered a best practice to keep a mobile devices safe and secure especially if you have any banking apps or use the device to make payments. Some of the best smartphone safety safeguards are free and easy to use.
Use Security Features to Lock Your Phone
Most smartphones come with security features that control who can unlock and use a device. The most basic feature is the use of a passcode. This can be a numeric code or a finger swipe pattern. Newer phones have biometric security controls that only unlock the phone for its registered user. Many smartphone apps can access the device’s fingerprints or biometric authentication and use that to secure the app as well.
The percentage of users using mobile devices, rather than desktop devices, to access the internet and shop online increases every year. Although it’s a minority, still 28% of smartphone owners do not use a screen lock or other device security features to control access to their phones. This includes biometric scans like facial recognition, fingerprints, or iris scans. Smartphones may have banking apps on them as well as stored payment information so controlling access with some sort of authentication is increasingly important.
With phone security locks, individuals are protected from intrusive federal searches. Previously, US courts had granted law enforcement authorities the power to force people to unlock devices using their own biometric scans, but not with passcodes. That changed earlier this year with a ruling from the US District Court for the Northern District of California. The new precedent is that government officials cannot compel anyone to incriminate themselves by using any of the security features on a phone including passcodes, facial recognition scans, fingerprint logins, or iris scans to unlock mobile devices.
Avoid Public WiFi Networks
Public WiFi connections such as those found at hotels, campus, coffee shops, retail shops, restaurants, and airports put private user data and banking information at risk. The majority of internet users (54%) of internet users use public WiFi networks while away from their home of office networks. This statistic is even more concerning when you consider that 21% are shopping online and 20% are handling their online banking!
Other seemingly innocent activities are not safe either. Logging into social media accounts and reading email using public WiFi is also an unsafe practice. Travelers are in a hurry to update loved ones or touch base with work as soon as they arrive at their airport or destination. Workers are under pressure to reconnect to thier office within minutes of landing and may do so using the first convenient WiFi network.
Logging into social media gives anyone using the same WiFi connection in an airport or cafe the ability the capture your username and password to whatever you log into. If you stop to post on Instagram, a hacker can sniff your email address and password. Check your email and the hacker has those credentials too. This is a very good start to launch a social engineering attack that leads to your bank or credit card account.
Do not use public WiFi for anything but web browsing, and only when you are not logged into the web browser. Google Chrome easily tracks users across all their apps by maintaining a single login. If you must connect to public WiFi for anything sensitive, then do so with a virtual private network (VPN).
Use a Virtual Private Network (VPN)
A VPN protects connected device’s data with encryption technology and offer privacy. VPNs allow users to securely access apps and web sites even when they are not on a secure network like free public WiFi connections.
A VPN protects your privacy online by using an encrypted network on its own private servers over any internet connection. A smartphone owner can download free and premium VPN apps to protect their personal and financial data online.
How Does a VPN Work?
When you connect to a VPN all data you send and receive is encrypted by the VPN app. A VPN can be used to alter your geographical location as it appears over the internet making them a tool for hackers. However, changing your location can also help you access web content – for example, religion – that is restricted in countries.
Keep Your Phone Updated
Some users forgo updating their phones altogether with 15% of smartphone owners reporting that they never update their phone’s operating system. While 10% never update their apps. Regularly updating your phone’s apps and operating system is one of the easiest steps you can take to secure your smartphone. It can also be one of the most maddening when your service provider pushes an update with new, unwanted features. The majority of smartphone do update their however, 40% only update their phone when it is convenient for them.
I’m guilty of this one but I have good reasons. I won’t take an update if I’m not on my home WiFi connection. I’ll also push off an update until nighttime. Unfortunately I routinely put it off for days trying to postpone some supposedly needed update that ends up making my phone harder to use. Whenever you take an update, be sure to revisit your privacy settings to ensure they have not been altered in anyway. Many times, I’ve had to turn down the data usage, and shut off location sharing after an update helped itself to my settings. If you take an update and notice a jump in battery consumption, then then data sharing is likely to blame.
Turn Off Location Tracking
Location tracking is used by hardware and app developers to track the physical location of a mobile device. Much of this tracking is conducted under the guise of improving user experience and making better devices. However, as we have seen with the string of Facebook scandals, location tracking is part of a broad tracking campaign to compile data on everything a person does with their smartphone. App developers and hardware manufactures track everything across devices from where you live, how long you spend at work, and of high interest is how much you spend on online shopping. This data is sold, in aggregate, for use by brands to market to internet users. However, the depth of tracking is not apparent to those being tracking mercilessly via their own phones.
Google tracks users across all of it properties including Google Chrome, YouTube, Google Maps, Gmail, and Photos. Maps compiles all your whereabouts in a creepy feature known as your Timeline. Facebook tracks its users across Facebook, Instagram, Messenger, and WhatsApp. Tech giants have compiled massive databases on user lifestyles, behavior, and personal information.
One of the best ways to stop tracking is to turn off location tracking. Shutting off location is trickier than it sounds. First the location tracking must be turned off at the device level. Then each app that uses location data must also be denied permission to access your device location. There are tradeoffs to disabling location tracking. For example, Google amps won’t be able to deliver driving directions as well. Be sure to disable location tracking on all apps, even if you don’t think the app is tracking you. Check to see what permission it has on your device. | <urn:uuid:e5c67ca4-8ca1-48ac-a44e-5987b5d546bd> | CC-MAIN-2022-40 | https://www.askcybersecurity.com/secure-smartphone/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337244.17/warc/CC-MAIN-20221002021540-20221002051540-00553.warc.gz | en | 0.936947 | 1,661 | 2.53125 | 3 |
The Bradley County Schools system has implemented a new program to help keep children safe.
TIPS: Threat Assessment, Incident Management and Prevention Services, an award‐winning web‐based risk
management and incident reporting platform from Awareity. The TIPS platform is successfully being used in
multiple school districts across the United States to identify, prevent and effectively intervene in threatening
TIPS provides an avenue for all students, parents, faculty, staff and community members to safely report
disconcerting behaviors, suspicious incidents, or general safety/security concerns to school staff and the school
resource officer, or SRO. Reporting may be made anonymously or openly. Concerns which may be reported include, but are not limited to, bullying, cyber‐bullying, weapons, drug/alcohol use, vandalism of school property, threats of violence, suicide risk, sexual harassment, abuse and truancy.
“TIPS is a tool to be utilized by school administrators and law enforcement to increase awareness of student safety
and concerning behaviors within our schools,” said Scotty Hernandez, safety and security coordinator. “This tool
has the potential to detect, deter, and disrupt unwanted behavior or criminal activity.”
Funding for TIPS in Bradley County Schools is through a Safe Schools grant. “We strive to provide a safe learning environment for all Bradley County students,” Bradley County Schools Director Johnny McDaniel said.
TIPS does not take the place of emergency police services, but it does provide all stakeholders involved in Bradley
County Schools with another avenue to deter or disrupt unacceptable behaviors or illegal activities.
“If someone has information about concerning behaviors or suspicious activities that could potentially jeopardize the safety and security of students, faculty, or staff, the individual can access TIPS from the Bradley County Schools’ website and report that information,” Hernandez said.
Reports made through TIPS will be reviewed by administrators at the particular school and by the SRO, if
warranted. Reports can also be shared with SROs at other school locations in the event of bullying between students at different schools, harassment on the bus, etc. Since its implementation, the SROs have taken
advantage of TIPS to keep track of over 400 reports, ranging from daily log activities, custody issues, and juvenile citations to teaching “DARE”; and making arrests. TIPS makes it simple to see and track who has done what
regarding any particular incident.
TIPS allows school administrators and safety team members to investigate and coordinate actions and securely
share information during investigation, intervention, and prevention efforts. All actions are easily documented,
and team members can easily see all incidents for their school, review individual incident reports, and search or
review related incidents in real‐time.
The system can be accessed through the school’s website by visiting http://www.bradley schools.org and clicking
on the TIPS: Report Incident logo. Specific student or school safety concerns can also be reported by calling 918‐
Awareity helps leading organizations prevent the preventable and transform the status quo. Awareity is
reinventing the way schools improve safety and helping organizations prevent regulatory failures, compliance
fines, lawsuits, privacy breaches, safety disconnects, operational challenges, ethical lapses, incident reporting
failures, workplace violence and more. Awareity offers an innovative and cost‐effective prevention platform to
connect the dots, eliminate embarrassing gaps and realize a better bottom line. | <urn:uuid:fa2a3e86-e710-470f-9eee-d3efbe9aaea0> | CC-MAIN-2022-40 | https://www.awareity.com/2013/11/17/bradley-county-schools-launch-tips-safety-platform/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337415.12/warc/CC-MAIN-20221003101805-20221003131805-00553.warc.gz | en | 0.928949 | 768 | 2.515625 | 3 |
What is cyber security?
Cyber Security is the practice of protecting critical systems and sensitive information from digital attacks. Also known as information technology (IT) security, cyber security measures are designed to combat threats against networked systems and applications, whether those threats originate from inside or outside of an organization.
In 2020, the average cost of a data breach was USD 3.86 million globally, and USD 8.64 million in the United States. These costs include the expenses of discovering and responding to the breach, the cost of downtime and lost revenue, and the long-term reputational damage to a business and its brand. Cybercriminals target customers’ personally identifiable information (PII) — names, addresses, national identification numbers (e.g., Social Security number in the US, fiscal codes in Italy), and credit card information — and then sell these records in underground digital marketplaces. Compromised PII often leads to a loss of customer trust, the imposition of regulatory fines, and even legal action.
Security system complexity, created by disparate technologies and a lack of in-house expertise, can amplify these costs. But organizations with a comprehensive cyber security strategy, governed by best practices and automated using advanced analytics, artificial intelligence (AI) and machine learning, can fight cyberthreats more effectively and reduce the lifecycle and impact of breaches when they occur.
Cybersecurity terms and domains
A strong cyber security strategy has layers of protection to defend against cyber crime, including cyber attacks that attempt to access, change, or destroy data; extort money from your users or the organization; or aim to disrupt normal business operations.
Countermeasures should address:
Critical infrastructure security
Practices for protecting the computer systems, networks, and other assets that society relies upon for national security, economic health, and/or public safety. The National Institute of Standards and Technology (NIST) has created a cyber security framework to help organizations in this area, while the U.S. Department of Homeland Security (DHS) provides additional guidance.
Security measures for protecting a computer network from intruders, including both wired and wireless (Wi-Fi) connections.
Processes that help protect applications operating on-premises and in the cloud. Security should be built into applications at the design stage, with considerations for how data is handled, user authentication, etc.
Specifically, true confidential computing that encrypts cloud data at rest (in storage), in motion (as it travels to, from and within the cloud) and in use (during processing) to support customer privacy, business requirements and regulatory compliance standards.
Data protection measures, such as the General Data Protection Regulation or GDPR, that secure your most sensitive data from unauthorized access, exposure, or theft.
Building security awareness across the organization to strengthen endpoint security. For example, users can be trained to delete suspicious email attachments, avoid using unknown USB devices, etc.
Disaster recovery/business continuity planning
Tools and procedures for responding to unplanned events, such as natural disasters, power outages, or cyber security incidents, with minimal disruption to key operations.
Storage security – any platform that stores critical data should deliver rock solid data resilience with numerous safeguards. This may include encryption and immutable, isolated data copies. These remain in the same pool so they can quickly be restored to support recovery, minimizing the impact of a cyber attack.
Enables you or your provider to manage and secure your mobile workforce with app security, container app security and secure mobile mail.
Dangerous cyber security myths
The volume of cyber security incidents is on the rise across the globe, but misconceptions continue to persist, including the notion that:
Cybercriminals are outsiders
In reality, cyber security breaches are often the result of malicious insiders, working for themselves or in concert with outside hackers. These insiders can be a part of well-organized groups, backed by nation-states.
Risks are well-known
In fact, the risk surface is still expanding, with thousands of new vulnerabilities being reported in old and new applications and devices. And opportunities for human error – specifically by negligent employees or contractors who unintentionally cause a data breach – keep increasing.
Attack vectors are contained
Cybercriminals are finding new attack vectors all the time – including Linux systems, operational technology (OT), Internet of Things (IoT) devices, and cloud environments.
My industry is safe
Every industry has its share of cyber security risks, with cyber adversaries exploiting the necessities of communication networks within almost every government and private-sector organization. For example, ransomware attacks are targeting more sectors than ever, including local governments and non-profits, and threats on supply chains, “.gov” websites, and critical infrastructure have also increased.
Common Cyber Threats
Although cyber security professionals work hard to close security gaps, attackers are always looking for new ways to escape IT notice, evade defense measures, and exploit emerging weaknesses. The latest cyber security threats are putting a new spin on “known” threats, taking advantage of work-from-home environments, remote access tools, and new cloud services.
These evolving threats include:
The term “malware” refers to malicious software variants—such as worms, viruses, Trojans, and spyware—that provide unauthorized access or cause damage to a computer. Malware attacks are increasingly “fileless” and designed to get around familiar detection methods, such as antivirus tools, that scan for malicious file attachments.
Ransomware is a type of malware that locks down files, data or systems, and threatens to erase or destroy the data – or make private or sensitive data to the public – unless a ransom is paid to the cybercriminals who launched the attack. Recent ransomware attacks have targeted state and local governments, which are easier to breach than organizations and under pressure to pay ransoms in order to restore applications and web sites on which citizens rely.
Phishing / social engineering
Phishing is a form of social engineering that tricks users into providing their own PII or sensitive information. In phishing scams, emails or text messages appear to be from a legitimate company asking for sensitive information, such as credit card data or login information. The FBI has noted about a surge in pandemic-related phishing, tied to the growth of remote work.
Current or former employees, business partners, contractors, or anyone who has had access to systems or networks in the past can be considered an insider threat if they abuse their access permissions. Insider threats can be invisible to traditional security solutions like firewalls and intrusion detection systems, which focus on external threats.
Distributed denial-of-service (DDoS) attacks
A DDoS attack attempts to crash a server, website or network by overloading it with traffic, usually from multiple coordinated systems. DDoS attacks overwhelm enterprise networks via the simple network management protocol (SNMP), used for modems, printers, switches, routers, and servers.
Advanced persistent threats (APTs)
In an APT, an intruder or group of intruders infiltrate a system and remain undetected for an extended period. The intruder leaves networks and systems intact so that the intruder can spy on business activity and steal sensitive data while avoiding the activation of defensive countermeasures. The recent Solar Winds breach of United States government systems is an example of an APT.
Man-in-the-middle is an eavesdropping attack, where a cybercriminal intercepts and relays messages between two parties in order to steal data. For example, on an unsecure Wi-Fi network, an attacker can intercept data being passed between guest’s device and the network.
Key cyber security technologies and best practices
The following best practices and technologies can help your organization implement strong cyber security that reduces your vulnerability to cyber attacks and protects your critical information systems, without intruding on the user or customer experience:
Identity and access management (IAM) defines the roles and access privileges for each user, as well as the conditions under which they are granted or denied their privileges. IAM methodologies include single sign-on, which enables a user to log in to a network once without re-entering credentials during the same session; multi factor authentication, requiring two or more access credentials; privileged user accounts, which grant administrative privileges to certain users only; and user lifecycle management, which manages each user’s identity and access privileges from initial registration through retirement. IAM tools can also give your cyber security professionals deeper visibility into suspicious activity on end-user devices, including endpoints they can’t physically access. This helps speed investigation and response times to isolate and contain the damage of a breach.
A comprehensive data security platform protects sensitive information across multiple environments, including hybrid and multi-cloud environments. The best data security platforms provide automated, real-time visibility into data vulnerabilities, as well as ongoing monitoring that alerts them to data vulnerabilities and risks before they become data breaches; they should also simplify compliance with government and industry data privacy regulations. Backups and encryption are also vital for keeping data safe.
Security information and event management (SIEM) aggregates and analyzes data from security events to automatically detect suspicious user activities and trigger a preventative or remedial response. Today SIEM solutions include advanced detection methods such as user behavior analytics and artificial intelligence (AI). SIEM can automatically prioritize cyber threat response in line with your organization’s risk management objectives. And many organizations are integrating their SIEM tools with security orchestration, automation and response (SOAR) platforms that further automate and accelerate an organization’s response to cyber security incidents and resolve many incidents without human intervention.
Zero trust security strategy
Businesses today are connected like never before. Your systems, users and data all live and operate in different environments. Perimeter-based security is no longer adequate but implementing security controls within each environment creates complexity. The result in both cases is degraded protection for your most important assets. A zero-trust strategy assumes compromise and sets up controls to validate every user, device and connection into the business for authenticity and purpose. To be successful executing a zero-trust strategy, organizations need a way to combine security information in order to generate the context (device security, location, etc.) that informs and enforces validation controls.
Cybersecurity and Cynergy Technology
Cynergy Technology offers a comprehensive and integrated portfolio of enterprise security products and services. The portfolio, supported by our partnerships with national and global leaders in the cyber security industry, provides security solutions to help organizations drive security into the fabric of their business so they can thrive in the face of uncertainty. | <urn:uuid:448d2474-9ee0-4bb3-92ec-6d32967b1f10> | CC-MAIN-2022-40 | https://www.cynergytech.com/stories/cyber-security/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337723.23/warc/CC-MAIN-20221006025949-20221006055949-00553.warc.gz | en | 0.924423 | 2,216 | 3.546875 | 4 |
Big data is used across verticals like Insurance, Healthcare, Manufacturing, Financial, Retail and more. Companies are using big data to improve top & bottom line revenue with business values. In this data-driven era, enterprise readiness and data management needs are becoming increasingly vital. Hadoop & NoSQL are the most critical environments for data management. And Data Lake is becoming the new repository and a single source of truth, which address the big data challenges like Volume, Variety & Velocity.
It’s false, yes big data is not equal to Data Lake. Let’s get the global terminology definitions of big data and Data Lake. If faced existing Data system has faced any of the problems like Volume, Velocity, Variety, then the system might have a big data problem. We have a lot and lot number of tools to solve the Data Mass, Data Speed, Data Variety out of which the defacto is Hadoop. It designed for distributed storage and parallel processing. Big data is not recent, which is 10+ years old coined by Roger Magoula, Director of O’Reilly Media.
Data Lake is a terminology to designate the vital component of the big data analytics pipeline in a big data world. The whole idea is to have a single store for all of the raw data that all data applications might need to analyze or to engineer the data. Many of the data systems currently using Hadoop to work on the data in the lake, but the concept is bigger than just Hadoop. If it’s single store to pull together all data from app/systems wants to analyze, and then it’s a notion of a data warehouse or data mart. But we have a large distinction between the data lake and the data warehouse. The data lake stores raw data, in the same form the data source provides, here there is no definition of the schema at all. Each data source can use whatever schema it likes. It’s up to the data consumers to make schema of that data for their purposes.
Top 10 astonishing things in Data Lake
- Store Massive Data Sets.
- Mix Disparate Data Sources.
- Ingest Bulk Data.
- Ingest High-Velocity Data.
- Apply Structure to Unstructured/Semi-Structured Data.
- Make Data Available for MPP SQL Analysis.
- Achieve Data Integration.
- Improve Machine Learning & Predictive Analytics.
- Deploy Real-Time Automation at Scale.
- Achieve continuous Innovation at Scale.
To conclude data lake is a large data storage repository that holds data in its native format until it is desired. And in simple data lake is the evolution of an Enterprise Data Warehouse (EDW) into an active repo for structured, semi-structured, and unstructured data that retains all features against which we can run all our data analyzing and process. The other way to define data lake is formed by the joining NoSQL & Hadoop. It’s the primary landing zone for disparate sources like clickstreams, weblogs, sensor data, etc. Data lake helps business to take more holistic business decisions. | <urn:uuid:c37791ed-9ca3-46aa-9643-c0e1d12402f0> | CC-MAIN-2022-40 | https://www.crayondata.com/big-data-in-datalake-vs-datawarehouse/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338073.68/warc/CC-MAIN-20221007112411-20221007142411-00553.warc.gz | en | 0.883519 | 648 | 2.546875 | 3 |
More and more areas of the UK are having full-fibre introduced - but what's so good about it? What even is it? Let's take a look.
What is Full-Fibre?
The best way to explain what Full-Fibre is to explain what the "Fibre" part means and to put it in context with the nature of phone and internet lines as they exist today.
Historically (since 1876), the phone lines, and later the internet lines, that have connected us have consisted entirely of copper.
Like the pair of headphones or earbuds you have, copper phone wires transmit sound signals electronically.
Your home would be (or still could be) connected to a street cabinet via a copper line, and from that street cabinet, your phone line will be connected to a local exchange which connects to the other nationwide exchanges.
The whole interconnected web of copper lines is collectively called the "PSTN" (Public Switched Telephone Network).
Download speeds on a completely copper internet connection average around 24Mbps, with upload speeds of around 10Mbps.
But as you may have heard, the copper network is being switched off and made antiquated in the coming years to make way for faster internet connectivity and as you may have guessed, Fibre is the technology to replace it.
What is Fibre-Optic?
Rather than transmitting electronic sound signals like copper wires, fibre optic cables transmit their data via light signals. This allows for much faster speeds (as fast as the speed of light) and much higher bandwidth (the amount of data that can be transferred at once).
Although FTTC (Fibre to the Cabinet) is being switched off in 2025 and replaced with SoGEA, it's a good name for demonstrating what's happening under the bonnet of the technology. (SoGEA uses the same technology and has the same speeds, but there are no analogue lines at all, turning the copper line into a digital-only connection.)
Fibre to the cabinet converts the majority of the cable which transports your data into fibre, in particular between the cabinet and the exchange.
Because the longer of the two sections is now fibre, the speed of your broadband will increase significantly. The last leg of the connection, between your premises and the cabinet, is still copper, which is what limits the speed, so the potential for higher speeds still exists.
The speeds you can expect from FTTC are 80Mbps download and 20Mbps upload.
With Full-Fibre (FTTP), there is no copper leg at all: your premises are connected directly to the exchange by fibre. FTTP doesn't have any analogue lines whatsoever and is a digital-only connection, so your phone calls will take place entirely over VoIP.
As you might expect, an internet connection which is carried at the speed of light isn't as affected by distances from exchanges as copper is, so even the most remote places can potentially have blazing fast broadband.
Maximum speeds of Fibre are currently being researched and new technological innovations are pushing current world records higher, with the current fastest being a blazing 319Tbps. However, this is largely contained to academia and the speeds you can expect to see from home use are around 1000Mbps download and... 1000Mbps upload.
It couldn't be clearer: Fibre-Optic is fast. Humans haven't yet been able to create anything that travels faster than light, but you don't need to in order to harness its speed.
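To put those numbers in perspective, here is a quick back-of-the-envelope calculation in Python comparing the connection types quoted above; the 50 GB file size is just an illustrative example.

# Rough download times for a 50 GB file at the speeds quoted in this article.
FILE_SIZE_GB = 50
speeds_mbps = {"Copper (ADSL)": 24, "FTTC": 80, "Full-fibre (FTTP)": 1000}

file_size_megabits = FILE_SIZE_GB * 8 * 1000  # using decimal units: 1 GB is about 8,000 megabits

for name, mbps in speeds_mbps.items():
    seconds = file_size_megabits / mbps
    print(f"{name}: {seconds / 3600:.2f} hours ({seconds / 60:.0f} minutes)")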
Embrace Full Fibre
Get in touch with our experienced engineers at BTT, so we can ensure you have a blazing fast connection | <urn:uuid:32db1f9d-ab69-4048-826d-5147c1b72105> | CC-MAIN-2022-40 | https://www.bttcomms.com/what-is-full-fibre/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334579.46/warc/CC-MAIN-20220925132046-20220925162046-00753.warc.gz | en | 0.960817 | 747 | 3.015625 | 3 |
The gender gap in technology is real. As per the World Economic Forum, women represent just 22% of AI professionals worldwide. It is vital to address the gender diversity problem in Science, Technology, Engineering, and Maths (STEM) fields. The only way to bridge the gender gap and eliminate gender bias in AI is to encourage more women to pursue tech subjects. Women role models play a vital role in breaking gender stereotypes and inspiring young girls to take up STEM subjects. To encourage conversations on gender diversity and celebrate the achievements of women in tech, we are sharing this article listing some of the most inspiring women in AI and data science.
The most inspiring women influencers in AI and data science
1. Joy Buolamwini, Founder, Algorithmic Justice League
Joy Buolamwini is an algorithmic bias researcher at the MIT Media Lab. Her research has been covered in over 40 countries and has helped uncover racial and gender bias in the AI technologies of large corporations such as Microsoft, IBM, and Amazon. Buolamwini established the Algorithmic Justice League to promote ethical and more inclusive AI. She is also part of the Global Tech Panel convened by the VP of the European Commission to advise world leaders and technology executives on ways to prevent the harms of AI. In 2018, in partnership with the Georgetown Law Center, she also launched the Safe Face Pledge, an agreement that prohibits the lethal application of facial analysis and recognition technology. Watch her talk about Coded Bias in this documentary trailer here.
2. Fei-Fei Li, Sequoia Professor of Computer Science at Stanford University
With 100+ published research papers to her credit, Fei-Fei Li is one of the most recognized AI leaders in the world. She is a recipient of several honors including the Technical Leadership Abie Award by AnitaB.Org. She is well-known for her research in computer vision, cognitive neuroscience, AI, and healthcare. Her most prominent work includes the ImageNet project for large-scale visual recognition – a game-changer in the Deep Learning field. A strong advocate of symbiotic human-AI relationships, Dr. Li is on a mission to make AI better for humans. Watch her TED Talk on teaching computers to understand pictures here.
3. Caitlin Smallwood, VP Data & Insights at Netflix
Caitlin Smallwood heads various data science functions at Netflix – the world’s most popular OTT platform with over 50 million subscribers. Since joining the company in 2010, she has leveraged her expertise in AI-enabled analytics and recommendations systems for various application use cases including data engineering, statistical research, consumer research, personalizing recommendations, predicting content popularity, guiding marketing investments, new experimentation, and developing mathematical models that enhance Netflix and makes it what it is today. Watch this video to know more about the role of data science at Netflix.
4. Allie Miller, Global Head of Machine Learning Business Development, Startups and Venture Capital at Amazon
Allie Miller is the head of AI growth for startups and Venture Capital at Amazon Web Services. A well-known AI influencer, Allie has authored eight guidebooks on building successful AI projects and spoken at several AI events around the world. She has also addressed the European Commission and helped draft national-level AI strategies. You can subscribe to her YouTube channel here.
5. Daphne Koller, Founder & CEO at insitro
Daphne Koller is the CEO and Founder of insitro – a company that leverages ML and high-throughput biology for drug discovery. She began her career as a professor at the Computer Science department at Stanford University in 1995, where she spent the next 18 years. She then worked as the Chief Computing Officer of Calico, an Alphabet company in the healthcare space. Later, she co-founded Coursera – a popular e-learning platform where she worked for over 5 years. She is a recipient of numerous honors and awards including the Sloan Foundation Faculty Fellowship. Watch this video to learn how machine learning makes drug development faster and cheaper.
6. Rana el Kaliouby, CEO & Co-Founder, Affectiva
Rana el Kaliouby is an AI thought leader and the author of the book – Girl Decoded. She is the co-founder and CEO of Affectiva, an MIT Media Lab spinoff credited with creating the category of artificial emotional intelligence, or Emotion AI. She is part of industry organizations like the Partnership on AI and the World Economic Forum’s Council of Young Global Leaders. A strong advocate of ethical development and deployment of AI, Rana is on a mission to humanize how people interact with technology. Watch this video to learn more.
7. Kate Crawford, Sr Principal Researcher at Microsoft
Kate Crawford is a Principal Researcher at Microsoft. She co-founded the Fairness, Accountability, Transparency, and Ethics (FATE) group at Microsoft. She is also a distinguished Research Professor at NYU, a Visiting Professor at MIT’s Center for Civic Media, and an Honorary Professor at the University of New South Wales. She is known for her research on the social impacts of large-scale data systems, machine learning, and AI. She has authored several articles for leading publications including The New York Times, The Atlantic, The Wall Street Journal, and Harper’s Magazine. A well-known AI influencer, Kate has been a speaker at many high-profile AI events. She has also advised policymakers in the White House, the Federal Trade Commission, the United Nations, and the City of New York about the ethics and politics of data. In 2017, she co-founded the AI Now Institute at NYU, which is dedicated to understanding the social implications of AI. Watch her discussion on AI bias and the politics of AI here.
8. Daniela Rus, Director at MIT Computer Science & Artificial Intelligence Lab (CSAIL)
Daniela Rus is the Director of MIT Computer Science and Artificial Intelligence Laboratory (CSAIL). She is one of the world’s leading roboticists known for her work in artificial intelligence, self-reconfiguring robots, shape-shifting machines, mobile computing, and programmable matter. Her research is instrumental in developing collaborative robotics technology. She is the recipient of the 2017 Engelberger Robotics Award from the Robotics Industries Association. Watch her TED talk here.
There is a global need to encourage more female participation in the STEM, AI, and data science fields. Women entrepreneurs, academic researchers, and industry leaders are vital to the progress of AI and related technologies that make our world better. We, at CrunchMetrics, believe in empowering women and are proud to support a pro-woman workplace. We are committed to encouraging gender diversity in AI and promoting initiatives to support women’s participation in the field.
Let’s choose to celebrate women’s achievements, challenge gender bias and help create a world that is equal for all!
#IWD2021 #WomenRoleModels #WomenInTech
Get 100+ AI Predictions for 2021 from Industry Leaders | <urn:uuid:5526c635-73fc-4f4b-98f7-0f97b3027ccb> | CC-MAIN-2022-40 | https://www.crunchmetrics.ai/blog/inspiring-women-in-ai-and-data-science/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334579.46/warc/CC-MAIN-20220925132046-20220925162046-00753.warc.gz | en | 0.936047 | 1,447 | 2.609375 | 3 |
Hackers Are Going Phishing During This Pandemic
Phishing is a method of using misleading e-mails and sites to obtain personal details. What is scary is that the hackers have turned it into art: focusing on emotion, allowing victims to let their guards down, for one click. Learn more about how hackers are phishing during this Pandemic.
Would you hand over the keys to your car to a stranger? Sounds kind of silly right? What about the keys to your house? “Can this get any sillier,” you might ask?
Unfortunately, handing over passwords and personal details of our digital accounts is tantamount to handing over the keys to both our cars and houses to hackers.
And it’s not just regular folks too.
In 2016, hackers managed to get Hillary Clinton campaign chair John Podesta to cough up his Gmail password. This was considered one of the most consequential phishing attacks in history.
Remember that year when several celebrity personal photos and videos were leaked to the public? Initially, it was thought of as a weakness in Apple’s iCloud data management system. It turns out it was primarily due to phishing. If it can happen to them, what more to us?
Here at ITS, we protect businesses from these unwanted attacks. We have done thousands of network assessments and can quickly identify gaps in company technologies that we report to client ownership and management.
We’d like to help you with this article on how to best look out for yourself and your company, during these times of pandemic. Because not only is there a sense of urgency within your company and staff, but from hackers as well.
What is Phishing?
Phishing is a method of using misleading e-mails and websites to try to obtain personal details.
What is scary is that the hackers have turned it into art: focusing on emotion, allowing victims (A.K.A you and me) to let their guards down, for one click.
And just like door-to-door food deliveries during these times of pandemic, hackers are coming right into peoples’ homes.
One piece of spotted malware alerted victims: "Just because you're home, doesn't mean you're safe," according to Cyber Security Firm Nocturnus.
They are brazen enough indeed to actually send this message right before demanding payment to unlock data.
Phishing Examples: Top 10 Social Media Email Subjects
Check out the subjects that hackers have chosen, and try to see if you can resist opening messages with subjects such as these:
- “Join My Network!” (LinkedIn)
- “Profile Views.." (LinkedIn)
- “Add Me” (LinkedIn)
- “New Message” (LinkedIn)
- “Password Change.." (Facebook)
- “Primary Email Changed" (Facebook)
- “Your Friend Tagged a Photo of You” (Facebook, Instagram)
- “New Voice Message At…”
- “Your password was successfully reset”
- “Login alert for Chrome on your mobile phone”
We can see from the top four headlines that LinkedIn is now a favorite among hackers.
From a hacker’s point of view, LinkedIn is the new candy store: victims are immediately identified, what companies they work for, their current position, and possible contact information.
It’s almost like LinkedIn did the job for them in terms of targeting their next victims.
Second to LinkedIn are popular social media pages such as Facebook and Instagram with alert messages that tug at the heart – “Your Friend Tagged a Photo of You.”
We would never associate your friend and that photo with hacking, right?
This is exactly what hackers pounce on: letting our guards down into thinking that LinkedIn and Facebook (companies with strong brands) are actually the ones messaging us.
Phishing Examples: Top 10 General Email Subjects
On a personal level, these are the subject headers that hackers use to get you to open their messages:
- “De-activation of Your Email in Process”
- “A Delivery Attempt Was Made”
- “You Have a New Voicemail”
- “Failed Delivery for Package #5357343”
- Staff Review 2018
- Revised Vacation & Sick Time Policy
- APD Notification
- “Your Order with Amazon.com”
- “Re: W-2”
- “Scanned image from MX23IOU@[domain]”
Can you imagine Amazon sending you an update on deliveries for your purchases from Thanksgiving and Christmas? Wouldn't you want to open that email and track that package quickly?
How to stop phishing attacks?
You should introduce proactive steps to protect your business, including:
- Inbound email "Sandboxing" -- testing the safety of each connection a user clicks on.
- Inspecting and evaluating web traffic
- Rewarding good conduct when someone spots a phishing attempt, for example by displaying a “catch of the day”
- Create off-site backups of your data in case of a breach
- Limit employee access to sensitive data
When it comes to personnel training, take these additional steps to protect yourself and your business from spam and other cyber attacks:
- Warn employees about malicious websites
- Train employees to spot phishing emails
- Never send private information over email
- Don’t open attachments
- Ask employees to update their passwords to protect their home WiFi network, especially if they connect to your systems from home.
On a personal security level, you can do the following:
- Before you click or enter sensitive information, always check the spelling of URLs in the email links (a short scripted check is sketched after this list).
- Watch for URL redirects to subtly take you to another website with the same design.
- If you receive an email from a source you know, but it seems suspicious, instead of just hitting reply, contact the source with a new email.
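As a small illustration of that first check, a script can flag links whose host is not a domain you actually expect. This is only a sketch: the allow-list below is a made-up example, and real phishing detection needs far more than host matching.

from urllib.parse import urlparse

# Hypothetical allow-list of domains you expect legitimate email links to use.
EXPECTED_DOMAINS = {"linkedin.com", "facebook.com", "amazon.com"}

def is_expected(url: str) -> bool:
    # True only if the link's host is an expected domain or one of its subdomains.
    host = (urlparse(url).hostname or "").lower()
    return any(host == d or host.endswith("." + d) for d in EXPECTED_DOMAINS)

for link in [
    "https://www.linkedin.com/feed/",
    "http://linkedin.com.account-verify.example/login",  # lookalike host
    "https://amaz0n-payments.example/update",
]:
    print("ok" if is_expected(link) else "SUSPICIOUS", link)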
An Ounce of Managed-IT Services...
Doubling up on defense is the safest way to prevent phishing assaults. Anti-malware programs and powerful firewalls prevent, track, and delete malicious files on your computers and systems. Invest in good security software.
It is also well worth urging workers to log in only to HTTPS-protected websites. In addition, search for open ports on a regular basis that could expose your networks to cyberattacks.
Doing all of these by yourself is definitely possible. Partnering with Intelligent Technological Solutions to protect your organization from these hazardous and devastating kinds of phishing attacks would be more efficient, effective, and ultimately, cost-friendly.
At ITS, we can help you build and execute a robust cybersecurity plan and a security awareness training program that lets you focus on your core business. | <urn:uuid:a8c82487-164d-4eea-985c-155399f53533> | CC-MAIN-2022-40 | https://www.itsasap.com/blog/hackers-are-going-phishing-during-this-pandemic-what-you-need-to-know-to-protect-your-business | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334579.46/warc/CC-MAIN-20220925132046-20220925162046-00753.warc.gz | en | 0.941603 | 1,422 | 2.703125 | 3 |
HSTS stands for HTTP Strict Transport Security. It is a method used by websites to declare that they should only be accessed using a secure connection (HTTPS). If a website declares an HSTS policy, the browser must refuse all HTTP connections and prevent users from accepting insecure SSL certificates. HSTS is currently supported by most major browsers (only some mobile browsers fail to use it).
HTTP Strict Transport Security was defined as a web security standard in 2012 in RFC 6797. The primary goal of creating this standard was to help avoid man-in-the-middle (MITM) attacks that use SSL stripping. SSL stripping is a technique where an attacker forces the browser to connect to a site using HTTP so that they can sniff packets and intercept or modify sensitive information. HSTS is also a good method to protect yourself from cookie hijacking.
How HSTS Works
Typically, when you enter a URL in the web browser, you skip the protocol part. For example, you type www.acunetix.com, not http://www.acunetix.com. In such a case, the browser assumes that you want to use the HTTP protocol so it makes an HTTP request to www.acunetix.com.
At this stage, the web server replies with a redirect (a 301 response code) that points to the HTTPS site. The browser makes an HTTPS connection to www.acunetix.com. This is when the HSTS security policy protection begins, using an HTTP response header:
Strict-Transport-Security: max-age=31536000; includeSubDomains; preload
The Strict-Transport-Security header gives specific instructions to the browser. From now on, every connection to the site and its subdomains for the next year (31536000 seconds) from the moment this header is received must be an HTTPS connection. HTTP connections are not allowed at all. If the browser receives a request to load a resource using HTTP, it must try an HTTPS request instead. If HTTPS is not available, the connection must be terminated.
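On the server side, enforcing this is just a matter of attaching the header to every HTTPS response. Here is a minimal sketch using Python and Flask, assuming TLS is terminated in front of the app; the directives mirror the example header above.

from flask import Flask

app = Flask(__name__)

@app.after_request
def add_hsts_header(response):
    # Browsers only honour HSTS when it arrives over HTTPS, so this header
    # is meaningful only on responses actually served over TLS.
    response.headers["Strict-Transport-Security"] = "max-age=31536000; includeSubDomains; preload"
    return response

@app.route("/")
def index():
    return "Served over HTTPS"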
Additionally, if the certificate is not valid, you will be prevented from making a connection. Usually, if a certificate is not valid (expired, self-signed, signed by an unknown CA, etc.) the browser displays a warning that you can circumvent. However, if the site has HSTS, the browser will not let you circumvent the warning at all. To access the site, you must remove the site from the HSTS list within the browser.
The Strict-Transport-Security header is sent for a given website and covers a particular domain name. Therefore, if you have the HSTS header for www.acunetix.com, it will not cover acunetix.com but only the www subdomain. This is why, for complete protection, your website should include a call to the base domain (in this case, acunetix.com) and receive a Strict-Transport-Security header for that domain with the includeSubDomains directive.
Is HSTS Completely Secure?
Unfortunately, the first time that you access the website, you are not protected by HSTS. If the website adds an HSTS header to an HTTP connection, that header is ignored. This is because an attacker can remove or add headers during a man-in-the-middle attack. The HSTS header cannot be trusted unless it is delivered via HTTPS.
You should also know that the HSTS max-age is refreshed every time your browser reads the header, and the maximum value is two years. This means that the protection is permanent as long as no more than two years pass between your visits. If you do not visit a website for two years, it is treated as a new site. At the same time, if you serve the HSTS header with a max-age of 0, the browser will treat the site as a new one on the next connection attempt (which can be useful for testing).
You can use an additional method of protection called the HSTS preload list. The Chromium project maintains a list of websites that use HSTS and the list is distributed with browsers. If you add your website to the preload list, the browser first checks the internal list and so your website is never accessed via HTTP, not even during the first connection attempt. This method is not part of the HSTS standard but it is used by all major browsers (Chrome, Firefox, Safari, Opera, IE11, and Edge).
The only currently known method that could be used to bypass HSTS is an NTP-based attack. If the client computer is susceptible to an NTP attack, it can be fooled into expiring the HSTS policy and accessing the site once with HTTP.
How to Add a Domain to the HSTS Preload List?
To add a domain to the HSTS preload list, the sites for that domain must meet several requirements. Here is what you need to do to add your domain:
- Make sure that your sites have valid certificates and up-to-date ciphers.
- If your sites are available via HTTP, redirect all requests to HTTPS.
- Make sure that points 1 and 2 above apply to all your domains and subdomains (according to your DNS records).
- Serve the Strict-Transport-Security header over HTTPS for the base domain with a max-age of at least 31536000 (1 year), the includeSubDomains directive, and the preload directive. See above for an example of such a valid HSTS header.
- Go to hstspreload.org and submit your domain using the form. If the conditions are met, your domain will be queued to be added.
For increased security, the preload list is not accessed or downloaded by the browser. It is distributed as a hard-coded resource with new browser versions. This means that it takes quite a lot of time for results to appear on the list and it takes quite a long time for a domain to be removed from the list. If you want to add your site to the list, you must be sure that you are able to maintain full HTTPS access to all resources for an extended period of time. If not, you risk that your website will become completely inaccessible.
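Before submitting a domain, it can help to script a quick check that the header is actually served with the required directives. Below is a sketch using the Python requests library; acunetix.com is simply the example domain used throughout this article.

import requests

def check_hsts(domain: str) -> None:
    resp = requests.get(f"https://{domain}/", timeout=10)
    hsts = resp.headers.get("Strict-Transport-Security")
    if hsts is None:
        print(f"{domain}: no HSTS header served")
        return
    lowered = hsts.lower()
    print(f"{domain}: {hsts}")
    print("  includeSubDomains:", "includesubdomains" in lowered)
    print("  preload:", "preload" in lowered)

check_hsts("acunetix.com")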
How to Remove a Domain from the HSTS Cache in a Browser?
When you are setting up HSTS and testing it, you may need to clear the HSTS cache in the browser. If you set up HSTS incorrectly, you may receive errors that will lock you out of the site unless you clear the data. Here are methods for several popular browsers. Also note that if your domain is on the HSTS preload list, clearing the HSTS cache will be ineffective and there is no way to force an HTTP connection.
Removing from Google Chrome
To remove a domain from the Chrome HSTS cache, follow these instructions:
- Go to chrome://net-internals/#hsts
- In the Delete domain security policies section, enter the domain to delete in the text box
- Click the Delete button next to the text box
Afterward, you can check if the removal was successful:
- In the Query HSTS/PKP domain section, enter the domain to verify in the text box
- Click the Query button next to the text box
- The response should be Not found
Removing from Mozilla Firefox
There are many different methods to remove HSTS information from Firefox for a given domain. All of them are described in detail in a dedicated article. The following is the simplest and fastest one, but it removes more than HSTS information from the cache.
- Close all open tabs for your site
- Open the Firefox history window (Library > History > Show All History)
- Search for the domain using the search bar
- Right-click the domain and choose the option Forget About This Site
- Restart Firefox
Removing from Apple Safari
Removing HSTS information from Safari is very easy:
- Close Safari
- Delete the following file from your home directory: ~/Library/Cookies/HSTS.plist
- Open Safari
Removing from Microsoft Internet Explorer and Microsoft Edge
You cannot remove a domain from the HSTS cache for Microsoft browsers. You can only turn off HSTS temporarily in Internet Explorer 11 and only on Windows 7 or Windows 8.1 (not on Windows 10). Full instructions are available in the relevant Microsoft support article.
Frequently asked questions
HSTS stands for HTTP Strict Transport Security. It is a method used by websites to declare that they should only be accessed using a secure connection (HTTPS). If a website declares an HSTS policy, the browser must refuse all HTTP connections and prevent users from accepting insecure SSL certificates.
HSTS lets you avoid man-in-the-middle (MITM) attacks that use SSL stripping. SSL stripping is a technique where an attacker forces the browser to connect to a site using HTTP so that they can sniff packets and intercept or modify sensitive information. HSTS is also a good method to protect yourself from cookie hijacking.
When your browser tries to connect to an HSTS-protected site using HTTP, it is redirected to an HTTPS site. Then, the browser receives an HSTS header. From this moment, your browser will remember to only use HTTPS when connecting to this site and will not try HTTP anymore (for a time defined in the HSTS header, usually a year).
To make sure that your users are protected from the first time that they visit your site, you may add your site to the HSTS preload list in the browser. This means that the next version of the browser will include your site on a static list of sites that are only to be loaded using HTTPS. However, begin by reading this article carefully to learn how to prepare and what are the consequences of HSTS preload.
Get the latest content on web security
in your inbox each week. | <urn:uuid:fcc46249-040a-45e7-980d-ed76d53a9e80> | CC-MAIN-2022-40 | https://www.acunetix.com/blog/articles/what-is-hsts-why-use-it/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335124.77/warc/CC-MAIN-20220928051515-20220928081515-00753.warc.gz | en | 0.889975 | 2,119 | 3.078125 | 3 |
Security & Encryption: The smart grid community needs to publish detailed specifications for different levels of security and encryption standards. For secure communication purposes, the Internet leveraged data encryption standards, including variants of the Data Encryption Standard (DES) as endorsed by the National Institute of Standards and Technology (NIST). To ensure secure user authentication and data integrity, techniques like digital signatures and the network authentication protocol Kerberos were also created.
Byte Me: The greening of computers – The Scene Newspaper
First, you should know that the piece of equipment that supplies power to your computer is called the power supply. Okay, that part is easy. If you’re in the market for an energy efficient computer, you’re going to want to be sure to get one that has an 80 Plus Certified power supply. 80 Plus Certification is an electric utility-funded program to promote energy efficient power supplies for desktop computers and servers.
Linux thin client tutorial pushes green benefits – LinuxDevices.com
Osier-Mixon, who is a technical writer for MontaVista Software, defines cloud computing as “the use of resources accessed over the Internet,” typically using clients of limited capability. This is essentially the same concept as traditional client-server computing over a LAN using dumb terminals, he explains, but it has been transformed with ample bandwidth and much more compelling, multimedia rich clients and services. In short, “Terminals are no longer dumb, and clients are no longer very thin,” he writes.
Doyenz Unravels Mysteries of Cloud Computing – Emerging Vendors Blog – ChannelWeb
The company’s Automated Virtual IT platform was designed to help small businesses take advantage of cloud computing. Tiwary called it a hybrid solution that uses virtualization and cloud services to automate the delivery and management of IT infrastructure. | <urn:uuid:619cb51c-bc6f-4a24-9552-aecec41c2a08> | CC-MAIN-2022-40 | https://www.ecoinsite.com/2009/04/green-it-news-roundup-thursday-april-30.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335355.2/warc/CC-MAIN-20220929131813-20220929161813-00753.warc.gz | en | 0.928307 | 376 | 2.65625 | 3 |
Even though it has been available for several years, blockchain technology is still a mystery to many business people. It is best known as the distributed database technology at the heart of cryptocurrencies such as Bitcoin. It is hardened against tampering, preventing even its operators from revising or otherwise meddling with its continuously growing list of records.
Fundamentally, a blockchain consists of a circle of trusted partners who do business regularly and already have been vetted for security purposes.
In Bitcoin’s case, it serves as its public ledger of transactions. Fintech (financial technology) companies and other security-conscious enterprises are keeping a close eye on the technology, hoping blockchain will usher in an era of automated, efficient and fraud-free record-keeping and transaction systems.
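The tamper-resistance comes from each block carrying a cryptographic hash of the previous block, so altering any old record breaks every later link. A toy sketch in Python (deliberately ignoring consensus, signatures, and networking) shows the idea:

import hashlib
import json

def block_hash(block):
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

# Build a tiny ledger in which every block points at the hash of the one before it.
chain, prev = [], "0" * 64
for i, record in enumerate(["alice pays bob 10", "bob pays carol 4", "carol pays dan 1"]):
    block = {"index": i, "record": record, "prev_hash": prev}
    prev = block_hash(block)
    chain.append(block)

# Tamper with an early record, then re-verify every link in the chain.
chain[0]["record"] = "alice pays bob 1000"
for earlier, later in zip(chain, chain[1:]):
    if later["prev_hash"] != block_hash(earlier):
        print("Tampering detected at block", later["index"])
        break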
“Blockchain holds the promise to fundamentally transform how business is done, making business-to-business interactions more secure, transparent, and efficient,” Amit Zavery, Senior Vice President of the Oracle Cloud Platform, told eWEEK after his company launched a cloud-based blockchain service earlier this year. “Enterprises can now streamline operations across their ecosystem and expand their market reach with new revenue streams, sharing data and transacting within and outside the Oracle Cloud.”
Blockchain is here and now, and it will continue to gain traction as it provides transparency to the supply chain–especially in complex supply chain industries, such as the automotive and retail industries. Blockchain securely grants access to all transactions that are taking place across the entire ecosystem.
Here are some predictions from industry leaders on the impact of blockchain in 2018.
John Engates, Chief Evangelist, Rackspace: Blockchain will move beyond cryptocurrency.
“If you say ‘blockchain’ to most people, they immediately think Bitcoin—and still have no idea what it is. And while blockchain is the foundation for cryptocurrencies (digital assets that act as mediums of exchange using cryptography to secure transactions), it’s actually a much broader way to structure, store and secure data.
“When used as a ‘distributed ledger,’ blockchain consists of concatenated blocks of data or transactions across a network of computers with no central authority. It allows the sharing of that distributed ledger across clouds and even across companies, without giving a single party the power to tamper with it—and that has powerful implications, if information about the provenance of goods, identity, credentials and digital rights can be securely stored and shared.
“One of my favorite examples of a non-financial blockchain use comes from Provenance, a UK-based software company, which successfully piloted the use of blockchain and smart tagging to track tuna from catch to consumer, allowing for verifiable social sustainability claims, among other benefits.
“So while cryptocurrency isn’t necessarily the future, it looks as though blockchain may be.”
Tom Kemp, CEO of Centrify: Blockchain will emerge as a potential disruptor across many areas of technology.
“Blockchain technology has started making serious waves–and not just in the world of cryptocurrencies. Even U.S. defense contractor Lockheed Martin seems to be exploring blockchain-related cybersecurity options. While we expect blockchain to emerge as a potential disruptor across many areas of technology in 2018, it will take several years before vulnerabilities can be addressed and the technology is considered mature enough to act as a basis for enterprise security.”
Atif Kureishy: Global VP, Emerging Practices, Teradata: Blockchain will be the most overused and misunderstood term in 2018.
“As much as the term bitcoin is bantered about, most people don’t have a clue what it really is, or understand the role of a blockchain in the secure tracking/ledgering of bitcoin transactions. But, a blockchain can be used for so much more than this. This conversation will continue and there will be a lot of hype about blockchains in 2018. Unfortunately, many people will only associate blockchain with bitcoin and will continue to be generally confused.”
Peter Loop, Associate Vice President and Senior Principal Technology Architect, Infosys:
- “The adoption of blockchain will continue at an even faster pace in 2018. This is a worldwide phenomenon and early production successes will come to light, most likely in the Middle East and Asia.”
- “With the rise of ransomware attacks demanding cryptocurrencies, blockchain and IoT cybersecurity will emerge with defenses based on cryptocurrency technologies.”
- “Blockchain will drive digital transformation of the enterprise specifically with automation, digitization of processes, tokenization of physical assets and activities and codification of complex contracts.”
- “The insurance sector will emerge as a hot area for blockchain technologies. Claims processing and complex multi-party processes like subrogation will show the business value of blockchain based automation.”
- “With major breaches such as Equifax proving that you cannot safeguard current identity data systems, the need for a more secure blockchain based identity approach, where no one holds all the keys, will emerge.”
- “JPMorgan will open a cryptocurrency trading desk, despite Jamie Dimon’s “fire in a second” comments to any JPMorgan trader who was trading bitcoin.”
- “Governance issues will continue to plague Bitcoin (Segwit2x), Etherium (Frozen Parity Funds) and others as new challenges emerge. This will drive enterprises to “private” blockchains but will not slow down the growth of core cryptocurrencies.”
Rohit Adlakha, VP of Wipro HOLMES: Enterprises will start investing in blockchain.
“Blockchain is more than Bitcoin and Ethereum, and its influence has only begun. The cool thing next year for CEOs, CTOs and CIOs will be bragging about how much their company has invested in blockchain, and what new apps/products they’re launching next.
“The next application of blockchain will be hyperledgers: Blockchain’s ability to force transparency and security across every transaction will radically alter any industry that requires a transfer of assets or information based on trust, while reducing friction and costs. In 2018, one of biggest use cases for blockchain will be the launch of hyperledgers for securing and authenticating documents better than traditional methods.”
Maciej Kranz, VP at Cisco Systems: IoT devices will converge with machine learning/artificial intelligence (AI), fog computing and blockchain technologies.
“This will help companies move from IoT initiatives that merely produce incremental gains, to those that create entirely new business models and revenue streams. This will allow companies to obtain greater value from their IoT investments and drive broader adoption.”
Bill Briggs, CTO and principal, Deloitte Consulting LLP: Blockchain to Blockchains: “Blockchain is moving rapidly from exploration into mission-critical production scenarios. Advanced use cases and increased adoption drives the need to coordinate, integrate, and orchestrate multiple blockchain initiatives within a large organization, potentially across multiple blockchains across a value chain.”
Sandy Steier, CEO of 1010data: Blockchain will enable new data analytics use cases.
“The use of blockchain in a variety of applications across multiple industries will enable new data analytics–with high accuracy, privacy and identity protection–that provide significant value to both businesses and individuals. For example, in the finance and real estate industries, analytics around the mortgage approval process could be greatly streamlined. Borrowers could elect to share accurate personal income and expense metrics with lenders via a blockchain, bypassing the tortuous, expensive, fraud, error-prone and time consuming manual process of collecting paystubs, bank statements and other paper documents. With anonymity sufficiently ensured, these metrics could also be made available for aggregate analysis that would deliver insights enabling greater efficiencies in the lending process, including far more accurate prediction of creditworthiness. Other powerful possibilities exist in health and wellness, pharma, life sciences, finance, and additional sectors.”
Brian Shannon, Chief Strategy Officer, Dolphin Enterprise Solutions Corp.: Transparency and secure access happens with blockchain.
“Blockchain is here and now, and it will continue to gain traction as it provides transparency to the supply chain – especially in complex supply chain industries, such as the automotive and retail industries. Blockchain securely grants access to all transactions that are taking place across the entire ecosystem. We used to think of blockchain within the context of the banking industry, yet the technology is rapidly gaining traction in the automotive and retail world. Blockchain-ready transactions is a concept we will talk more about in 2018.”
IDC Research: Be ready to face the realities behind the blockchain hype.
“2018 will be the year CIOs will exploit the potential of blockchain technology. While there will be steady improvement and a few breakthroughs, don’t expect a major leap in technology maturity in 2018. In addition, CIOs, CISOs will pay greater attention to blockchain security, and blockchain will start to transform fraud management and identity verification. Banking processes will also see heterogeneous blockchain adoption in 2018.”
Balakrishnan Dasarathy, professor at University of Maryland University College Graduate School: The adoption of blockchain technology will impact cybersecurity big time.
“One area in the application space—blockchain—is going to explode in 2018 and beyond. Blockchain is the technology that supports the use of vast distributed ledgers to record any transaction and track the movement of any asset, whether tangible, intangible, or digital and open to anyone.
“Blockchain technology’s disruptive aspect is its potential to eliminate intermediaries, such as government agencies, banks, clearing houses and companies like Uber, Airbnb and eBay. Blockchain provides these and other companies a measure of speed and cost savings when executing transactions. The blockchain shared, distributed and replicated ledger allows transacting parties to directly update the shared ledger for every transaction. Since parties interact directly through the shared ledger, they have to trust each other, and the transaction records in the shared ledgers should be visible only to the right parties. As such, cybersecurity technologies, specifically cryptography and access control, are critical enabling technologies for blockchain.”
Be sure to save the time/date for our next #eWEEKchat on Wednesday, Dec. 13, at 11am Pacific/2pm Eastern. The topic is one of our favorites: “Predictions and Wild Guesses for IT in 2018.” Bookmark #eWEEKchat for starters; check here for further details. | <urn:uuid:1ec1dadd-e893-423c-86a8-1cbaecb3b50e> | CC-MAIN-2022-40 | https://www.eweek.com/innovation/predictions-2018-why-blockchain-is-ready-to-break-out-in-the-enterprise/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335355.2/warc/CC-MAIN-20220929131813-20220929161813-00753.warc.gz | en | 0.923995 | 2,171 | 2.515625 | 3 |
Data protection is one of the biggest concerns at the moment. A cyber attack happens every 39 seconds, posing security risks for 1 in 3 Americans yearly. According to the Pew Research Center, about 79% of Americans are concerned about how companies use their data.
The Health Insurance Portability and Accountability Act (HIPAA) Security Rule was formulated to counter this security concern for the healthcare industry. It requires professionals to protect their patients' digitally stored data from breaches, erasure, and other cyber threats.
The rule encompasses three types of safeguards: physical, technical, and administrative. Complying with every security standard is crucial; otherwise, you may face penalties from federal institutions. HIPAA's requirements may look overwhelming, so this post will make them easier for you.
Congress passed the Health Insurance Portability and Accountability Act (HIPAA) in 1996 to help the American healthcare industry enhance its operations. The Act obliged the Secretary of the U.S. Department of Health and Human Services (HHS) to formulate rules that protect certain health information.
Since then, many rules have been added to the original act to protect patients' information or protected health information (PHI). To ensure their compliance, the HHS published the Privacy Rule and Security Rule.
Both of these rules work side-by-side to enhance the efficiency of the healthcare system. However, they serve different purposes.
While the Privacy Rule encompasses standards for physical security and confidentiality of PHI, the Security Rule establishes standards for protecting certain health information being stored or transferred in digital form.
The electronic PHI is termed as "e-PHI". The HIPAA Security Rule applies to every health care provider and organization that stores the patients' health information electronically.
It should be in connection with a transaction the Secretary of HHS has formulated standards for under HIPAA (covered entities) and to their business associates. The HIPAA Security Rule was implemented in 2004, followed by several other security rules, including the HITECH Act of 2009 and the Omnibus Rule of 2013.
Every healthcare providing firm has to stay compliant with HIPAA to ensure the protection of their patients. The HSS has given clear guidelines and standards so that organizations can follow them to prevent any potential risks for data breaches.
Generally, the Security Rule obliges covered entities to maintain appropriate administrative, physical, and technical safeguards for the protection of e-PHI. The covered entities are mandated to:
The term "confidentiality" means that the e-PHI is neither exposed nor available to unauthorized persons.
The HIPAA Security Rule requires healthcare organizations to implement three kinds of safeguards — including physical, administrative, and technical — to protect e-PHI. Let's discuss each of them briefly to understand what they entail for organizations.
Physical safeguards prevent physical theft or misplacement of devices containing patients' information. Covered entities need to ensure physical safeguards in the below two ways:
These rules make sure that the patients' data is valid and easily accessible to authorized persons. They include:
These rules guard your networks and devices against cyberattacks and data breaches. Covered entities must ensure:
Transmission Security. Organizations must also implement technical security measures to restrict any unauthorized or suspicious access to the e-PHI transferred through an electronic network.
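To make that concrete, encrypting ePHI before it leaves your systems is one such technical measure. Here is a minimal sketch with the Python cryptography library using symmetric Fernet encryption; in a real deployment the key would live in a managed secret store, and TLS, access control, and audit logging would still be required.

from cryptography.fernet import Fernet

key = Fernet.generate_key()   # illustrative only; production keys come from a secrets manager
cipher = Fernet(key)

record = b'{"patient_id": "12345", "diagnosis": "example"}'
ciphertext = cipher.encrypt(record)    # this is what travels over the network
recovered = cipher.decrypt(ciphertext)

assert recovered == record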
The Administrative Safeguards require entities to perform a risk assessment to monitor and manage their security management processes. The risk analysis and management provisions of the Security Rule are usually addressed differently.
Risk analysis allows covered entities to determine which security measures are appropriate and help them implement all the mandatory safeguards mentioned in the Security Rule. Generally, a risk assessment procedure includes:
The risk analysis is an ongoing process that requires covered entities to periodically review its record to evaluate the effectiveness of their security measures. It allows healthcare organizations to track access to e-PHI, identify security incidents and threats, and reevaluate potential risks to e-PHI regularly.
Every organization has different security concerns, so the HHS hasn't spelled out any specific recommendations for implementing the HIPAA Security Rule. In addition, the institution hasn't defined any particular technology or method that safeguards e-PHI for all covered entities equally.
The rule allows several resources to be available due to the different natures of covered entities. For example, a small clinic operating in a rural area would have different security concerns than a renowned hospital in a major city's epicenter.
The HIPAA Security Rule is quite flexible and scalable. Typically, two major types of standards within the Security Rule exist:
These standards are essential. The covered entities have no way around implementing these rules or they'll be violating the HIPAA Security Rule.
These are mostly technical in nature. Unlike required standards, addressable standards are flexible in deciding how they should be implemented to fulfill the objectives of the security requirements. This doesn't mean that you can ignore them.
Simply put, it may not matter what procedures you choose to secure e-PHI as long as it is fully protected. If a covered entity doesn't implement any of the addressable standards, the Security Rule requires it to implement other safeguards as an alternative. Moreover, the entity also has to document the decision they took and why they did so.
There are consequences for every violation. Although the HSS obliges HIPAA on organizations, enforcing penalties on violations comes under the Office of Civil Rights (OCR).
Thus, in the event of a HIPAA Security Rule violation, the OCR can fine the covered entity anywhere from $100 to $50,000 per violation, and larger HIPAA settlements can add up to more than $1 million.
You may be wondering: Can you go to prison for HIPAA violations? Well, an organization and its employees may likely be held accountable for disclosing confidential PHI for any reason.
If the HIPAA violations were done intentionally with malicious intent, they would be considered criminal and come under the jurisdiction of the department of justice. As a result, the individual at fault, rather than the organization leadership, may face prison along with fines.
IBM estimated the average time taken by organizations to detect and contain a data breach is 279 days. Imagine the amount of data that could be exposed in such a duration. The HIPAA Security Rule is as complicated as it is, due to the flexible implementations.
So, how can you comply with the HIPAA Security Rule flawlessly and quickly? Simply use help of a compliance management platform like Accountable. It is an easy-to-use and simple software platform that helps organizations understand HIPAA rules and stay compliant. Get onboard with Accountable for free now! | <urn:uuid:b8fb9beb-e187-43b9-91d7-dc2d72f81b14> | CC-MAIN-2022-40 | https://www.accountablehq.com/page/how-to-comply-with-the-hipaa-security-rule | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335504.37/warc/CC-MAIN-20220930212504-20221001002504-00753.warc.gz | en | 0.94233 | 1,344 | 2.9375 | 3 |
Today In History April 25
1846 Thornton Affair: Open conflict begins over the disputed border of Texas, triggering the Mexican-American War
The Thornton Affair was an 1846 skirmish between the armed forces of the United States and Mexico, fought about twenty miles upriver from Zachary Taylor's camp along the Rio Grande. A much larger Mexican force defeated the Americans in this opening clash of hostilities, and the engagement became the primary justification for U.S. President James K. Polk's call on Congress to declare war.
Although the United States had annexed Texas, both the US and Mexico claimed the territory between the Nueces River and the Rio Grande. Polk had ordered Taylor's Army of Occupation to the Rio Grande early in 1846, shortly after Mexican President Mariano Paredes declared in his inaugural address that he would uphold the integrity of Mexican territory as far as the Sabine River.
On May 13, 1846, the U.S. Congress declared war on Mexico, despite the Mexican government's position that Thornton had crossed the border into Mexican Texas, which Mexico maintained began south of the Nueces River (the historical boundary of the province of Texas). Opposition also existed in the United States, with one representative declaring that the affair had been "as much an act of aggression on our part as is a man's pointing a pistol at another's breast". Congressman Abraham Lincoln demanded to know the "particular spot of soil on which the blood of our citizens was so shed." The ensuing Mexican–American War was fought from 1846 to 1848 at the cost of many thousands of lives and the loss to Mexico of all of its northern territories. The Treaty of Guadalupe Hidalgo ended the war on February 2, 1848, established the Rio Grande as the border between the U.S. and Mexico, and led to Mexico recognizing Texas as part of the United States.
1861 The Union Army arrives to reinforce Washington, D.C. (US Civil War)
The Civil War in the United States started in 1861, following quite a while of stewing strains among northern and southern states over subjugation, states’ privileges and westbound extension. The appointment of Abraham Lincoln in 1860 made seven southern states withdraw and structure the Confederate States of America; four additional states before long went along with them. The War Between the States, as the Civil War was additionally known, finished in Confederate give up in 1865. The contention was the costliest and deadliest war at any point battled on American soil, with around 620,000 of 2.4 million warriors slaughtered, millions progressively harmed and a great part of the South left in ruin.
Even as Lincoln took office in March 1861, Confederate forces threatened the federally held Fort Sumter in Charleston, South Carolina. On April 12, after Lincoln ordered a fleet to resupply Sumter, Confederate artillery fired the first shots of the Civil War. Sumter's commander, Major Robert Anderson, surrendered after less than two days of bombardment, leaving the fort in the hands of Confederate forces under Pierre G.T. Beauregard. Four more southern states (Virginia, Arkansas, North Carolina and Tennessee) joined the Confederacy after Fort Sumter. Border slave states like Missouri, Kentucky and Maryland did not secede, but there was considerable Confederate sympathy among their residents.
1954 Bell labs announces the 1st Solar Battery made from silicon
In April 1954, researchers at Bell Laboratories demonstrated the first practical silicon solar cell. The story of solar cells goes back to an early observation of the photovoltaic effect in 1839. French physicist Alexandre-Edmond Becquerel, son of physicist Antoine César Becquerel and father of physicist Henri Becquerel, was working with metal electrodes in an electrolyte solution when he noticed that small electric currents were produced when the metals were exposed to light, but he was unable to explain the effect.
In 1873, Willoughby Smith, an English engineer, discovered the photoconductivity of selenium while testing materials for underwater telegraph cables. In 1883, American inventor Charles Fritts made the first solar cells from selenium. Although Fritts had hoped his solar cells might compete with Edison's coal-fired power plants, they were less than one percent efficient at converting sunlight to electricity and therefore not practical. Some research on selenium photovoltaics continued over the following decades, and a few applications were found, but they were never put to widespread use.
1990 Hubble space telescope is placed into orbit by shuttle Discovery
The Hubble Space Telescope was launched into low Earth orbit in 1990 and remains in operation. It was not the first space telescope, but it is one of the largest and most versatile, notable both as a vital research tool and as a public relations boon for astronomy. The telescope is named after astronomer Edwin Hubble and is one of NASA's Great Observatories, alongside the Compton Gamma Ray Observatory, the Chandra X-ray Observatory, and the Spitzer Space Telescope.
The Hubble Space Telescope is an enormous telescope in space. It was propelled into space by space transport Discovery on April 24, 1990. Hubble circles around 547 kilometers (340 miles) above Earth. It is the length of a huge school transport and weighs as much as two grown-u | <urn:uuid:5861bf26-ffa4-43cc-8586-b6a752c2fa9e> | CC-MAIN-2022-40 | https://areflect.com/2020/04/28/today-in-history-april-25/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337287.87/warc/CC-MAIN-20221002052710-20221002082710-00753.warc.gz | en | 0.963496 | 1,144 | 4.09375 | 4 |
In our previous post we started consolidating the endless story of OSPF vs IS-IS, in this post we will cover the historical part of the story, it might not be interesting for some people, but I do believe that the history is what makes the future, so please bare with me through this post.
The IS-IS protocol was developed in 1987 by Digital Equipment Corporation (DEC) as part of DECnet Phase V, and was later standardized in 1992 by the International Organization for Standardization (ISO) as ISO/IEC 10589:1992; the second and current edition, ISO/IEC 10589:2002, cancels and replaces the first edition.
NOTE You can download the electronic version of International Standards from the ISO/IEC Information Technology Task Force (ITTF) web site: http://standards.iso.org/ittf/PubliclyAvailableStandards/index.html
IS-IS was originally designed to support the Connectionless Network Protocol (CLNP) and was later adapted by the IETF in RFC 1195, "Use of OSI IS-IS for Routing in TCP/IP and Dual Environments," to support IP (Integrated or Dual IS-IS). Both IP and CLNP information is carried within the payload of the IS-IS routing updates. Unlike IP routing protocols that use IP packets, IS-IS doesn't use CLNP packets either; rather, it uses its own packets, which carry IP or CLNP (or anything else) information as a payload. IS-IS encapsulates its packets/PDUs directly in the data-link layer.
IS-IS was designed to be extensible. RFC 1195 defined IS-IS support for IPv4, and additional IETF extensions have defined IS-IS support for IPv6, MPLS TE, and more to go (check TRILL to know how deep is it). The Cisco IOS IS-IS implementation supports CLNP, IPv4, and IPv6, while Juniper JUNOS implementation supports only IPv4 and IPv6.
On the other hand, in 1988 the IETF began work on a replacement for RIP, which was proving impractical for large-scale networks because of scalability and convergence issues. It was clear that any replacement for RIP had to be based on a link-state shortest path algorithm, just like IS-IS. The Open Shortest Path First Working Group was born in 1987. The OSPF-WG closely watched the IS-IS developments, and both standardization bodies, the IETF and ISO, effectively copied ideas from each other; after all, mostly the same individuals were working on both protocols.
I quote from Dave Katz “IS-IS and OSPF: A Comparative Anatomy”: “OSPF work begins, loosely based on IS-IS mechanisms (LS protocols are hard!)”.
The OSPF v1 RFC was published in 1989, and the first implementation of OSPF version 1 was shipped by router vendor Proteon. In 1990, the dual-mode IS-IS RFC 1195 was published. In 1991, the OSPF v2 RFC was published (it was updated several times, culminating in the well-known RFC 2328 in 1998) and Cisco shipped OSPF, while it shipped only OSI-only IS-IS; later, in 1992, Cisco shipped dual IS-IS.
In 1995, ISPs began deploying IS-IS and some even switched from OSPF to it. Cisco solidified its IS-IS implementation, and any vendor targeting large ISPs had to have a solid IS-IS implementation; thus Juniper and other vendors shipped IS-IS-capable routers in the late 1990s.
The current status is that you’ll most probably be seeing IS-IS in large service provider networks, and OSPF in medium-to-large enterprise networks.
For more information check the IETF working groups for both OSPF and IS-IS:
OSPF IETF Working Group:
IS-IS IETF Working Group:
I hope that I’ve been informative, moving on we should be going into details. | <urn:uuid:069af135-0e6b-4233-9ccd-4b4eb1798a20> | CC-MAIN-2022-40 | https://www.networkers-online.com/blog/2010/04/the-endless-story-of-ospf-vs-is-is-part-2-the-history/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337421.33/warc/CC-MAIN-20221003133425-20221003163425-00753.warc.gz | en | 0.926294 | 898 | 2.953125 | 3 |
PIM vs PAM vs IAM: What’s The Difference?
Does access control terminology puzzle you? Many people often mistake PIM, PAM, and IAM. Let’s shed some light on this topic and explain how these terms differ.
Does access control terminology puzzle you? Many people often mistake PIM, PAM, and IAM – privileged identity management, privileged access management, and identity and access management. Oftentimes, they also believe that privileged access management (PAM) and privileged account management (also PAM) are interchangeable terms – which is not entirely true. To shed some light on this topic, in this article, I will take a look at PIM vs PAM vs IAM, explain how these terms differ, and how and why you should integrate them into your environment.
Defining PIM, PAM, and IAM
To begin with, below I will explain what PIM, PAM, and IAM mean and why they are crucial for your organization’s safety. All these concepts are built upon the concept of granting specific rights to user groups. In essence, certain users can have particular privileges and can be given access to data and systems in accordance with the policy they have been assigned. To configure a safe environment, in the first instance, you need to define the data, applications, and users that need privileged access and maintain permissions under strict control.
Defining PIM vs PAM vs IAM
Now, let’s dig a bit deeper and try to understand each of these access management concepts.
According to Oxford Computer Training, Privileged Identity Management can be defined as follows:
“Privileged Identity Management (PIM) is a capability within identity management focused on the special requirements of managing highly privileged access. PIM is an information security and governance tool to help companies meet compliance regulations and to prevent system and data breaches through the improper use of privileged accounts.”
PIM also alludes to the monitoring and protection of superuser accounts. A superuser is an account with privileges well above that of regular user accounts. This type of network identity is typically allocated to system or database administrators and is used for platform management functions. As superuser accounts have elevated privileges, the internal restrictions of a network can be bypassed by those with access. Consequently, users might intentionally or inadvertently leak sensitive records, alter transactions, and delete data. Thus, these accounts do need to be carefully managed and monitored, with PIM procedures and systems being set up to protect an enterprise’s networks from exploitation. Here are the main points you can follow to implement Privileged Identity Management in your organization:
- Identify and keep track of all superuser accounts.
- Define how superuser accounts will be managed and what their corresponding users can and can’t do.
- Set up procedures and deploy tools for superuser account management.
In short, Privileged Identity Management is the most efficient approach for the organization-wide management of superuser accounts. C-level company members and senior management may also have admin rights and access to classified information. To prevent any compromise, certain privileges and access require close supervision and appropriate controls. PIM guarantees a specific distribution of identity and rights for each user, ensuring that they can only access data under their privilege boundaries, and only perform certain actions.
What does PAM stand for – Privileged Account Management or Privileged Access Management? Well, this is the acronym used for both terms, but keep in mind these are not exactly synonyms. Privileged Account Management is part of Identity and Access Management (short for IAM, which I will explain a bit later), focused on safeguarding an organization’s privileged accounts. My colleague Elena has extensively covered the topic of Privileged Account Management and privileged accounts, so I advise you to check out her article as well. On the other hand, Privileged Access Management includes all security strategies and tools that enable organizations to manage elevated access and approvals for users, accounts, applications, and networks. In a nutshell, PAM lets companies limit their attack surface by granting a certain level of privileged access, thus helping them avoid and minimize the potential harm that may result from external or internal threats. Here is a definition of PAM provided by TechTarget:
“Privileged access management (PAM) is the combination of tools and technology used to secure, control and monitor access to an organization’s critical information and resources. Subcategories of PAM include shared access password management, privileged session management, vendor privileged access management and application access management.”
PAM is deemed as a major security project that needs to be implemented by any organization.
Privileged Access Management requires multiple tactics, with the key purpose of upholding the Principle of Least Privilege, described as restricting access rights and permissions to the bare minimum required for normal, daily operations of users, programs, systems, endpoints, and computational processes. The PAM field falls under IAM. Jointly, PAM and IAM enable organizations to gain absolute control and easily manage all user privileges. To better understand how to implement PAM in your company, I recommend you check out the following articles:
- What is Privileged Access Management (PAM)?
- 5 Essential Features to Look for in a PAM Solution
- PAM Security Essentials – Identity Management & Asset Protection
One of the main concerns within the PAM area that affects organizations refers to the struggle to fulfill all requests coming from users who would like to have their permissions elevated to be able to complete certain tasks. To end this hassle, Heimdal™ has come up with a cutting-edge PAM solution – Heimdal™ Privileged Access Management – that helps organizations easily handle user rights, while enhancing their endpoint security. As it’s the only tool to auto-deny/de-escalate admin rights on infected machines (when used alongside the Heimdal™ Threat Prevention or Endpoint Detection suite), it substantially increases the cybersecurity in your organization.
Heimdal® Privileged Access Management
- Automate the elevation of admin rights on request;
- Approve or reject escalations with one click;
- Provide a full audit trail into user behavior;
- Automatically de-escalate on infection;
Identity and Access Management recognizes the need to enable adequate access to services and to satisfy stringent regulatory required standards. IAM is a vital endeavor in every organization, requiring technological competence and a high-level understanding and overview of the business. Here’s how Gartner defines Identity and Access Management:
“Identity and access management (IAM) is the discipline that enables the right individuals to access the right resources at the right times for the right reasons.”
Basically, a more granular control, monitoring, and auditing of privileged accounts and actions are offered by PAM, while IAM checks identities to confirm that a certain user has the right access at the right time. How to implement Identity and Access Management:
- Appoint identity as one of your main protections.
- Label access rights, find unnecessary privileges, accounts, and irrelevant user groups.
- Conduct a risk evaluation of corporate applications and networks to start building your IAM project on a solid foundation.
- Use multi-factor authentication and Single Sign-On (SSO).
- Have a strong password policy.
- Implement the Principle of Least Privilege and the Zero Trust Model.
Further recommended reading:
PIM vs PAM vs IAM explained
Next, let’s take a look at PIM vs PAM vs IAM. PIM, PAM, and IAM are acronyms that are sometimes used interchangeably. These concepts reflect numerous security aspects that function in tandem to safeguard an organization’s data and systems. Below you can see a comparison of these terms:
|Concentrates on the rights assigned (typically set by IT departments or System Admins) to various identities.|
Also assists in the control of unchecked IAM areas.
|The layer that secures a certain access level and the data that can be accessed by a privilege.|
Maintains privileged identities under protection and ensures the ones with admin rights do not engage in abuse of privileges.
|Applies to all users in the organization who have an identity, which will be monitored and handled.
Keeps the overall network safe.
To Sum Up
As the network perimeter lines are now blurring due to the increasing popularity of remote work, network security alone may not suffice. One of the potential risks for all companies are unmanaged accounts, which means that all users must always be recognizable and permanently monitored for adequate rights. Lack of access controls will increase threats and can lead to the abuse of highly sensitive data. For instance, an ex-employee may still have access to your confidential data, an attacker may compromise an account and misuse it, or insider threats could exist in your company. This is where, PIM, PAM, and IAM come into play, protecting your organization against various types of identity management dangers. | <urn:uuid:7edb1e6a-c028-4666-9984-82a18ef6fa76> | CC-MAIN-2022-40 | https://heimdalsecurity.com/blog/pim-vs-pam-vs-iam/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337731.82/warc/CC-MAIN-20221006061224-20221006091224-00753.warc.gz | en | 0.913217 | 1,900 | 2.875 | 3 |
An Internet of Things (IoT) platform is used to unify the monitoring and management of IoT endpoints within a business unit. Applications can be developed on top of the platform adding features as needed. IoT platforms can come in the form of on-premise software packages, or as cloud services. These applications benefit organizations by streamlining operations, lowering costs, and accelerating production.
By extension, an Industrial IoT (IIoT) platform aggregates the real-time data from industrial sensors, machines, and device endpoints within a factory under a unified system of control and management. These systems are designed with the capacity to manage thousands of devices while providing data-driven analytical insights about performance.
What is IIOT?
Industrial IoT (IIoT) is a subset of the Internet of Things (IoT) revolution that refers to the application of IoT principles, technology, and approaches, specifically in industry, manufacturing, energy and similar sectors. For all industries, IIoT ultimately aims first at gathering and analyzing data from factory sensors and devices, and then secondly to make intelligent responses based on data-driven insights. Automated real-time responses can be implemented to significantly streamline performance.
IIoT concepts are similar to other IoT concepts, in particular the networking together of numerous small devices, sensors, instruments, and actuators, to create the “internet of things”, a convergence of networking and device technology. However, IIoT differs from common IoT examples like smart homes, in both the degree and scale of technologies that are connected. Smart home sensors can monitor temperature, and send mobile device notifications in emergencies. Comparatively, in larger industrial settings, IIoT may orchestrate the operations and interactions of tens of thousands of devices, sensors, and robots. This difference requires more complex implementation methods, including using IIoT platforms, sophisticated device management software, and custom integrated automation tools.
Components of an IIOT platform
IIoT platforms are responsible for managing devices, collecting and managing data, integrating with complimentary systems, performing advanced analytics, and keeping systems secure. To fulfill these responsibilities, IIoT platforms have 7 main components:
Device Management — Industrial settings can have numerous IoT devices, sometimes numbering in the millions. In order to streamline such swarms of connected machines, an IIoT platform is equipped with device management features that allow the creation, configuration, management and maintenance of IoT devices.
Application Enablement & Management — Platforms offer more than administrative features, they also, as the name implies, provide a springboard for custom application development. App development capabilities allows organizations to optimize operations and reduce errors, but also develop novel apps to meet unknown challenges.
Digital Twins — Digital twins are virtual models of physical systems, used for simulated predictions that help to improve operations. In IIoT, for example, a digital twin of the factory can be used to test new hardware before it is integrated with the production system. By connecting the new device to the digital twin, teams can analyze and anticipate how that introduction will impact the whole system.
Integrations — IIoT platforms almost universally promote hardware and software integrations, it is a vital aspect of these software packages. To be sure, they should be selling agnostic end-to-end integrations and APIs as a component of their platform.
Security & Compliance — Data security and compliance capabilities are key elements in IIoT platforms. IIoT enabled manufactures take on a much wider threat surface than traditional factories due to numerous network nodes.
Data Management — Data management somewhat overlaps security and compliance, and deals with managing the massive volumes of data generated in IIoT systems. These responsibilities include the ingestion, persistence, organization and governance of data.
Advanced Analytics — IIoT platforms earn their value through advanced analytics engines that turn data into valuable and actionable insight. This provides operations the ability to make data driven decisions, and support automated management.
Why is an IIOT platform important?
Organizations deploy IIoT platforms when they are looking to gain more insight into factors that affect production throughput, factory performance, resource utilization, and quality assurance. Data aggregated from sensors and devices throughout the factory can help present a fully transparent view into all aspects of operations. Ultimately, IIoT platforms are decision-making applications used to automate industrial environments through connectivity, data analysis, forecasting, and controls.
Benefits of IIOT platforms
IIoT platforms have become foundational in successfully implementing large-scale industrial IoT deployments. The best-in-class IIoT platforms deliver many benefits:
Reduces Costs — Using software to centralize the management of large numbers of devices saves time, and eventually costs. Furthermore, automations emancipates time for IT staff by assuming repetitive and mundane tasks.
Improves Operational Performance — Real-time monitoring of both equipment and people helps to identify bottlenecks and streamline business processes and workflows. These efficient workflows can then be further integrated with upstream and downstream supply chain actors, supporting coordinated supply chains for even greater efficiencies.
Improves Productivity Throughput — The insights from platform analytics, AI, digital twins, and other innovative approaches helps to improve productivity throughput by better understanding product production and use lifecycle. Usage data from products in the field can supply a whole new layer of behavior insights that can contribute to feature and production improvements.
Improved IoT Security — IIoT platforms provide umbrella security for the thousands of devices with weak enterprise-strength security. IIoT uses identity management, secure authentication and authorization, and endpoint hardening to protect against cyberthreats.
Leverage IoT Data — Data generation is one characteristic of IoT systems that organizations are leveraging into better lifecycle management. Data can help to map new services to each stage of the product life and usage, finding new value offers and revenue streams.
Types of IIOT platforms
IIoT platforms can be built from scratch, purchased as a package, or a service in the cloud. When building your own platform, three levels must be considered, infrastructure, platform and applications. Because handling all three levels in-house comes at considerable cost, but maximum control and customizability, organizations will often only take responsibility for one or two of these levels. For the other levels, an IoT technology provider, like AWS, Google, or Microsoft, offer platform and other IoT services.
Unless there is concern for proprietary configurations, there are platform providers that offer ready to build upon frameworks so that IIoT operations can be set up quickly. Below are 4 common platforms that allow users to easily add marketplace apps, as well as build to spec.
End-to-end Platforms — Also known as application enablement IoT platforms, end-to-end platforms provide the core components for product development, including data analysis and management. These platforms are made for rapid development, typically within a specific domain, for example, industrial platforms target manufacturing and heavy industry, while consumer platforms are geared for smaller projects.
Cloud Platforms — Similar to End-to-end platforms, in that they provide the basic building blocks and functionality to rapidly set up and manage an IoT network, cloud platforms provide the important cloud advantage of accessible scalability. Start small, and grow without bounds.
Connectivity Platforms — Connectivity platforms act as communication backbones for IIoT spaces by connecting all the devices together and potentially to the internet. They provide users with the software, connectivity, and data management, including the ability to administer these resources in real-time.
Analytics Platform — IIoT products deliver data to analytics engines to be turned into actionable insight. While most platforms include analysis tools, there are many that specialized analytics platforms provide that go beyond basic analysis, like advanced analytics visualizations and data processing, digital twins, AI and machine learning.
How to select the right IIOT platform
Selecting the right IIoT platform begins with understanding the end industrial application. To support that end application, below are several factors to consider based on requirements. At a bare minimum, IIoT platforms should provide:
Security & Compliance
User Support & Access
More specifically, consider these factors:
Connectivity methods — How will equipment connect to the network, cable, WiFi, cellular, other methods?
Geographic coverage and performance — Can the provider give secondary location options to improve latency and performance, or support disaster recovery?
Hardware & edge intelligence — IIoT can extend beyond the core network, to the edge where devices are working but distant from compute resources.
Integrations & API access — If a platform is closed, organizations will not be able to integrate their systems with others, curtailing upstream and downstream integrations. Ensure that platforms support integrations and APIs.
IoT platform — Some IoT platforms are better suited for specific situations. What are providers offering in terms of data analytics, storage, connectivity, cloud features? Do they align with requirements?
Platform lifetime — Simply, is the provider reputable, has the vendor been in business long enough to demonstrate its effectiveness?
OTA firmware update — The ability to apply firmware updates over the air (OTA) can save hundreds of hours of valuable staff time. What process is there for updated devices?
Pricing models — Does the pricing model scale with usage?
Scalability & flexibility — Think about how many new devices will be added in the future. Will data storage and bandwidth be able to support this?
Business Email Address
Thank you. We will contact you shortly.
Note: Since you opted to receive updates about solutions and news from us, you will receive an email shortly where you need to confirm your data via clicking on the link. Only after positive confirmation you are registered with us.
If you are already subscribed with us you will not receive any email from us where you need to confirm your data.
"FirstName": "First Name",
"LastName": "Last Name",
"Email": "Business Email",
"Title": "Job Title",
"Company": "Company Name",
"Phone": "Business Telephone",
"LeadCommentsExtended": "Additional Information(optional)",
"LblCustomField1": "What solution area are you wanting to discuss?",
"ApplicationModern": "Application Modernization",
"InfrastructureModern": "Infrastructure Modernization",
"DataModern": "Data Modernization",
"GlobalOption": "If you select 'Yes' below, you consent to receive commercial communications by email in relation to Hitachi Vantara's products and services.",
"EmailError": "Must be valid email.",
"RequiredFieldError": "This field is required." | <urn:uuid:4879d682-5945-4966-8d1d-796fe60196a1> | CC-MAIN-2022-40 | https://www.hitachivantara.com/en-anz/insights/faq/what-is-an-iiot-platform.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337731.82/warc/CC-MAIN-20221006061224-20221006091224-00753.warc.gz | en | 0.916733 | 2,337 | 3.234375 | 3 |
Spam is unavoidable: It clutters your phone call history and chokes your email inbox. Like spam, scareware is another annoyance that seems to go hand-in-hand with internet access. It’s why you should never click pop-ups that say your system is at risk.
Most cybersecurity experts say scareware has been around since 1990 when programmer Patrick Evans designed a program called NightMare to attack computers. A creepy image of a bloody skull took over victims’ screens — and an echoing shriek assaulted the ears.
The aptly-named NightMare set a disturbing precedent. Cybercriminals have since used scareware to wrestle millions of dollars from unsuspecting victims. In this article, you’ll learn how to prevent falling victim to scareware.
Scareware definition: How this cyberattack works
Picture this: You’re surfing the web, minding your own business. Then a random pop-up says your computer has a virus. It looks legitimate, with a technical design similar to that of Apple or another trustworthy brand.
Since it looks like it came from a reliable source, you fall for the pop-up’s claims. You immediately feel stressed out. After all, your whole system could be in danger.
Most scareware pop-ups urge you to click it or else.
For example, they’ll say to “click here” to remove the viruses. Since you don’t want your device infected with malware, you might do what the pop-up asks. Unfortunately, clicking on the link will download viruses onto your device.
That’s right: You didn’t have any viruses on your phone or computer. The pop-up was lying. Cybercriminals manipulated your emotions so they could scare you into action.
How you might encounter scareware
Famous scareware attacks came in many different forms. You can encounter these nasty scams on your phone, tablet or computer. This is why you need antivirus protection on all your devices. Here’s how to set up cybersecurity programs on your iPhone or Android.
Here are some scareware attack examples you may have heard of:
- You might find ads for computer security software that says it detects many threats on your computer. The FBI says that one international cybercrime ring stole more than $74 million from victims before its apprehension in 2011.
- One FTC case led to a $163 million judgment against a marketer who promoted scareware. The federal court said that the criminal used scareware to trick customers into thinking they had computer issues.
- Do you like the Minneapolis Star Tribune? Here’s a bombshell: Back in 2018, the Department of Justice said the paper’s website hosted an ad that led readers to a scareware-infested website that slowed down their systems. Pop-ups promised to fix the issues for around $50.
As you can tell, scareware social engineering schemes are incredibly dangerous. They can steal a ton of money. Now that you know some scareware history, let’s move on to the more critical part. How to prevent it.
The easiest way to protect yourself
Not sure how to spot a scareware scam? First, ask yourself if the pop-up is hard to close. Scammers make it difficult for you to shut down the box, so even if you hit X or close, it might not disappear immediately.
You might also see icons you can’t click on. That’s because scareware designers will spoof icons from reputable companies. They’re mooching off those companies’ good reputations to trick you into thinking they work together.
So if you can’t click through to the sites, take that as a red flag. Of course, the best way to protect yourself from scareware is to protect your device with robust and up-to-date antivirus software. Kim recommends our sponsor, TotalAV.
TotalAV’s industry-leading security suite is easy to use and offers the best protection in the business. In fact, it’s received the renowned VB100 award for detecting more than 99% of malware samples over the last three years.
Not only do you get continuous protection from the latest threats, but its AI-driven Web Shield browser extension blocks dangerous websites automatically, and its Junk Cleaner can help you quickly clear out your old files.
Right now, get an annual plan of TotalAV Internet Security for only $19 at ProtectWithKim.com. That’s over 85% off the regular price. | <urn:uuid:43365fd4-9893-4dda-bbf1-b7c53c47afda> | CC-MAIN-2022-40 | https://www.komando.com/security-privacy/scareware-101/852668/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337731.82/warc/CC-MAIN-20221006061224-20221006091224-00753.warc.gz | en | 0.926953 | 955 | 2.8125 | 3 |
People often dispose of batteries in the household garbage without thinking. Later these batteries end up in a landfill. Gradually, batteries begin to deteriorate, and the chemicals they contain – such as cadmium, nickel, lead, zinc, silver, and lithium – negatively impact the environment for many years.
Batteries are easily recyclable: from a ton of batteries, we can extract up to 600kg of reusable materials. Therefore, we encourage our employees to participate in this process, by bringing dead batteries in the collection box.
One small battery can pollute 20 square meters of land and up to 400 liters of water. It will take decades for the ground to recover from such damage and become fertile again. However, that will not happen, either, if more and more batteries end up in the landfill.
How to Dispose of Old Batteries the Right Way
The main thing is not to throw away dead batteries together with your garbage. Yes, even one small battery! There are special collection points in every city. For example, look for one in the nearest mall. Of course, you wouldn't think of carrying one battery at a time, so it makes sense to choose a small container and collect batteries there. As soon as it is full, take it to the collection point.
At Cloud4U office, there is a special box where employees put batteries from AC remote controls and other devices or bring them from home.
That's how quickly and cheerfully our battery box fills up:
Taking care of the environment is easy! You don't need a lot of money or special knowledge. It's enough just to remember a few important rules. We believe that any big deal starts with a small step. And everyone can take this step. | <urn:uuid:027bee71-881f-49d2-840e-39acb2611219> | CC-MAIN-2022-40 | https://www.cloud4u.com/blog/cloud-4-green-collecting-old-batteries-in-the-office/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338213.55/warc/CC-MAIN-20221007143842-20221007173842-00753.warc.gz | en | 0.947942 | 360 | 3 | 3 |
Rahul Tenglikar, Regional Director, India at Neo4j shares insights on how graph technology can help in making your organisation secure and future ready
What is the difference between a graph database and a normal database?
A traditional relational database structures data in tables, rows or columns making it possible to establish links between two data points and gain insights about their relationship. Over the years, these traditional databases have powered software applications and proved to be helpful in gaining insights from predictable data. Such databases only model data as a set of tables and columns, carrying out complex joins and self-joins when the dataset becomes more inter-related. These queries have often been complicated to construct, and expensive as well as difficult to run in real time.
This is where graph databases come into play. Graph databases are appealing because they enable businesses to derive meaning out of the huge chunks of data that they deal with. The flexibility of a graph data structure allows one to add new nodes and relationships without jeopardizing the existing network or going through expensive data migration. Graph databases, with data relationships at their core, are extremely efficient when it comes to query performance, even for deep and complex queries.
One of the most significant differences between graph databases and traditional/relational databases is that the connections between nodes are directly linked, making it simple to relate data and follow connections. Relationships are stored at the individual record level in a graph database, whereas a relational database uses predefined structures.
Who are the beneficiaries of graph technology?
All organizations today collect huge amounts of data. Collecting this data is not enough to attain maximum advantage in competitive markets. Organizations need to mine this data in a way which enhances their decision making, supplemented with intelligent insights based on the data collected. Graph technology is what helps them unlock this potential of their data. Therefore, companies irrespective of their nature or industry can leverage graph technology and gain unparalleled advantage.
We already have graph technology being implemented by banking institutions to identify financial crimes, by governments to fight crime, prevent terrorism, improve fiscal responsibility, and provide transparency and also by telecom companies to manage increasingly complex network structures. These are just a few of the ways in which graph technology is being leveraged. The same can be applied across any industry.
Inherently, any data that involves three or four hops within its data set will become a perfect candidate for graph technology to add value.
How secure is graph data considering security is a big challenge these days?
With the ever-evolving technology landscape, security undoubtedly continues to remain a major challenge for organizations. Frauds, data breaches and ransomware attacks have become very sophisticated and the impact that they have on a company is not only difficult to manage but often long lasting. It impacts regular operations, company reputation, sales and ultimately the growth trajectory of the company.
Graph technology is a very evolved and intelligent way of dealing with such issues. Firstly, graph platforms like Neo4j suit the security function very well because the organization’s data is secured within one’s own environment without any intervention or handover to an external third-party vendor. Additionally, graph technology serves as a cybersecurity solution by helping detect breaches and enabling faster recovery in the event of an accident.
Neo4j graphs provide database-level security. Role-based access control enables organizations to create sensitive data rules and know that those rules will be applied across all Neo4j applications and uses.
What are the areas organizations need to be mindful of while deploying graph technology for cybersecurity?
Organizations today require a robust system in place to address the increasing security concerns. With restricted corporate networks becoming more prone to cyberattacks, the usual security framework is becoming irrelevant. Organizations need to understand that adopting a layered approach to security and using the latest cybersecurity tools is not enough. They need to be mindful and have an effective security posture which refers to the awareness of assets, processes to monitor and maintain security, and ability to detect, handle and recover from attacks.
For this, they must maintain a live representation of their network structure for analysis purposes. There should be an understanding of the most likely attack paths and a plan to combat the same. For organizations that already have some graph analytics in place, they should look at pushing their capabilities further. A connected data platform like ours can go beyond just supporting the basic functions and enable operational applications, such as real-time card ecommerce fraud detection, border control, and so on. It also comes with a low TCO (total cost of ownership) and gives you the ability to customize capabilities according to your organization’s needs.
Graph databases easily capture the complexity of IT infrastructure and security tools.Graph visualizations can then show the critical information needed to determine how to stop the attack, which could include blocking user accounts or access from specific IP address ranges. When a company faces a cyberattack, predicting the attackers’ next move is as simple as matching the latest attack with a node on the graph and seeing what happens next.
How is the product offered to the customers?
Neo4j has multiple offerings at different price points depending on different levels of usage. Aura free, our community edition, is a free version that small businesses and start-ups can use for small development projects, learning, experimentation, and prototyping. We also have Aura professional, which is available at a competitive cost and can be used by mid-tier companies who want to get started with Neo4j and explore its offerings. It helps with medium scale applications in advanced development or production environments. The AuraDB professional is used for large scale, mission-critical applications that require advanced security and round the clock support. Earlier this year, we also announced Neo4j Graph Data Science which is a comprehensive graph analytics workspace with new and enhanced capabilities built for data scientists. It is available as a fully managed cloud service called AuraDS.
What are you doing to market the products and develop evangelists? How are you supporting the developers?
In most organizations, developers are the first people who try our offerings, especially our community version which is free to use. Developers and data scientists are at the heart of Neo4j.They help us evangelize use cases of our products within their organizations. Most of the product enhancements too, usually, come as product feedback from our developers.
For the community, we encourage learning and experimenting with our offerings. We have tutorials, guides, a pool of resources – including blogs and videos – to help them understand the product comprehensively. We also organize meetups for them that serve as a platform to share ideas, experiences to learn and grow together.
Neo4j also organized the GraphSummit in Bangalore. The event offered new and exciting opportunities to meet and learn from Neo4j experts in the field of graph data science. Neo4j scientists also hosted workshops showcasing the development of knowledge in graphs by activation of Aura DB. With India having the largest community of graph data professionals, the Graph Summit also provided a platform for them to interact, network and share their graph stories with their peers. | <urn:uuid:2f8e6baa-b19c-4080-ad0b-3376328b0b8e> | CC-MAIN-2022-40 | https://www.enterpriseitworld.com/connected-data-the-key-to-protecting-yourself-against-emerging-cyber-attacks-says-neo4j/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334987.39/warc/CC-MAIN-20220927033539-20220927063539-00153.warc.gz | en | 0.953674 | 1,436 | 2.609375 | 3 |
This year, we celebrate the 19th annual Safer Internet Day. Each February, a day is devoted to making the internet a safer place for users of all ages across the globe. The internet is useful and we access it daily, but it’s important to protect your privacy online. We’ve created a quick guide to help you guard your privacy on the internet every day.
- Use Strong Passwords
Your passwords protect your accounts and your personal information. It’s vital that your passwords are very strong. Never share your password with others, change passwords often, and use numbers, symbols and special characters to make passwords harder to crack.
- Use Multi-Factor Authentication (MFA)
Adding an additional layer of security with multi-factor authentication makes it harder for hackers to access your information, even if they have your password. It only takes a moment and can save you post-hack headaches that can last years.
- Secure Your Devices
Utilize passcodes, fingerprint readers, and facial recognition to secure your devices. Make sure to use these security functions on all devices. Access to your device provides access to your accounts and email.
- Make Sure Your Software is Up to Date
Many software and app updates include security patches and upgrades. Update your apps often to stay on top of it, or turn on automatic updates so you never miss one! Outdated software is a primary security breach point for individuals and businesses.
- Be Wary of Wi-Fi
Public Wi-Fi networks are not secure or private. It’s best to avoid connecting to public networks whenever possible. Only connect to Wi-Fi that you trust, and keep your own Wi-Fi secure with strong passwords.
Be security smart and enjoy a safer internet every day. | <urn:uuid:1a368247-07dd-4933-966b-2fc19359cc9f> | CC-MAIN-2022-40 | https://www.jfg-nc.com/safer-internet-day/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335365.63/warc/CC-MAIN-20220929194230-20220929224230-00153.warc.gz | en | 0.894044 | 365 | 2.765625 | 3 |
virtual machine (VM)
What is a virtual machine (VM)?
A virtual machine (VM) is a program running on host hardware that provides an isolated environment with its own guest operating system (OS) and applications, separate from the host OS or any other VMs running on the host system.
Virtual machines operate identically to physical hardware
From an end user’s perspective, a VM provides nearly the same experience as a single-computer environment. Files and applications can be loaded, stored, updated and worked with in the same way as they would on a physical (i.e. bare metal) computer, without affecting the host system or any other VMs. The physical resources of the host system – such as CPU, GPU, memory and storage – are allocated to the VM by a software layer called the hypervisor. The virtual hardware devices provided by the hypervisor map to physical hardware on the host system (e.g. a VM’s virtual hard disk is stored as a file on the host hard drive).
VMs are separate from the hardware for a reason
Virtual machines have several practical applications. Because they separate the virtual operating environment from the physical hardware, VMs are useful for testing potentially malicious applications. Before rolling out an OS update, IT teams can test the OS on a VM to make sure business-critical apps will still work with the update. VMs can also be used by dev teams to test new applications or updates on a range of OSes and versions. If there is a need to run an older application that requires a legacy OS, a VM can be used to run it.
Types of VMs
Broadly speaking, there are two types of virtual machines: process VMs and system VMs.
A process VM, also known as an application VM or managed runtime environment (MRE), is a virtual platform for a single process to run as an application on a host machine. Once the process is finished, the VM is destroyed.
A system VM provides a complete system, so it works just like a bare metal system. Each system VM can run its own OS and multiple applications on that OS. This type of system requires the use of a hypervisor to access the host machine’s hardware resources.
Why should you use virtual machines?
The benefits of VMs include:
- Portability: VMs can easily be moved from one server to another, or even from on-premises hardware into a cloud environment.
- Smaller footprint: Because VMs allow for more efficient use of hardware resources, fewer host machines may be needed to support the same workloads as compared to running them in a physical environment, saving space, energy and costs.
- Faster provisioning: An existing VM can be easily duplicated when a new instance is needed, rather than having to be set up from scratch.
- Security: VMs provide a safe, sandboxed environment, so any malware or other issues affecting one specific VM are not spread to the host system or other VMs.
However, there are some trade-offs to running VMs. The administration and management of a VM environment does require some expertise from IT staff. And having a hypervisor layer and multiple OSes running on the same host system does come with a performance cost. For users who have significant performance demands, latency or resource availability issues in a VM environment may make them hesitant to work on a VM.
Virtual desktops vs virtual machines
There are two primary ways that virtualisation is used by organisations. Companies may have a mix of these two options in their network, depending on their needs.
The first option is virtual desktops. This technology creates a virtual workstation that offers a standard, shared experience across all virtual desktops on a central network. Users can easily access their virtual desktop remotely over the internet and work on it with a consistent experience regardless of the device they use to access it. The desktop interface is limited, and users only have access to specific applications. These workstations do not use virtual hardware resources such as CPU, memory or storage, and they are no longer active when the user logs off.
Virtual machines, on the other hand, offer a customisable virtual PC experience that does provide the user with specific hardware resources. A greater range of applications are available on VMs as compared to virtual desktops. VMs are also isolated from all other VMs on the network, and they continue to exist on the system even after the user signs off. They basically offer the same experience as a desktop PC but without the hardware maintenance.
Uses of virtual machines
Software, OS and application testing: While software developers naturally need to test their applications in different environments, they aren’t the only type of company that may need to do so. Any organisation that is looking to deploy a critical update may wish to test that update on a VM instance and identify possible incompatibilities before deploying it across their organisation. Performing such tests on VMs is simpler and more cost effective than having to test on several individual physical machines.
Running legacy software: Companies may have custom or specialised applications that can’t be run in a modern OS but must still be used by the business. Users who need to run these applications can run them on an old OS from a VM.
Running software designed for a different OS: Some applications are only available for a specific platform. In addition, certain users may have specific needs that cause them to use different hardware to the rest of the organisation, but still need to access company-standard apps. In these cases, a VM can be used to run software designed for a different OS to the version that is native to the host computer.
Running SaaS applications: Software as a service (SaaS) refers to providing software to users through the cloud. SaaS users subscribe to an application and access it over the internet rather than purchasing it once and installing it on their computers. VMs in the cloud are typically used for both the computation for the SaaS applications as well as for delivering them to users.
Data storage and backup: Cloud-based VM services are very popular for storing files because the data can be accessed from anywhere via the internet. Plus, cloud VMs typically offer improved redundancy, require less maintenance and scale more easily than on-premises servers.
Hosted services: Hosting services such as access management and email on cloud VMs is generally faster and more cost-effective than doing so in an on-premises data centre. Running these services on cloud VMs also helps offload maintenance burdens and security concerns to the cloud provider.
HPE virtual machine solutions
We engineer our servers for deep integration with partner operating systems and virtualisation software. We also work closely with our partners to optimise, certify and support their products in various HPE server environments. Our Partner Software portfolio delivers a variety of compelling software and virtualisation solutions for hybrid and multi-cloud environments, in collaboration with software partners including Microsoft, VMware, Red Hat and SUSE.
HPE Infosight delivers AI-powered autonomous operations that ensure your VM environment is always-on, always-fast and always-agile. It collects data from more than 100,000 systems worldwide, uses cloud-based machine learning to diagnose the root cause of issues, and recommends the right remediation through app- and resource-centric modelling. This AI-powered autonomous operation helps drive deep visibility and eliminates guesswork with VM- and data-centric analytics.
Virtualise more business-critical workloads and get the performance, availability and savings you need with HPE data storage solutions for VMs. HPE Nimble Storage provides an agile, always-on, always-fast platform for storage that can power VMs and extend across hybrid cloud. The predictive intelligence of HPE InfoSight ensures your apps are always on and always fast, with visibility from storage to virtual machines and real-time, actionable recommendations for optimisation.
HPE GreenLake offers a consumption-based solution for on-premises VM infrastructure. HPE owns and installs the hardware in your on-premises or colocated data centre, your remote office/branch office (ROBO) or your edge location – with no upfront capital purchase required. Whether you select a Nutanix environment with a choice of hypervisors, or an HPE SimpliVity-based solution, built-in buffer capacity means you’re always prepared for business growth and new business opportunities. | <urn:uuid:1c1907a9-4fb6-4315-b025-6df56a203012> | CC-MAIN-2022-40 | https://www.hpe.com/my/en/what-is/virtual-machine.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335530.56/warc/CC-MAIN-20221001035148-20221001065148-00153.warc.gz | en | 0.922179 | 1,741 | 3.828125 | 4 |
A process sheet is a document that provides all the steps for manufacturing products. Professionals can use it to ensure that each step in their workflow is appropriately completed.
Process sheets are also processed records, production documents, or shop orders. Whatever they’re called, these documents are essential for any manufacturing business to thrive.
Read out the article to learn about the process sheet and its structure.
What is a process sheet?
A process sheet consists of manufacturing instructions for a specific batch, lot, or run.
It describes the operating parameters and settings for the equipment and facilities used and associated tooling or supplies.
It contains part information, routing information, and operation detail information.
We all know a blueprint and how it works, so trust me when I say a process sheet works precisely like a blueprint. This article finds out all about process sheets and how they are done.
A process sheet is a set of instructions that can be followed to achieve the desired goal. It is a manual that involves detailed procedures of tasks.
This rule book focuses on systematically converting the raw materials into the final finished product and completing the job.
Importance of process sheet
It outlines the detailed mechanisms and describes every event in a step-by-step manner. That is where the importance of the manufacturing process sheet lies.
Firms with infrequent productions, a complex production pattern, or elaborate and irregular steps must follow a process chart efficiently.
It is developed way before a firm starts operating. That helps in having control over operations from day one.
The components outlined in a process sheet
A process sheet for machining comprises two units,
A route sheet
A route sheet is like a map that mentions the path and how the production and management activities proceeded.
An operation sheet
An operation sheet mentions the step-by-step programs to follow, outlines all the materials required for each task, and maintains a logbook of items that arrived or are yet to come.
It is an irreplaceable tool. It keeps a tab on the entire work process it has been designed.
It keeps a count on the amount of raw material required, total processing time taken, the number of workers, specifications of the machines functioning, and so on for every separate component involved in the production.
For example, a process sheet documentation for preparing a food item would include the recipe and each component mentioned in the required quantity.
It will also consist of cooking time with a detailed description of each task, the number of workers required for each job, and any remarks needed to be followed by previous experiences.
Structure of process sheet for manufacturing
- Outlining the sequence of work
- Figuring out the assemblies and sub-assemblies involved and drawing a map to figure out where each one fits
- Determining the number of units to be produced
- Machines, tools, and details of other instruments are required, along with further information like their efficiency, capacity, and run times.
- The sequence of these operations is drawn in detail on a map that develops a clear conception of how the raw material is converted into the finished product.
- A certain quality is required throughout the process; hence quality controlling instructions are also important.
- Instructions during packing and handling the products or movement during the processing are also mentioned.
Process sheets are an integral part of the manufacturing process. They can be used as a planning tool to plan wisely and efficiently, help with quality control, or be used simultaneously for various purposes by different departments.
The result is that it creates an economically efficient plan which will succeed through skillful execution.
Get more definitions about process sheets and other ERP-related terms here. | <urn:uuid:f3c28d06-4903-465e-84c5-4444ea8045a7> | CC-MAIN-2022-40 | https://www.erp-information.com/process-sheet.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337322.29/warc/CC-MAIN-20221002115028-20221002145028-00153.warc.gz | en | 0.927916 | 757 | 3.296875 | 3 |
5 Tips to Spot a Phishing Email
Phishing emails are an attempt to obtain sensitive or personal information such as usernames, passwords, financial or personal details by scammers who have disguised themselves as a legitimate business or person. The intent is to use this information for illegal purposes. Phishing has been around for more than 20 years, first coined as a phrase somewhere around 1996 by hackers stealing America Online (yes, AOL) information. While the hacking world is constantly changing and evolving its methods to fool the end user, below are 5 things to look for that are immediate red flags an email is a phish.
1. Obvious Grammar and Formatting Errors
While this seems like it should be an obvious clue that the email is a fake, thousands of end users fall victim to emails addressed to “Dear” or “Dear Customer” with no other identifier in the greeting. The unfortunate result of a world desensitized to the personal touch associated with human interaction. Users don’t seem to mind that companies won’t always remember a customer’s name.
Phishing emails also often contain different fonts and font sizes from paragraph to paragraph or even sentence to sentence. They may also lack appropriate punctuation or contain misspelled words. In some emails the phisher will also frequently use the word “kindly” as in “Kindly reply by the end of the day with the information requested”.
Often the scammer resides in a country outside the target’s residence. The scammers just aren’t familiar with the language or grammar of their target and this comes through in their poorly written email. A strategy behind this? Selecting gullible targets means a higher likelihood the scammer will get the information they need. In other words, if the end user doesn’t notice misspelled words, inappropriate or missing punctuation and varied font, they may be more likely to click a link or attachment intended to harm their credentials.
2. Claims That There is a Problem or Reward with an Associated Sense of Urgency
Phishing emails will regularly claim that there is a problem with an account, an overdue invoice or that suspicious activity has been noted. They will often note that urgent action is required to fix the issue. The diligent end user is immediately confused or scared and, in an effort to clear up their good name, quickly enters personal information to correct the error.
Eligibility for free items are also good bait for the phisher. “Click Here to Claim Your Free Pizza” is a good one especially when sent out on a Friday or just before a holiday. Gift cards from popular web retailers are also prime bait. It’s become so problematic that large online retailers like Amazon have designed entire web pages to help their consumers spot fakes. Often the supposed reward will expire if not claimed immediately or within a short time frame.
3. There’s a Suspicious Attachment or Link
A phishing email may contain fake invoices, attachments or links. These attachments or links make it easy for the end user to enter information or payment methods.
Phishing emails frequently are impregnated with malware or ransomware that, once a link or attachment is clicked, will download viruses to the user’s computer. Some viruses will enable the hacker to sit silently behind the scenes (referred to as Advanced Present Threats) and gather data: user patterns, keystrokes and other personal information. They gather this data over several days, weeks or months until the hacker deems it safe to execute their attack. This delay is strategic on behalf of the cybercriminal in that the user will likely not remember the suspicious email they clicked on that could be associated with their hacked bank account. According to the Verizon Data Breach Investigations Report, 30% of phishing messages get opened by target users and 12 % of those users click on the malicious attachment or link. These numbers tell us that phishing methods work, time and time again.
4. There’s Something Off in The Web or Email Address of The Sender
Hackers will try to mimic a legitimate web or email address as closely as possible to fool the end user. Unless the end user looks closely, the bogus information is easily missed. An example provided by Stay Safe Online would be @airbnb.work as opposed to @airbnb.com (notice the .work opposed to the .com). Hackers will sometimes add an additional letter, number or symbol to a legitimate URL or email that blends in so the phishing email is easily missed.
5. The Signature Lacks Detail
Legitimate emails will typically contain the information you need to contact the sender. Many phishing email attempts will appear to come from an internal domain, a CEO or CFO. These emails can be potentially devastating to SMB’s as the target is usually someone in HR or Accounting who is eager to respond and please their superior. End users should be on the look out for an email from a high-level executive in their own organization who is sending them communication with an informal or absent signature.
You Received a Phishing Email, Now What?
- If the email came from someone within your organization, or someone you know, pick up the phone and call the sender (don’t reply to the email).
- If the email contains a link, copy and paste the link into isitphishing.ai. This will help you determine if the link is malicious.
- If the email contains an attachment, don’t open it. Think the attachment actually might be legit? Go to the sender’s trusted website directly (by entering the address in your browser manually) and download the attachment.
- Forward it along to your IT support team or provider for review.
How to Stop Phishing Emails
The best way to stop phishing emails is to utilize an effective email filtering system. Filtering inbound and outbound email is essential to protecting not only your business’s confidential information but also its reputation. Hate getting spammed? Your customers will hate getting spammed by you via an outbound email hack even more.
Train users with Security Awareness Training and test them via simulated phishing. According to Ponemon Institute’s 2017 State of Cybersecurity in SMB report, 54% of data breaches were caused by a negligent employee or contractor. Cybersecurity training doesn’t have to be expensive or boring. For SMBs that utilize a managed IT services provider, ask your provider if Security Awareness Training is included in your contract. Testing employees also helps overcome the perpetual “Rules don’t apply” or “That stuff only happens to other people” mentality. And managers of employees who regularly catch the test phishing attempts can use this for employee recognition!
Phishing Isn’t Going Away
Studies show cyber attacks year over year are becoming more targeted, more severe in terms of negative consequences and more sophisticated. The profitability of these attacks and the anonymity the dark web affords cybercriminals mean SMBs must continue to stay on top of cyber defense.
Partnering with an experienced IT support provider like Astute Technology Management can ensure your business maintains a secure network via industry best practices. Serving Columbus, Ohio and Cincinnati, Ohio since 1998 with industry leading partnerships in the cybersecurity industry means your business will stay up and running day in and day out. | <urn:uuid:633b9526-b592-4e60-82fb-df9bd77634ff> | CC-MAIN-2022-40 | https://www.astutetm.com/2019/07/5-tips-to-spot-a-phishing-email/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338280.51/warc/CC-MAIN-20221007210452-20221008000452-00153.warc.gz | en | 0.932244 | 1,544 | 2.578125 | 3 |
About VXLAN Interfaces
VXLAN provides the same Ethernet Layer 2 network services as VLAN does, but with greater extensibility and flexibility. Compared to VLAN, VXLAN offers the following benefits:
Flexible placement of multitenant segments throughout the data center.
Higher scalability to address more Layer 2 segments: up to 16 million VXLAN segments.
This section describes how VXLAN works. For detailed information, see RFC 7348.
VXLAN is a Layer 2 overlay scheme on a Layer 3 network. It uses MAC Address-in-User Datagram Protocol (MAC-in-UDP) encapsulation. The original Layer 2 frame has a VXLAN header added and is then placed in a UDP-IP packet.
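As a rough illustration of that MAC-in-UDP layout, the short Python sketch below builds the 8-byte VXLAN header defined in RFC 7348 and prepends it to an inner Ethernet frame. This is not ASA code: the helper names and values are illustrative, and the outer IP and UDP headers are assumed to be added by the sending VTEP.

import struct

VXLAN_PORT = 4789  # default destination UDP port; user configurable on the ASA

def vxlan_header(vni):
    # 8-byte VXLAN header: flags byte with the I bit set, reserved bits,
    # the 24-bit VNI, and a final reserved byte (RFC 7348).
    flags = 0x08 << 24            # I flag in the first 32-bit word
    return struct.pack("!II", flags, vni << 8)

def encapsulate(inner_frame, vni):
    # The VXLAN header plus the original Layer 2 frame become the UDP payload.
    # The outer headers (source IP = initiating VTEP, destination IP = terminating
    # VTEP or multicast group, destination UDP port 4789) are added on top of this.
    return vxlan_header(vni) + inner_frame

payload = encapsulate(b"\x00" * 64, vni=5000)   # wrap a dummy 64-byte inner frame
assert len(payload) == 8 + 64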
VXLAN Tunnel Endpoint
VXLAN tunnel endpoint (VTEP) devices perform VXLAN encapsulation and decapsulation. Each VTEP has two interface types: one or more virtual interfaces called VXLAN Network Identifier (VNI) interfaces to which you apply your security policy, and a regular interface called the VTEP source interface that tunnels the VNI interfaces between VTEPs. The VTEP source interface is attached to the transport IP network for VTEP-to-VTEP communication.
The following figure shows two ASAs and Virtual Server 2 acting as VTEPs across a Layer 3 network, extending the VNI 1, 2, and 3 networks between sites. The ASAs act as bridges or gateways between VXLAN and non-VXLAN networks.
The underlying IP network between VTEPs is independent of the VXLAN overlay. Encapsulated packets are routed based on the outer IP address header, which has the initiating VTEP as the source IP address and the terminating VTEP as the destination IP address. The destination IP address can be a multicast group when the remote VTEP is not known. The destination port is UDP port 4789 by default (user configurable).
VTEP Source Interface
The VTEP source interface is a regular ASA interface (physical, redundant, EtherChannel, or even VLAN) with which you plan to associate all VNI interfaces. You can configure one VTEP source interface per ASA/security context.
The VTEP source interface can be devoted wholly to VXLAN traffic, although it is not restricted to that use. If desired, you can use the interface for regular traffic and apply a security policy to the interface for that traffic. For VXLAN traffic, however, all security policy must be applied to the VNI interfaces. The VTEP interface serves as a physical port only.
In transparent firewall mode, the VTEP source interface is not part of a BVI, and you do configure an IP address for it, similar to the way the management interface is treated.
VNI interfaces are similar to VLAN interfaces: they are virtual interfaces that keep network traffic separated on a given physical interface by using tagging. You apply your security policy directly to each VNI interface.
All VNI interfaces are associated with the same VTEP interface.
VXLAN Packet Processing
Traffic entering and exiting the VTEP source interface is subject to VXLAN processing, specifically encapsulation or decapsulation.
Encapsulation processing includes the following tasks:
The VTEP source interface encapsulates the inner MAC frame with the VXLAN header.
The UDP checksum field is set to zero.
The outer frame source IP is set to the VTEP interface IP.
The outer frame destination IP is determined by a remote VTEP IP lookup.
Decapsulation: the ASA only decapsulates a VXLAN packet if all of the following are true (see the sketch after this list):
It is a UDP packet with the destination port set to 4789 (this value is user configurable).
The ingress interface is the VTEP source interface.
The ingress interface IP address is the same as the destination IP address.
The VXLAN packet format is compliant with the standard.
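The following sketch mirrors those four checks in Python, with plain dictionaries standing in for the parsed packet and the VTEP source interface configuration; the field names are assumptions for illustration, not ASA data structures.

VXLAN_PORT = 4789  # default; user configurable

def should_decapsulate(pkt, vtep_ifc):
    # pkt and vtep_ifc are plain dicts standing in for a parsed packet and the
    # VTEP source interface; the field names are illustrative only.
    return (
        pkt["protocol"] == "udp"
        and pkt["dst_port"] == VXLAN_PORT            # expected VXLAN UDP port
        and pkt["ingress_ifc"] == vtep_ifc["name"]   # arrived on the VTEP source interface
        and pkt["dst_ip"] == vtep_ifc["ip"]          # addressed to that interface
        and pkt["vxlan_header_valid"]                # format compliant with the standard
    )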
When the ASA sends a packet to a device behind a peer VTEP, the ASA needs two important pieces of information:
The destination MAC address of the remote device
The destination IP address of the peer VTEP
There are two ways in which the ASA can find this information:
A single peer VTEP IP address can be statically configured on the ASA.
You cannot manually define multiple peers.
The ASA then sends a VXLAN-encapsulated ARP broadcast to the VTEP to learn the end node MAC address.
A multicast group can be configured on each VNI interface (or on the VTEP as a whole).
The ASA sends a VXLAN-encapsulated ARP broadcast packet within an IP multicast packet through the VTEP source interface. The response to this ARP request enables the ASA to learn both the remote VTEP IP address along with the destination MAC address of the remote end node.
The ASA maintains a mapping of destination MAC addresses to remote VTEP IP addresses for the VNI interfaces.
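Conceptually, that mapping behaves like a per-VNI lookup table with the multicast group as a fallback for destinations that have not been learned yet. The addresses in the Python sketch below are invented purely for illustration.

# Destination MAC -> remote VTEP IP, kept per VNI. Addresses are invented.
vtep_map = {
    1: {"00:50:56:aa:bb:01": "10.10.10.2"},   # VNI 1
    2: {"00:50:56:aa:bb:02": "10.10.10.3"},   # VNI 2
}

def remote_vtep_for(vni, dst_mac, mcast_group):
    # Return the outer destination IP for an encapsulated frame. If the MAC has
    # not been learned yet, fall back to the VNI's multicast group so the
    # encapsulated ARP broadcast can discover the peer.
    return vtep_map.get(vni, {}).get(dst_mac, mcast_group)

print(remote_vtep_for(2, "00:50:56:aa:bb:02", "236.0.0.100"))   # 10.10.10.3
print(remote_vtep_for(3, "00:50:56:aa:bb:99", "236.0.0.100"))   # falls back to the multicast group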
VXLAN Use Cases
This section describes the use cases for implementing VXLAN on the ASA.
VXLAN Bridge or Gateway Overview
Each ASA VTEP acts as a bridge or gateway between end nodes such as VMs, servers, and PCs and the VXLAN overlay network. For incoming frames received with VXLAN encapsulation over the VTEP source interface, the ASA strips out the VXLAN header and forwards it to a physical interface connected to a non-VXLAN network based on the destination MAC address of the inner Ethernet frame.
The ASA always processes VXLAN packets; it does not just forward VXLAN packets untouched between two other VTEPs.
VXLAN Bridge (Transparent Mode)
When you use a bridge group (transparent firewall mode), the ASA can serve as a VXLAN bridge between a (remote) VXLAN segment and a local segment where both are in the same network. In this case, one member of the bridge group is a regular interface while the other member is a VNI interface.
VXLAN Gateway (Routed Mode)
The ASA can serve as a router between VXLAN and non-VXLAN domains, connecting devices on different networks.
Router Between VXLAN Domains
With a VXLAN-stretched Layer 2 domain, a VM can point to an ASA as its gateway while the ASA is not on the same rack, or even when the ASA is far away over the Layer 3 network.
See the following notes about this scenario:
For packets from VM3 to VM1, the destination MAC address is the ASA MAC address, because the ASA is the default gateway.
The VTEP source interface on Virtual Server 2 receives packets from VM3, then encapsulates the packets with VNI 3’s VXLAN tag and sends them to the ASA.
When the ASA receives the packets, it decapsulates the packets to get the inner frames.
The ASA uses the inner frames for route lookup, then finds that the destination is on VNI 2. If it does not already have a mapping for VM1, the ASA sends an encapsulated ARP broadcast on the multicast group IP on VNI 2.
The ASA must use dynamic VTEP peer discovery because it has multiple VTEP peers in this scenario.
The ASA encapsulates the packets again with the VXLAN tag for VNI 2 and sends the packets to Virtual Server 1. Before encapsulation, the ASA changes the inner frame destination MAC address to be the MAC of VM1 (multicast-encapsulated ARP might be needed for the ASA to learn the VM1 MAC address).
When Virtual Server 1 receives the VXLAN packets, it decapsulates the packets and delivers the inner frames to VM1. | <urn:uuid:84f3ff97-54f7-425a-b9c2-f260de0fb695> | CC-MAIN-2022-40 | https://www.cisco.com/c/en/us/td/docs/security/asa/asa96/asdm76/general/asdm-76-general-config/interface-vxlan.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334620.49/warc/CC-MAIN-20220925225000-20220926015000-00354.warc.gz | en | 0.876053 | 1,679 | 2.609375 | 3 |
There are some things that require a permanent internet connection. One cannot listen to internet radio, watch YouTube or take part in a video conference, without a permanent, live, high-quality internet connection. But unlimited broadband access is not something people can take for granted.
With a large proportion of the country’s population at home, everyone is now relying 100% on their home broadband to school their children and get office work done. There are some parts of the country that still lack an adequate level of broadband. According to the Office for National Statistics, in January to February 2020, 96% of households in Great Britain had internet access, up from 93% in 2019 and 57% in 2006 when comparable records began. What happens to the 4% of households that do not have the internet?
In the last few weeks, the digital divide has been highlighted once again with kids no longer able to attend school. Sadly, it is the poorest families that are hardest hit by the lockdown. Without home broadband, these families face the prospect of either using extortionate mobile internet charges to ensure their children can take lessons online, or go without something essential.
Those fortunate to have a job that has enabled them to work from home during the lockdowns, are now not only balancing homeschooling with their work – they are also time-sharing their home internet connection. Who gets priority? Online lesson or a business meeting via a video conference call?
Just like natural resources, the internet is not unlimited. It certainly is not set up to enable every school to stream their own online lessons to every child. It works very well as a broadcast channel, enabling a few content distributors to reach many, many subscribers. This is Netflix’s model and how YouTube runs.
In March during the UK’s first lockdown, Joe Wicks became a household name with his live YouTube fitness classes. Why limit this to fitness? Can teachers leverage existing online material more effectively, rather than try to recreate a school classroom online? Can a school teacher act as a curator of online educational content? How can BBC Bitesize or the mass of classes readily available on YouTube supplement remote learning?
Beyond the debate around online lessons, accounting for limited internet access is something business and IT leaders need to consider. Is it absolutely necessary to maintain connectivity for the functionality of their applications? What happens when connectivity is poor or intermittent? What happens if, for some reason, the connection goes down? | <urn:uuid:9f737c83-6d75-4ac0-9bdc-aa9378a2acf1> | CC-MAIN-2022-40 | https://www.computerweekly.com/blog/Cliff-Sarans-Enterprise-blog/Treat-the-internet-like-a-natural-resource | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334620.49/warc/CC-MAIN-20220925225000-20220926015000-00354.warc.gz | en | 0.955604 | 500 | 2.59375 | 3 |
Although physical storage devices of various sizes and capacities have been used until now, this is fundamentally changing with cloud technology. Cloud-based services present no physical device to the user and can be used for many different purposes, storage chief among them. Supporting various innovative technologies including the Internet of Things (IoT), automation systems, and Artificial Intelligence (AI), this storage model can be quite vulnerable to cyberattacks if adequate precautions are not taken. You can find out more below about achieving maximum security while you benefit from cloud technologies across your digital infrastructure.
The term cloud security describes the entirety of the processes used to protect the integrity of cloud-based applications, data, and virtual infrastructure. It applies to on-demand solutions as well as to all cloud deployment models and services. Generally, for cloud-based services, the provider is responsible for securing the underlying infrastructure along with the applications and data in the cloud. Service providers therefore need to remain vigilant about security, follow developments in this field, and apply them when required.
Cloud deployment models matter for understanding cloud security. There are four fundamental deployment models, public, private, hybrid, and multi-cloud, and the risk level of cloud technology varies depending on the model. For instance, while the risk level is fairly low in public clouds such as Microsoft Azure or Google Cloud, it is higher for private deployments that are reserved for a single enterprise and accessed by many users within it. Hybrid or multi-cloud deployments that combine several models and cloud services pose greater risks still. Regardless of their inherent risk level, cloud-based services always require effective precautions, since they are constantly targeted by malicious third parties and cyberattacks.
Privileged Access Management (PAM) brings together current, comprehensive defense strategies against malicious third parties who execute cyberattacks with increasing efficiency and ever greater resources. Constantly updated and evolving, Privileged Access Management is effective at protecting your data, including in the cloud. By combining a privileged session manager, a dynamic password controller, two-factor authentication (2FA), dynamic data masking, and privileged task automation against current cyberattack scenarios, this multi-tier safety approach becomes more powerful, comprehensive, and flexible as it incorporates innovations in cloud technology.
Since it is quite difficult to protect against the vulnerabilities and risks of cloud technologies with standard safety precautions alone, data access security should be established via innovative approaches such as Privileged Access Management. This is one of the most effective ways to create a stronger security ecosystem for digital services such as cloud technologies. Steps to establish cloud security via Privileged Access Management include controlling and recording privileged sessions, rotating credentials with a dynamic password controller, enforcing two-factor authentication, masking sensitive data dynamically, and automating routine privileged tasks.
Such Privileged Access Management (PAM) steps ensure efficient protection of cloud technologies, which are hard to protect with legacy security software or firewalls alone. You can also take these PAM steps with Krontech’s Privileged Access Management suite Single Connect, where common precautions are insufficient to ensure that processes run smoothly and safely. | <urn:uuid:80373e61-a29d-4b64-9f03-9b9368bd636b> | CC-MAIN-2022-40 | https://krontech.com/-privileged-access-management-in-cloud-security | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334992.20/warc/CC-MAIN-20220927064738-20220927094738-00354.warc.gz | en | 0.926009 | 625 | 2.734375 | 3 |
Sep 21 2021
With connectivity increasing rapidly, alongside the need for low-latency “real-time” data processing, it is no surprise that many companies are transitioning to the edge. However, with this transition comes a strong requirement for edge computing hardware to be built for the specific conditions of each use case, each with its own hardware requirements.
Edge computing hardware refers to the physical components and the surrounding services that are needed to run an application at the edge. These components include servers, processors, switches & routers and the end device. To learn about other parts of the edge value chain, use our Edge Ecosystem Tool.
The processing subsystem is composed of the CPU, GPU, and memory/storage. The CPU determines the performance of an edge computing system; a higher number of CPU cores means that the system can handle more workloads and complete tasks at a higher speed. GPUs are used for hardware acceleration and allow high-performance computing to occur at the edge. GPUs also allow edge computers to store, process, and analyse large volumes of data. More recently, processors are being optimized and purpose-built for edge and IoT, with built-in AI accelerators and 5G support.
Servers are the hardware that run the compute at an edge location, within which a processor resides. Servers can be commercial off-the-shelf, or specialised (depending on the processor). Servers may be more or less suited for different use cases based on their specifications and location. These include CDN edge servers, network edge servers and on-premise edge servers. Find out more about edge servers in our article: What is an edge server?
An edge router is a device that is deployed to act as a gateway between networks, in addition to connecting local networks to the internet or a WAN. An edge switch (also known as an access node) is a component located at the meeting point of two separate networks that connects end-user local area networks to internet service provider networks.
Edge computing has a multitude of applications that operate in different conditions and locations, and their hardware requirements differ by use case and industry. For example, autonomous vehicles need real-time decision making to control the vehicle, so high-performance hardware is a priority given the large amounts of data being processed in real time; at the same time, limited space in the vehicle makes hardware size a constraint.
Additionally, for industrial uses, edge computing hardware should be rugged and able to withstand shocks, vibrations, extreme temperatures, and dust due to exposure to harsh environments. To fulfil this, a fanless, closed-system design could be used: with no vents needed to cool the system, dust and dirt are kept out of the computer, preventing damage. To prevent vibration damage, a “cableless design” could be used, with fewer moving parts and a lower chance of a “loose connection” causing a defect in the system. Data storage would be optimised by using solid-state drives (SSDs) – silicon chips – instead of hard drives (HDDs) – spinning disks – as they allow for faster data transfer and storage. There is also less chance of data loss in accidental scenarios, as fewer moving parts mean the system is less susceptible to damage from vibrations and shocks.
Due to a large amount of data being stored and processed in edge servers, they tend to heat up rapidly and therefore effective cooling systems are required. Air cooling is the most common system, however liquid cooling is also increasingly used in high-performance machines due to its greater heat capturing capacity. There are also initiatives to create sustainable and energy efficient powering solutions for edge computing hardware. Over 40% of data centre energy consumption goes to cooling systems, and there is a push for more efficient cooling systems and renewable power sources given the large power consumption of edge computing hardware. Find out more about reducing power consumption in our article: Edge computing – Changing the balance of energy in networks
There are many companies developing products and solutions in the edge hardware space, across the value chain, from processors to servers and supporting services such as power and cooling. Below is a small sample of some of the new and existing ecosystem players and their innovations in edge hardware. These companies have been taken from our 60 Edge Companies article.
Intel is an American multinational chipmaker that develops and manufactures primarily processors, but they also offer a range of vision processing units (VPUs), field-programmable gate arrays (FGPAs), networking & connectivity products in addition to a line of SSD products. Intel has a large commitment to edge through its recent acquisitions of edge hardware providers such as Habana Labs and Movidius. Additionally, the company is collaborating with other companies involved in the edge space, such as Red Hat, to create more innovative products such as a workload-optimised data node configuration for Red Hat Open Shift using Intel Xeon processors and Optane. Recently in a business reshuffle under the new CEO Pat Gelsinger, Intel further iterated its commitment to edge by creating a core business unit for edge computing. In the future, Intel’s Managing Director in India believes that edge computing will be “more and more prominent”.
Hewlett Packard Enterprise (HPE) is an American multinational technology company that offers a wide variety of edge solutions including edge services, edge security and converged edge systems. It has specialised edge products, for example hardware converged edge systems that are ruggedised to cater for a variety of harsh operating environments. Additionally, it provides standalone edge server blades – compact devices that distribute and manage data in a network. HPE is committed to edge, with CEO Antonio Neri stating “the enterprise of the future is edge-centric, cloud-enabled and data-driven”, whilst creating an “intelligent edge practice” for the company. Furthermore, HPE seeks to have a “partnership first” attitude to achieve its “edge to cloud vision” through strong collaboration and innovation.
Iceotope is a UK-based edge facility, power, and cooling provider. Its main offering is chassis-level liquid cooling technology. The company cites space constraints, rising chip & rack densities, water use regulations and the societal pressure to reduce energy consumption as drivers of its mission to develop high-quality cooling technologies. It recently partnered with Lenovo, Schneider Electric and Avnet to take HPC to the edge with Lenovo's ThinkSystem high-performance servers. Iceotope is committed to sustainability and was named one of the Sunday Times BGF 10 Green Tech to Watch UK Companies. In 2021, it is focusing on building more sustainable data centres by saving water and using more efficient liquid cooling solutions.
NGD Systems is an American computational storage provider. It created the world's first computational storage device and recently moved its entire portfolio of computational storage drives (NVMe SSDs) for the edge market into production. It is increasing partnerships with other edge companies, for example with Trenton Systems, a rugged cyber-secure system provider, to collaborate and create ruggedised, high-capacity computational storage devices (CSDs). The company is seeking more partnerships and continuing to develop new products – especially for organisations with processing needs or public clouds wishing to offer edge services. The CEO, Nader Salessi, said, “NGD is solving issues that no other legacy storage architecture can address”.
Dell is a large American multinational technology company that has significant investments in the edge market. The portfolio is segmented into three types of edge computing hardware: Mobile Edge, Enterprise Edge and the IoT Edge for different user sizes and use cases. Dell is committed to edge computing with its UK vice president Rob Tomlin stating, “edge is the future, and those that fail to embrace it now are likely to be left behind”. The CEO, Michael Dell, made it clear that the company’s investment in this industry is “accelerating and not going to slow down”.
Originally published on https://stlpartners.com/edge_computing/how-edge-computing-will-impact-hardware/ | <urn:uuid:05310676-45f6-4ba5-a1e1-084a1b7c7191> | CC-MAIN-2022-40 | https://ngdsystems.com/how-edge-computing-will-impact-hardware/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335257.60/warc/CC-MAIN-20220928145118-20220928175118-00354.warc.gz | en | 0.931847 | 1,683 | 3.359375 | 3 |
Wherever there is data, structured or otherwise, there's a good chance that predictive modeling software and analytics can process it. The results vary, but people can learn a lot about certain phenomena, as well as possibly gain an understanding that may be different from what came before. Businesses can use these insights to make informed decisions about their strategy, including the products they sell and the marketing campaigns they execute. One area that will likely benefit greatly from the use of big data is the study of history. As unstructured data becomes compilable and old records get digitized, a better understanding of what happened in the past becomes apparent.
Understanding the past in context
While historians cannot compile every detail to create a clear picture of what happened in terms of the past, they can focus on different areas to create context for critical moments in history. For example, we may never know exactly what may have spurred a politician to make one decision or another, but we can gather data to understand the position he was in.
Dataversity examined the context of history by performing analytics on crucial points to assess U.S. presidents' job performance over the last 70 years, dating back to the Harry Truman administration. The focal point was creating jobs in the economy. The goal in this situation was to step away from the hagiographical viewpoints of individuals and look at the raw data to see how well they performed in reality. Many interesting observations appear, providing a snapshot of history that isn't often considered. For example, President Truman's term had a very volatile job market, including the largest shedding of jobs in a single month in 1945 – likely coinciding with the end of World War II. Other presidents had more stable job patterns in accordance with the state of the economy.
Creating a visual history
Another area where data analytics could play a major role is historiography, or the graphical representation of history. With data visualization, historians have a new and improved means of explaining the past in a way that makes sense. David J. Staley of the American Historian refers to its usage as "distant reading," since it requires looking at the larger picture to find patterns in the details. He demonstrated this by creating 3-D printouts of graphs showing the frequency of certain terms in key publications on a yearly basis. Such information can inform viewers of when certain political movements or historical events began taking shape. With these tools, a better understanding of history as a whole surfaces, thanks to big data. | <urn:uuid:3cfd418a-72b1-4915-a7da-fa02251a1f91> | CC-MAIN-2022-40 | https://avianaglobal.com/what-is-data-analytics-doing-for-history-studies/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335573.50/warc/CC-MAIN-20221001070422-20221001100422-00354.warc.gz | en | 0.961557 | 506 | 3.125 | 3 |
A new University of Utah study released Monday provides more fuel for the conviction that sending and receiving text messages while driving affects concentration and reaction times. However, more potential digital distractions lie down the road — as shown by another Monday announcement: Ford Motor said its second-generation Sync service will turn cars into rolling WiFi hotspots in 2010.
Granted, the next-level Sync entertainment and connectivity qualities are designed for everybody in a car except the driver — unless the car is at a stop. Sync’s first generation has also drawn raves for features that allow drivers hands-free access to mobile phone services and stored music.
Yet drivers’ advocates are urging tech companies to think about all the consequences of mobile Web innovation, so that safety and wireless convenience don’t end up on a collision course.
Nineteen states have bans on texting while driving. Increased competition for a driver’s attention awaits, as more companies invest in the concept of mobile connectivity to boost not only their own fortunes, but those of a struggling automotive industry as well.
Putting the Brakes on Car Texting
The study, led by University of Utah psychologists Frank Drews and Dave Strayer and released in the current issue of the journal Human Factors, focused on a group of 40 test subjects averaging 21 years of age who used mobile devices in driving simulations.
“Analysis of driving performance revealed that participants in the dual-task condition responded more slowly to the onset of braking lights and showed impairments in forward and lateral control compared with a driving-only condition,” the study’s authors concluded. “Moreover, text-messaging drivers were involved in more crashes than drivers not engaged in text messaging.”
Driving while talking on a cell phone is bad enough, but those who texted while they drove changed their driving habits in much more negative ways, the researchers reported. The test subjects increased their following distance from the car ahead of them as they texted, almost as if they were trying to create a “buffer zone”; hence, they were aware of the inherent dangers in texting while driving. Yet that was not enough to counteract the impact of texting on braking distance and reaction times, and slowing down to create a buffer may have increased the potential for being rear-ended by cars following them.
There’s texting, and then there’s reading reply texts. Both activities increased reaction times, but reading a text resulted in even longer braking times, the study concluded. In this situation, though, the researchers realized an opportunity for technology to ride to the rescue.
“For example, systems reading messages out loud could support drivers,” they wrote. “However, if the impairment associated with reading text messages is a result of the externally controlled event of receiving a text message, then suppressing reception of messages while operating a vehicle might be a better-suited strategy to mitigate the impact of driver distraction.”
More research on this subject is useful, Justin McNaull, AAA’s director of state relations, told TechNewsWorld.
There is a difference between simulator research and real-world testing, he acknowledged, but “clearly, the range of physical and fundamental distractions makes [texting] completely inappropriate for driving.”
The AAA has been active in getting state legislatures to implement bans on adults and teens texting while driving, but McNaull said its members know that it can’t halt the march of innovation.
“We don’t want to push technology out of the vehicle that can be useful for passengers,” he commented. “We’re concerned that these technologies be developed and deployed responsibly by the automobile manufacturers and the device manufacturers. It’s important that when this equipment is developed that we protect ourselves from ourselves.”
Services like the next generation of Ford’s Sync auto connectivity promise mobile WiFi with the ease of a USB modem plugged into a port on a car’s dashboard, creating a wireless zone. Ford, which has its first-generation Sync in 13 of its current models, says more will get the second generation of the service sometime next year.
A rolling hotspot can mean more updated traffic and weather data streamed to a car’s driver.
“We’re all for making people more productive behind the wheel,” McNaull observed. “We just want to make sure that efficiency doesn’t come at the cost of crashes and injuries and lives.”
The Future of Mobile Connectivity
Any parent who has tried to keep a young child strapped in a car seat entertained for a long road trip knows the value in mobile connectivity, said Ben Bajarin, director of the consumer technology practice at Creative Strategies.
“We know in terms of media delivery and other things, the car’s always been an attractive target,” Bajarin told TechNewsWorld. “Most people had assumed something like satellite or cable service would deliver TV, instant access to on-demand movies and games — making the car some type of portable entertainment platform. WiFi may be the way it’s delivered.”
Yet mobile connectivity has to be more about just accessing the Internet from the car, according to Bajarin. More people, especially younger people, will already have smartphones and other Web-enabled mobile devices for that purpose when they buckle up. Perhaps the screens they bring with them into a car are the windows to a richer media experience enabled by the rolling hotspot concept.
“Maybe your laptop, or one of these tablet PCs, as long as it’s got WiFi — then it becomes more interesting,” said Bajarin. “It can go from home to car to hotel, whatever. There are more uses to it.”
Technology companies must keep the safety issues in mind when developing new concepts for cars, he agreed. Yet a richer data pipe into automobiles can also enhance safety — for example, by improving vehicle maintenance and providing real-time traffic data. | <urn:uuid:17a7735e-394b-4236-9c1e-0617d58c7a93> | CC-MAIN-2022-40 | https://www.linuxinsider.com/story/automobiles-digital-technology-and-safety-its-complicated-68971.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337446.8/warc/CC-MAIN-20221003231906-20221004021906-00354.warc.gz | en | 0.95152 | 1,251 | 2.765625 | 3 |
The fundamentals of understanding color
Color theory is both the science and art of color. It explains how humans perceive color; how colors mix, match or clash; the subliminal (and often cultural) messages colors communicate; and the methods used to replicate color.
So why should you care about color theory as an entrepreneur? Why can’t you just slap some red on your packaging and be done with it? It worked for Coke, right?
Color theory will help you build your brand. And that will help you get more sales. Let’s see how it all works.
Color is perception. Our eyes see something (the sky, for example), and data sent from our eyes to our brains tells us it’s a certain color (blue). Objects reflect light in different combinations of wavelengths. Our brains pick up on those wavelength combinations as a phenomenon we call color.
When you’re strolling down the soft drink aisle scanning the shelves filled with 82 million cans and bottles and trying to find your six-pack of Coke, what do you look for? The scripted logo or that familiar red can?
People decide whether or not they like a product in 10 seconds or less. 90% of that decision is based solely on color. So, a very important part of your branding must focus on color. | <urn:uuid:06f9516f-2b03-4ad8-bb75-96e146fba70d> | CC-MAIN-2022-40 | https://www.madwolf.com/Substance/MadWolf-Substance/Blogs/-the-fundamentals-of-understanding-color | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337446.8/warc/CC-MAIN-20221003231906-20221004021906-00354.warc.gz | en | 0.905284 | 275 | 2.609375 | 3 |
Series: IBM Mainframe Performance and Capacity Management
Z Performance – Introduction to Mainframe Performance
The Introduction to Mainframe Performance course provides the learner with a core understanding of what performance measures are required when managing a mainframe environment. Measuring the usage of critical resources is discussed, and potential issues that can affect the performance of tasks running in a z/OS system are presented.
Z Performance – z/OS I/O Performance and Capacity Planning
In this course you will examine the I/O process and see how I/O performance problems are detected, and the metrics used to determine where a problem may exist. Methods used to improve I/O performance are also discussed.
Z Performance – z/OS Performance Tools and Software Pricing
In this course you will discover how SMF is used to capture important system activity and store it as specific record types. You will see how these records are structured and the utilities used to convert their content into a readable format. Commands used to display, configure and manipulate SMF are covered, as well as the process of archiving SMF records and creating your own SMF records. Following this, an introduction to software licensing is presented, describing common licensing models and the metrics they use to determine the cost to the customer. This information will assist the user in determining ways to minimize software licensing costs.
Z Performance – z/OS Workload Manager
The Z Performance - z/OS Workload Manager course provides the learner with steps describing how WLM components are created and linked, to form a WLM policy. The course then progresses to discussing in detail various workloads and the goals and importance that should be assigned to them. This is followed by an overview of performance information that can be obtained through SMF records, MVS commands, and SDSF. | <urn:uuid:b1251a6f-5707-4dfa-adda-10d15d5afd0b> | CC-MAIN-2022-40 | https://interskill.com/series/ibm-mainframe-performance-and-capacity-management/?noredirect=en-US | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337595.1/warc/CC-MAIN-20221005073953-20221005103953-00354.warc.gz | en | 0.904788 | 366 | 2.65625 | 3 |
Identity theft is far more prevalent than most Americans suspect. A 2018 online survey conducted by The Harris Poll reveals that nearly 60 million Americans have been impacted by identity theft at some point. Additionally, almost half anticipate that identity theft will prompt personal financial suffering in the next year.
The prevalence of identity theft is shockingly high, in part due to increasingly common data breaches that have recently hit major companies such as Equifax. However, pinning incidents to a single breach can be tricky, as a variety of other factors contribute to the problem.
The good news? You are not completely helpless. Yes, identity theft is common, and data breaches can increase your risk of being targeted. However, by employing a proactive approach to protection, you can minimize the potential for personal devastation.
Below, we’ve outlined a few of the most effective options for preventing identity theft.
1. Use Security Software for All Phones and Computers
If you’ve yet to install security software on your computer, it’s time to get started. This could be the easiest and most critical step you take towards protecting yourself online. Strong software can provide several layers of protection against devastating attacks such as phishing, viruses, spyware, and malware.
Your smartphone also warrants extensive security protocols. According to the Federal Communications Commission, one in 10 smartphone users have their devices stolen. Often, this leads to identity theft, as ample personal information is easily accessible from most mobile phones. Smartphones warrant just as much protection as PCs, so don’t hesitate to invest in robust security solutions.
2. Learn to Spot Scams and Spam
Today’s fraudsters may be clever, but many scams are surprisingly easy to spot once you’re aware of the signs. Vigilance is always necessary; scams can pop up anywhere and at any time.
How you browse the internet can leave you vulnerable to common scams, even if you think you’re adept at sniffing them out. Free Wi-Fi, in particular, should be used with caution. Data transmitted over public Wi-Fi is often unencrypted, and therefore, more likely to be targeted in cyber-attacks and other incidents. Other scams to be on the lookout for include the following:
- Tax scams in which thieves obtain IRS records to secure financial data or your Social Security number.
- Phishing scams leading to the takeover of your bank or credit card account.
- Child identity theft involving the Social Security numbers of young victims who have yet to establish credit scores.
- Synthetic identity theft, which involves merging real and fake information to create new identities, sometimes via personal details purchased on the dark web.
- Medical identity theft scams, often including false claims or other fraudulent documents that appear legitimate.
3. Use Strong, Unique Passwords – and Close Unused Accounts
Password protection is a huge point of weakness for the average internet user. A TeleSign report highlighted by Entrepreneur reveals that three out of four internet users rely on duplicate passwords. Furthermore, one in five continue to use login information that is over a decade old. Such practices place accounts at risk, especially as passwords serve as a first line of defense against identity theft.
Take a close look at your current password lineup and consider making a few changes. The ideal password will involve a string of random letters, numbers, and symbols. Avoid including your birth date or any other recognizable details from your personal life.
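If inventing such strings by hand sounds tedious, a password manager will generate them for you; the short Python sketch below shows the underlying idea using the standard secrets module. The account names are placeholders, and in practice you would let a reputable password manager handle both generation and storage.

import secrets
import string

ALPHABET = string.ascii_letters + string.digits + string.punctuation

def generate_password(length=16):
    # A random mix of letters, digits, and symbols, drawn from a
    # cryptographically secure source.
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

# A different password for every account, so one leak cannot unlock the rest.
passwords = {site: generate_password() for site in ("bank", "email", "shopping")}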
Coming up with and remembering all of this login information may seem like a hassle, but repetition should be avoided at all costs. Even a strong password can quickly be rendered ineffective through repetition on multiple accounts.
As you assign stronger passwords to your various accounts, determine which ones you actually use – and which could be closed for good. The more open accounts you maintain, the greater the chances of infiltration, and ultimately, a stolen identity.
4. Monitor Your Credit Score and Other Financial Information
Preventative measures are critical, of course, but it’s also crucial to respond quickly and strategically when any warning signs become evident. Look carefully at your credit score and history, as both may provide early clues as to whether your identity is at risk.
If you’ve yet to examine your credit history, you’re certainly not alone. According to a poll from Princeton Survey Research Associates, 34 percent of Americans have never bothered to check their credit reports. By neglecting this essential task, these individuals render themselves vulnerable to long-term interference from criminals, who may never be held accountable.
Even if you haven’t experienced identity theft, it’s important to stay on top of your credit score to ensure that it remains accurate. In a notable Federal Trade Commission study, 26 percent of respondents highlighted errors on their credit report, many of which were capable of impacting loan applications or interest rates.
5. Seek Data Backup Services Offline and In the Cloud
Backed-up data plays a key role in protecting digital information, and yet, it’s a move few internet users bother to make. If hackers gain control of sensitive information, significant data loss could ensue, prompting nearly as much devastation as the theft itself. Data backup and recovery services provide peace of mind; in the worst-case scenario, you can take comfort in knowing that backed-up versions of sensitive data remain safe.
Offline backups are valuable but also vulnerable to a variety of physical hazards. Backups should also be created within secure cloud frameworks to ensure that they remain accessible. A variety of excellent cloud backup options are available; examine offerings closely to determine which provide the best security at a reasonable rate.
6. Stay Alert When Visiting Websites and Making Purchases Online
When shopping online, do you carefully examine the websites you browse? Or do you shop haphazardly, granting little consideration towards where you share your address or credit card number?
Vigilant browsing is critical, regardless of whether you intend to make a purchase. How you interact with other internet users matters, no matter the website’s alleged purpose. Never supply those you do not know with personal information; you never know which details could come back to haunt you.
Seek Help from Trusted Data Security Experts
As you take extra steps to keep your identity safe, don’t hesitate to turn to NerdsToGo for advice. Our experts can help you adopt the solutions necessary to protect your privacy online. We can assist with cloud data backup services to encourage full coverage in the worst-case scenario. Our child internet protection services are also worth considering as you seek security for your entire family. Contact us today to learn how we can be of assistance in your quest for effective identity theft prevention. | <urn:uuid:fad74710-2b27-4351-8d7e-1afdcc1d3bc1> | CC-MAIN-2022-40 | https://www.nerdstogo.com/blog/2019/september/5-tips-to-protect-yourself-from-online-identity-/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337595.1/warc/CC-MAIN-20221005073953-20221005103953-00354.warc.gz | en | 0.934145 | 1,372 | 2.625 | 3 |
Sanctions have been a top news item for the last few months. In this blog, let’s examine what sanctions are, the history of sanctions, and how technological advancements can enable institutions to improve sanction discovery and actioning, simplify data collaboration, and expedite their ability to respond to sanction mandates. However, before we dive into how to address sanctions with technology, let’s first understand the history of sanctions.
A Brief History of Sanctions
The term “sanctions” has been thrown around quite a bit over the past two decades. For those of us not familiar with the specifics, economic sanctions include commercial and financial penalties levied against individuals, companies, and/or countries, typically for political purposes.
Sanctions date back to ancient times. In 432 BC, the Athenian Empire enacted the earliest recorded form of sanctions by banning traders from Megara from its marketplaces; the move eventually led to the dwindling of Megara’s economy.
Other early sanctions include the Continental System, Napoleon’s blockade against the British during the Napoleonic Wars; the US Embargo Act of 1807, in which Thomas Jefferson’s administration banned foreign trade with Britain and France to discourage impressment of American soldiers; and numerous blockades and embargoes between the North and South during the American Civil War.
Sanctions in the Globalization Era
However, for the most part, sanctions have become a modern tool of the post-WWI era; inherently linked to globalization. As Nicholas Mulder, author of The Economic Weapon: The Rise of Sanctions as a Tool of Modern War (Yale University Press) states, the elements needed for sanctions are “globalization, the administrative state, and mass society.” This is due, in part, to the way modern sanctions are levied: by the US Office of Foreign Assets Control (OFAC), a subdepartment of the US Department of the Treasury, which issues specific lists determining against whom sanctions are filed and to what extent. Financial institutions, globally, must cross-reference these lists every time they onboard a new client, whether business or individual.
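At its simplest, that screening step is a fuzzy comparison of a customer name against every entry on a list. The Python sketch below is purely illustrative: the names are invented, and production screening relies on the official OFAC data, transliteration handling, and far more sophisticated matching than a single similarity ratio.

from difflib import SequenceMatcher

# Illustrative stand-in for a sanctions list; real screening uses the official data.
SANCTIONED_NAMES = ["Acme Trading FZE", "John Q. Example"]

def screen(name, threshold=0.85):
    # Return every listed name whose similarity to the customer name
    # meets or exceeds the threshold.
    name = name.lower().strip()
    return [
        entry for entry in SANCTIONED_NAMES
        if SequenceMatcher(None, name, entry.lower()).ratio() >= threshold
    ]

print(screen("ACME Trading F.Z.E."))   # ['Acme Trading FZE'] -> route to a compliance analyst
print(screen("Jane Doe"))              # []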
Since World War I, sanctions have been levied countless times; the most well known examples include the US embargo against Cuba (1958), the Organization of Arab Petroleum Exporting Countries (OAPEC)’s 1973-74 ban on oil sales to the US and other Western countries (causing the 1973 oil crisis), and the United Nations embargo against apartheid South Africa (1987). More recent examples include ongoing sanctions against Iran, North Korea, and Syria.
Do Sanctions Work?
Are sanctions effective? It depends on whom you ask. According to one 2015 study, US and UN economic sanctions reduce the GDP growth of targeted countries by an average of 3% per year, with long-lasting effects for 10 years; this culminates in an average aggregate decline of the target country’s GDP per-capita of 25.5%. In 2019, the U.S. Government Accountability Office (GAO) published a report entitled Economic Sanctions: Agencies Assess Impacts on Targets, and Studies Suggest Several Factors Contribute to Sanctions’ Effectiveness which explicitly states that measuring the direct impact of sanctions on foreign policy is difficult, but also posits that their effectiveness overall is boosted when a) carried out through a formal body such as the United Nations Security Council (UNSC) or b) when the targeted entity is dependent on the US.
On the other hand, experts have cited in multiple studies that sanctions not only have a negative impact on the economy, public health, and security of the targeted country, but on the economic status of the body imposing the sanctions as well. More importantly, historically, it is unclear whether sanctions actually work in the long term: as a 1998 Brookings article notes, sanctions against Iran since the 1970s have failed to dissuade its regime from supporting terror movements; sanctions against Saddam Hussein failed to persuade him to withdraw from Kuwait in 1990, leading to Operation Desert Storm; sanctions against Cuba failed to unseat Fidel Castro; the list goes on. Indeed, extrapolating from the historical trend, North Korea continues to test nuclear weapons, despite being sanctioned since 2006 – and Syria continues to be in the throes of a civil war, despite sanctions against the Syrian government being levied in 2010.
Sanctions in the Digital Age
Whether or not sanctions work long-term, enforcing them produces special challenges in the digital age. Today, sanctioned individuals can easily open shell companies to facilitate money laundering, fraud, and effectively dodge their impact by exploiting financial institutions’ inability to share vast information regarding the full expanse of their holdings with other institutions.
In the US, most major banks can only see 25% of their customers’ data and history; in Canada, that number is 15%, and in the UK, as low as 10%. Moreover, networks of sanctioned individuals or institutions are larger than ever before – and the threat of malicious insiders being able to tip off sanctioned individuals that they are being flagged is high. Sanction-dodgers are skilled at exploiting this lack of visibility, and know how to hide their digital breadcrumb trail. In the current conflict, some are even using data privacy laws as a legal shield to defend their activities.
The Technological (un)Divide
The sanctions discovery process needs an overhaul – now more than ever. According to Deloitte, “operating costs spent on compliance have increased by over 60 percent for retail and corporate banks.” And with the proliferation of data privacy regulations in general and in the financial space in particular increasing, the most cost-effective way to handle these complex processes is by wielding AI and other advanced technologies in the market. Using privacy-preserving technologies will revolutionize the future of how institutions handle financial crime; standardizing that technology will smoothen processes further.
Privacy-Enhancing Technologies are one way to reform the process and bring it into the digital transformation age. When applied correctly by cryptographic experts, PETs allow financial institutions to:
- Collaborate with other financial institutions, law enforcement, and data conglomerates to gain a fuller understanding of prospective customers’ transaction history
- Discreetly query the financial behavior of individuals or companies during the onboarding process – keeping the identity of the individual being queried about a secret from all parties except for the inquirer
- Create, tune, and train models to alert compliance officials of suspicious transactions or behavior
For more information about how Privacy-Enhancing Technologies are being used to fight financial crime, check out this interview below with Neil Ringwood, Executive Consultant, IBM Global Business Services.
Stay tuned for Part II of this blog, where we delve into the technological impact on the evolution and history of sanctions. | <urn:uuid:6800893c-b34a-4267-87d3-7a3b1567d4bc> | CC-MAIN-2022-40 | https://dualitytech.com/the-history-of-sanctions-part-1/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334644.42/warc/CC-MAIN-20220926020051-20220926050051-00554.warc.gz | en | 0.930641 | 1,395 | 3.046875 | 3 |
While technology has the potential to create its fair share of waste, it may also be the key to creating a more sustainable future.
From smarter cities to faster emergency response, recent technological advances are contributing to a safer, more connected world. However, the hardware and software systems that are changing our lives almost every day are also consuming an abundance of resources and leaving behind their fair share of waste in the process.
So while devices may be getting more energy-efficient, constant upgrades that push consumers to discard old electronics in favor of the next big thing can have a deleterious impact on the environment. Up to 50 million tons of techno trash is thrown away every year, amounting to roughly five percent of the world’s solid waste. What’s more, only 11 percent of the mobile devices discarded in 2010 were collected for recycling.
Technology may be the genesis of this cycle of consumption, but it’s also the key to solving it. Encouragingly, researchers and industry innovators are developing a range of solutions aimed at making the next generation of technology more sustainable and environmentally friendly. Everything from the Internet of Disposable Things (IoDT) to modular cell phones to data center waste heat recycling promise to cement our technological gains while minimizing the carbon footprint of our hardware and software.
Research suggests that the total installed base of devices connected to the Internet of Things (IoT) will surpass 75 billion by 2025. While the IoT has the potential to transform industries as varied as healthcare and manufacturing, bringing this many devices online will require an abundance of resources. Currently, IoT devices are constructed using microelectromechanical systems and rely on silicon chips to house sensors, channels, and other microscopic structures.
This model may have been economically feasible during the dawn of the IoT, but it will be costly for the full range of everyday IoT use cases that will come to shape our daily lives in the future. That’s why Seokheun Choi, an associate professor of electrical and computer engineering at Binghamton University, is pioneering the development of paper-based biobatteries for single-use, low-power systems. These biobatteries are powered by bacteria that consume the batteries at the end of their lifespan, at which point consumers can throw them away.
Choi’s biobatteries may play a pivotal role in the evolution of the IoDT. This IoT offshoot may one day include everything from sensors on food packaging that tell consumers when items have expired to disposable cardboard boxes that take package tracking to the next level. By creating environmentally friendly, biodegradable batteries—as opposed to the current chemically hazardous alternatives—Choi and his team are paving the way for a more sustainable IoT.
While technological disruption has fostered a culture that prizes innovation, it has also created a market that prioritizes the latest and greatest. The urge to upgrade your devices—smartphones in particular—has contributed to the rise of techno trash as the fastest-growing type of waste in most countries around the world.
Modular and recyclable cell phones are designed to reverse this disturbing trend. With companies like Fairphone racing to meet the demand for more sustainable devices, more and more consumers will have the option of using phones with longer shelf lives. Nearly every part of a Fairphone is recyclable, and its modular design enables consumers to replace broken or outdated cameras, screens, batteries, and more without having to purchase an entirely new device. This flexibility extends the lifespan of a Fairphone far beyond the typical lifespan of mass-market smartphones.
By doing so, Fairphone is helping reduce the enormous waste that is currently being created by the most popular smartphones. Instead of throwing away phones every two years in favor of the newest upgrade, modular devices may be able to encourage consumers to adopt a more environmentally friendly approach to device ownership.
For data center professionals, waste heat is unavoidable. Computers—especially those performing at the highest levels—generate excess heat as a byproduct of the intensive processes modern companies demand. In the past, experts have tried to use innovative cooling methods to prevent data centers from overheating, and some have even created systems that repurpose this heat to warm nearby homes, businesses, and community facilities.
However, new research may offer an even more efficient option for data centers. A team at Rice University has been using arrays of single-wall carbon nanotubes to absorb heat, turn it into an easily capturable bandwidth, release it as light, and convert it back into electricity. Considering waste heat making up 20 percent of industrial energy consumption, if these nanotubes can prevent such heat from escaping unused, it will be a landmark development in energy efficiency—particularly in the data center space.
While channeling waste heat to benefit nearby homes and businesses is a smart way to put excess heat to good use, funneling that energy back into data centers to make them more efficient will solve the problem at the source. With data centers taking on an increasingly central role in corporate operations as technologies like cloud computing become a given, these nanotubes could be a game-changer.
While the innovations outlined above promise to change the way we handle techno trash and wasteful energy consumption, stakeholders throughout the tech space need to help mitigate these issues sooner rather than later. For many companies, this will mean being strategic about their technological footprints—i.e., only investing in the network, storage, and compute resources that they need.
Server colocation can help organizations of every stripe improve their environmental friendliness by right-sizing their data center infrastructure. By taking advantage of smaller-scale offerings such as Colocation America’s 1U and 2U space, companies will only need to commit to the server infrastructure that their operations truly demand, knowing that they can scale up as needed without missing a beat. | <urn:uuid:53434691-a28c-4181-a484-1a6c49930adc> | CC-MAIN-2022-40 | https://www.colocationamerica.com/blog/how-tech-world-strives-for-sustainability | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334644.42/warc/CC-MAIN-20220926020051-20220926050051-00554.warc.gz | en | 0.941795 | 1,194 | 3.5625 | 4 |
Social engineering is the formal name for the psychology of persuading people to feel the need to take certain actions. It’s the way that advertisers convince you that a certain brand of jeans is cooler than another. Or how public health campaigns remind you to get your flu shot. In cybersecurity, social engineering is considerably more sinister – and the domain of cybercriminals who perpetrate phishing attacks. Bad actors use all sorts of psychological tricks to lure their victims into opening dodgy emails, clicking suspicious links, handing over passwords, downloading sketchy attachments and engaging in other unsafe behaviors that can put your business at risk of damaging disasters like ransomware. These 10 facts about social engineering paint a picture of how it influences cybercrime and what you can do to protect your business from the trouble it can bring in its wake.
See the tide of phishing rise & fall to spot future trends in the eBook Fresh Phish. GET IT>>
10 Facts About Social Engineering That Tell the Tale of This Threat
- The number one type of social engineering attack is phishing.
- 43% of IT professionals say they have been targeted by social engineering in the last year.
- Social engineering attacks are responsible for 93% of successful data breaches
- 45% of employees click emails they consider to be suspicious “just in case it’s important.”
- 71% of IT professionals say they’ve experienced employees falling for a social engineering attack.
- On average, social engineering attacks cost $130,000
- 60% of IT professionals cite recent hires as being at high risk for social engineering tricks.
- 45% of employees don’t report suspicious messages out of fear of getting in trouble
- Socially engineered cyberattacks are just under 80% effective.
- The costliest socially engineered cyberattack is business email compromise – its 64 times worse than ransomware!
Automated security isn’t a luxury. See why Graphus is a smart buy. LEARN MORE>>
Graphus Can’t Be Fooled by Social Engineering
People can be easily fooled by social engineering – but Graphus isn’t. When you deploy Graphus to protect your organization, you’re putting three powerful layers of automated security between phishing and your business. Powered by smart AI technology, Graphus catches 40% more phishing messages than the competition automatically, keeping more social engineering attacks away from your employees than conventional email security solutions or clunky old SEGs.
TrustGraph is the star of the show, guarding your company’s inboxes against social engineering attacks. Using more than 50 separate data points, TrustGraph analyzes incoming messages to detect trouble before speeding them to their recipients – and it never stops learning, constantly gathering fresh threat intelligence from every analysis it completes.
EmployeeShield slips into place when a new line of communication comes into your business, adding a bright, noticeable box that warns employees to use caution when handling the message. This empowers every staffer to join your security team by marking a new message safe or quarantining it with one click for administrator inspection.
Phish911 completes your triple-layered protection by making it easy and painless for employees to report any suspicious message that they receive to an administrator for help. When an employee reports a suspicious message. it is immediately removed from everyone’s inbox to prevent further trouble.
Social engineering threats can strike at the heart of your business leaving devastating damage in their wake – and more than 60% of businesses that experience a cyberattack go out of business. Our solutions experts are ready to give you a personalized demo of Graphus to show you how this affordable solution is the ideal choice to protect your organization from today’s biggest threat fast. Schedule a demo => | <urn:uuid:a7acfb64-b58c-41bd-8d1c-5dcef54f6e5a> | CC-MAIN-2022-40 | https://www.graphus.ai/blog/10-facts-about-social-engineering-that-you-need-to-know/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334644.42/warc/CC-MAIN-20220926020051-20220926050051-00554.warc.gz | en | 0.922986 | 771 | 2.890625 | 3 |
COVID-19 has affected us all over the past year and a half. For some, the hardship has been all too personal. While others have remained healthy, COVID-19 has still changed almost everyone’s lives, in one way or another. The financial impact of COVID-19 has left repercussions even in places that had fewer outbreaks.
In Africa, the economy is still hurting from a recession brought on (at least in part) by COVID-19. When Africa’s trade partners had to slow or shut down operations during the pandemic, it had a direct and lasting effect on the African economic sector. Stagnant trade produced ripple effects, leading to lower consumption in Africa and a lower average income per capita. Coupled with lower prices of oil exports, almost no tourism, and other factors, this led to a serious blow to the African economy.
There are many approaches toward reviving an economy at a time like this. Some countries try to spur spending by offering stimulus packages. Others encourage innovation to breathe new life into the economy. In Africa, focusing on telecommunications advancement is one of the best ways to respond to this economic recession.
Before COVID-19, the African telecommunications landscape was looking promising. A new transatlantic cable was laid to Brazil and Fiber-to-the-Home (FTTH) was introduced to many urban areas. There had already been a move toward abandoning 3G in favor of 4G, while 5G was being discussed as a possibility in the not-so-distant future thanks to the promise of OpenRAN technologies.
On the flip side, lack of electricity continued to be a hurdle in many remote areas, while 4G prices remained too high for full market penetration. Another negative is that monopolies and duopolies remain the norm in many sub-Saharan countries.
As COVID-19 broke out, telecom began to play a major role in all our daily lives. It allowed for communication between loved ones who could no longer meet physically, and it played a role in distance learning and telecommuting. It was also crucial for spreading health information and safety protocols to a dispersed population, as well as providing remote doctor/patient communication. This was true in Africa as much as anywhere else.
There are several ways telecom advances can prove advantageous in Africa, both to combat the further spread of the COVID-19 virus as well as to recharge the economy and even spur it to greater heights than before the outbreak.
First, telecom is necessary to support digital healthcare, including mobile medical stations and broadcasting of health protocols. A strong telecom system also allows workers and students to telecommute when necessary to maintain social distancing to impede the transmission in offices and schools. In addition, migrant workers had trouble sending money to their families in Africa during COVID-19. Telecom can make this much easier, with infrastructure that allows for better transfer of migrant workers’ remittances.
One example of how telecom in Africa can be improved is provided by Supersonic via MTN, the largest operator in Africa. They are providing wireless broadband connectivity across the continent. Wireless broadband offers bandwidth comparable to fiber-to-the-home, but with far less infrastructure required to connect homes to the network, a huge bonus in a place like Africa, where laying new infrastructure is the biggest challenge.
This will be especially useful in remote areas. One or two towers can supply connectivity for an entire village at exceptionally high speeds. Moreover, by using fixed wireless service, expensive cable is not needed to reach the hinterlands.
Ethernity Networks produces FPGA systems-on-chip that enable such technology. This includes traffic management with over-the-air congestion control and IPSec encryption of the tunnel between the radio and the rest of the network, as well as other required telecom features. FPGAs are ideal for fixed wireless access because of their programmability, even after they have been field-deployed. This enables the unit to be improved from version to version without needing to physically replace components, which can be especially challenging in remote African areas.
COVID-19 has left its mark across the globe. As we begin to recover in fits and starts, it is crucial to establish a strong telecommunications backbone in Africa, both to help it recover financially and to prepare it for any future crises. A strong and vibrant telecom system can be used to spread accurate health information while allowing the economy to keep moving. With technologies like those provided by Ethernity, an advanced telecommunications system throughout the continent can ensure that Africa is well prepared for the post-COVID world.
By Brian Klaff | <urn:uuid:eb9b297f-01e9-41e4-9b61-d9c70d31622e> | CC-MAIN-2022-40 | https://ethernitynet.com/how-telecom-can-help-africa-financial-recovery/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335424.32/warc/CC-MAIN-20220930020521-20220930050521-00554.warc.gz | en | 0.963971 | 949 | 2.625 | 3 |
We all know that the ever-expanding Internet of Things (IoT) brings with it some significant security challenges. How can a vastly proliferating ecosystem of connected devices, many of them too small to include sophisticated embedded security systems, deliver adequate protection of the data they generate, transmit and store? How can organizations harness the benefits of the IoT without compromising the security of their corporate network?
One solution that clearly fits into the evolving IoT security picture is Public Key Infrastructure, or PKI.
As an infrastructure, PKI is not one single ‘thing’. Rather, it is a set of rules, policies and procedures – all based around the principal of digital certificates. These policies verify the ownership of public keys – that is, the disseminated keys that form one half of public key cryptography pairs. Those pairs of keys achieve two crucial security functions: they authenticate the sender of information, and they encrypt that information – only the holder of the paired private key can decrypt the message on the public key. The key pairs are authenticated and bound to respective identities by digital certificates, which are issued by certificate authorities (CAs) like Comodo, Symantec and DigiCert.
In short, PKI provides a framework to both verify the identity of devices, and to protect the data transmitted between those devices. It has long been used to secure devices ranging from network routers and servers to individual printers and fax machines. And because it is open standard, free to adopt and customise, it is a clear choice for businesses.
The IoT ecosystem is entirely new by comparison with traditional corporate IT infrastructures. The IoT will have 20X the volume of devices, a far greater diversity of devices, with new devices being provisioned faster than ever before – all without human intervention.
Fundamentally, though, those devices need to follow the same security principals as any other devices on corporate IT infrastructures. In particular, the identity of each device must be verified. This is what PKI can offer.
The first step in securing the IoT with PKI is to securely on-board each individual connected device into an IoT application. From there, PKI certificates must be provisioned. Each PKI certificate proves the identity of the associated device to the IoT Platform/Application. Specific devices or gateways may also require additional verification, such as username/password credentials.
The obvious problem is that in a vastly expanding IoT landscape, the task of manually on-boarding and provisioning each individual device quickly becomes unmanageable. It’s enormously time-consuming – and the risk of human error, which could open severe security flaws, increases along with the volume of devices. Yet each individual device still needs its own unique PKI certificate.
Another key requirement of PKI for IoT is the ability to manage these certificates at IoT Scale, e.g. revoke or rotate certificates as per the policy.
Automated provisioning of those PKI certificates securely without human intervention is the obvious solution – and this, fundamentally, is how PKI has evolved to solve the unique challenges of the IoT. This is where Device Authority comes in.
Our KeyScaler platform is all in-in-one solution for IoT device identity and validation – and the latest iteration of the platform enables the automatic provisioning of PKI certificates and policy based certificate management securely at IoT scale.
Device Authority’s extensively patented technology binds the PKI certificate to the respective device, only that device can use it, can’t be copied.
It automatically proves the identity of each connected device, and encrypts all data between the device edges and central servers, thereby delivering a truly granular and automated software security solution for the dynamic IoT landscape.
Click here to learn more about how KeyScaler could help your business.
Please wait while you are redirected to the right page... | <urn:uuid:a55b4088-6d34-4da3-acc0-754cbc353b6d> | CC-MAIN-2022-40 | https://www.deviceauthority.com/blog/how-pki-has-evolved-solve-iot-security-issue/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337339.70/warc/CC-MAIN-20221002181356-20221002211356-00554.warc.gz | en | 0.918518 | 790 | 2.8125 | 3 |
Amazon Web Services (AWS) has become the largest and most prevalent provider of public cloud Infrastructure-as-a-Service (IaaS). Organizations can now build, test, and deploy entire application stacks without purchasing or reconfiguring on-premises infrastructure. Production applications can benefit from advanced application delivery services such as a web application firewall (WAF), SSL acceleration, content-based routing, load balancing, DDoS mitigation, identity and access management, and protocol agility—which make these key applications go faster, smarter, and safer. For cloud-native AWS deployments, F5 solutions enable simple and easy delivery of these application services to AWS cloud environments.
As a provider of IaaS, Amazon Web Services (AWS) offers a rented compute infrastructure sufficient to develop applications. Without spending any money on capital expenses or upkeep, a complete application suite can be developed and deployed. Besides providing compute infrastructure like those found on premises, AWS delivers application services, automatically scales infrastructure, and hosts applications in many locations. Because of these additional capabilities, best practices for AWS are different than best practices for a customary data center. To better appreciate the differences between AWS and a traditional data center, it's important to understand how AWS delivers its services.
To gain the most benefits from AWS, let's look at several important attributes of its IaaS offering, such as geographical distribution, computing instances, networking, and storage concepts.
Understanding AWS begins with a physical description of how resources are distributed. At the highest level, AWS has several regions, each of which covers a large geographic area, such as the eastern part of the U.S. Each geographic region contains several availability zones, which are so named because they are designed to be isolated from failures with each other. A failure in one availability zone (AZ) will not cascade to another AZ within a region. While each AZ is isolated, AZs within a region are connected via low-latency network links. For availability purposes, design patterns exist for redundant capability across multiple AZs within a region.
Compute instances begin as a software disk image, known as an Amazon Machine Image (AMI). Each AMI is immutable, meaning that it is read-only by the computer using it. Booting a compute instance against an AMI creates a running virtual machine in the cloud. The virtual hardware characteristics of the compute instance can vary, such as the number of CPUs and RAM available to the instance.
Each compute instance exists within a Virtual Private Cloud (VPC), which equates roughly to a LAN in the physical world. Each VPC is logically separated. A VPC can consist of multiple subnets with implicit routing between subnets. Public subnets are routable from the Internet, while private subnets are available only internally. To deploy a complete application, one or more compute instances are deployed within a VPC. Communication across VPCs can take place when a compute instance or load balancer has an assigned DNS entry.
As noted above, each AMI is immutable from the instance using it. For persistent storage, AWS offers many solutions, primarily Simple Storage Service (S3) and Elastic Block Storage (EBS). S3 is an object storage service where an object can be stored and accessed by name. The object can range in size from zero bytes to 5 TB. In fact, each AMI is implicitly an S3 object. Objects cannot be modified, only created and deleted, making them ideal for relatively static content, such as photographs or virtual machine images.
EBS provides storage more akin to traditional storage. A compute instance attached to EBS sees the EBS as a traditional hard disk. Only one running instance at a time may attach to an EBS volume, so EBS cannot be used for shared storage.
Besides delivering key infrastructure components, AWS provides additional services that enable application scaling. Because AWS can quickly provision infrastructure resources, Amazon has developed solutions that allow for auto scaling and load balancing of those resources.
AWS provides an Elastic Load Balancing (ELB) service. Initially, ELB referred to a simple load balancer operating at layer 4 that spread traffic across multiple healthy nodes in a pool. The pool could span multiple availability zones, creating automatic redundancy in case of failure in one availability zone. While this load balancer provides some basic network layer 7 capabilities, it primarily operates at layer 4, simply routing TCP traffic in a round-robin fashion to the nodes in the pool. Health checks determine which nodes are available and therefore are candidates for traffic. AWS now refers to this initial load balancer as the Classic Load Balancer to differentiate it from the new Application Load Balancer (ALB).
Since AWS has standby capacity available, it can provide the option to scale nodes within a pool. The AWS CloudWatch service monitors the performance metrics of each node within a pool. When predefined thresholds—such as CPU utilization exceeding 60% for 5 minutes—are crossed, another node is provisioned. Inversely, when a lower threshold has been crossed, a node is removed. The designer can determine the maximum and minimum number of nodes in a pool and the thresholds that trigger node instantiation or node removal. Using auto scaling behind a load balancer enables the node pool to expand or contract as needed based on load.
While applications handle business rules and logic, they often lack the hardening required for at-scale production deployment, management, and operation. F5 solutions enable applications to go faster, smarter, and safer by providing the advanced application delivery services detailed in the table below.
F5 Advanced Application Delivery Services
Network and Transport Optimization
Application and Data Optimization
Data Path Programmability
Advanced Network Firewall Services
Advanced application delivery services enable applications to perform at a higher level, while being available and more secure. These services can exist at a strategic point of control independent of each application. Decoupling the services from the application business logic allows the applications to meet business needs without burdening development teams with infrastructure, management, and performance concerns. A strategic point of control also allows issues of governance to be handled uniformly and independently of each application.
By providing cloud-native integration with AWS infrastructure, F5 allows organizations to get the most of their AWS deployments, empowering their applications with better performance, higher availability, and stronger security. In the following section, we'll examine how AWS and F5 work together.
When a server instance boots from a generic image, it often makes sense to change parameters or set configurations, such as the hostname and IP address. On client machines, these parameters are set via DHCP, but setting server parameters via DHCP can often cause problems. Beyond network settings, sometimes a particular instance requires specific software packages or certain software configurations.
In the case of a BIG-IP deployment, an administrator might want to automate the configuration of each of the modules, such as the virtual servers and pool configurations for BIG-IP Local Traffic Manager (LTM), specific WAF configurations for BIG-IP Application Security Manager, or perhaps firewall settings for BIG-IP Advanced Firewall Manager. The same issues face anyone installing a server instance: the base image needs additional configuration to function correctly.
AWS offers two primary approaches to configuring an instance: creating a new AMI, or using cloud-init to modify a server during the boot process. Creating a new AMI is more appropriate for changes common among multiple instances that are less likely to be updated often. In contrast, cloud-init is more appropriate for changes that impact fewer instances and have a shorter life expectancy.
For changes that are expected to persist for a longer period of time—and for changes common to multiple instances—a good approach is to create a new AMI by booting a machine from an AMI similar to the desired configuration. After the administrator has made the changes necessary to the running instance, the instance is stopped and a new AMI is generated and registered with AWS. All future instances booting from this AMI will have the changes already applied. Since this approach makes changes effectively permanent—and since generating the new AMI can consume time—changes baked into the AMI are generally those that will last for a long time and are usable across multiple instances. Another reason for using AMI is that it enables faster boot times, since some cloud-init processes can be time-intensive.
Necessary changes that are not a good fit for incorporating into a new AMI are good candidates for cloud-init, which essentially enables a startup script whenever the instance boots. Using cloud-init allows for simple and instance-specific changes to be embedded into the instance.
Disadvantages of cloud-init include that the configuration changes, such as package installations, must be run at boot time, causing the boot to take longer. A long boot time has real impact in auto-scaling scenarios where an elongated boot time could make auto scaling ineffective. For these reasons, changes that take a lot of time should be included in a new AMI instead of making the changes via cloud-init.
Managing configuration can also be cumbersome when a change can be used across several, but not all instances. For example, suppose that a particular BIG-IP deployment is used in an auto-scale group with a specific virtual server configuration. A single AMI could serve for those machines and a different AMI could serve for other BIG-IP machines in another auto-scale group. Using a single AMI for each auto-scale group ensures that only changes specific to each host are necessary within the cloud-init process. Any changes common to the group can be embedded into the AMI. The disadvantage of this approach is that it requires an update to the AMI for each change common to all machines.
Applications deliver a capability, generally to multiple users simultaneously. As the application becomes more successful, it can exceed the capacity of the computer on which it runs. Once the application needs exceed those of its computer, options for increasing capacity need to be evaluated. There are three generic approaches to scaling: scaling up, pipelining, and scaling out.
Scaling up is the simplest approach to increasing capacity because it merely involves replacing the existing computer with a faster one. By installing a faster computer, all aspects of the application, and any other services on the computer, become faster with no changes necessary to the application or infrastructure. The disadvantage of scaling up is that costs tend to increase exponentially with performance increases, leading to a point of diminishing returns. Once a threshold is crossed, scaling up becomes cost-prohibitive.
Pipelining is the result of decomposing the workload into sequential steps, similar to an assembly line. When different computers can each work independently on each step, the overall throughput can be increased. However, pipelining only increases throughput, and it often does it at the expense of latency. Put another way, pipelining can increase overall performance but can decrease performance for a single user or a single request. The other disadvantage of pipelining is that it requires a deep understanding of the decomposable workflow, and for the infrastructure to match that understanding. It tightly couples the infrastructure decisions to the business logic, which is the exact opposite of what many organizations are trying to do.
Scaling out involves leaving the application and the computer alone, and instead choosing to spread requests evenly across a pool of servers. Since applications generally process several independent requests simultaneously, requests can safely be spread out across a pool of identical application servers. Scaling out has the added benefit of redundancy in that a failure of any pool member will not cause an outage of the entire application. The disadvantage of scaling out is that it requires complex orchestration external to the application in order to ensure that the requests are balanced across the nodes in the pool.
AWS auto scale uses a scale-out approach to increasing capacity for applications that need it. The CloudWatch service monitors nodes in a pool. When nodes cross predefined thresholds, CloudWatch will automatically start up new nodes or shut down nodes in the pool as appropriate. With the BIG-IP platform, this process can take place in one of two ways: by altering the number of BIG-IP instances or by altering the number of nodes in a pool behind a single BIG-IP instance. The difference between the two approaches is a function of what is scaled: either the BIG-IP instance or a pool.
In the first scenario, a BIG-IP pool sits between a pair of ELB devices. The first ELB device controls instantiating and terminating BIG-IP members, while the second ELB device is the sole entry in a server pool for each of the BIG-IP instances. This approach makes sense when the BIG-IP instance is providing advanced application delivery services, such as SSL termination or acting as a web application firewall. The first ELB device performs the load balancing while also growing or shrinking the pool as appropriate.
In the second scenario, the number of back-end pool members grows and shrinks via CloudWatch, but the BIG-IP instance performs the load balancing. The BIG-IP instance communicates with AWS to discover nodes being added or removed from the pool. This approach makes sense when using advanced load balancing features, such as the iRules scripting language, or directing requests based on URL or content. In these cases, a single BIG-IP instance is sufficient to manage the load of servers in the back-end pool.
The BIG-IP instance must interact with the AWS infrastructure in at least two scenarios. First, a multiple-zone AWS deployment requires altering the IP address behind an AWS elastic IP. Second, a BIG-IP instance needs visibility into pool members added and removed by the AWS CloudWatch service, which scales servers up and down within a pool. Each interaction with the infrastructure takes place via API calls, and just like any other software making API calls, the BIG-IP instance must authenticate to AWS. Generally, there are two approaches to authenticating to AWS: through credentials or IAM roles.
The simplest approach to authenticating is by including the appropriate credentials with the API call. AWS credentials consist of an access key and a secret key, which roughly correspond to a username and password. The administrator generates the credentials, which the developer then embeds within the application. This gives the application access to the appropriate API calls.
While simple, embedding credentials into an application carries security risks. Unless the developer secures the credentials in the application, other people or software could recover them and use them in malicious ways. This approach also makes it difficult to alter the credentials without also altering the software. While using credentials is a reasonable approach for quick testing, a production solution should use another approach to authentication. This is why AWS best practices recommend against using stored credentials in an application.
A more secure approach to authenticating for API calls is the use of IAM roles. AWS Identity and Access Management (IAM) enables users to manage access to the AWS infrastructure. Any compute instance, such as a BIG-IP machine, can be assigned an IAM role that authorizes a specific set of capabilities. When the instance starts, IAM generates a temporary set of credentials for the instance. Those credentials last while the instance is functioning and enable only the API capabilities specified. When configured with an IAM role, the BIG-IP instance does not store credentials, but instead has access only to the infrastructure APIs necessary, thus providing more security than credential-based authentication.
As mentioned earlier, AWS data centers exist in geographical regions, each of which can exist in an availability zone (AZ). Each AZ within a region shares nothing with other AZs: no shared power, networking, or buildings. In fact, each AZ is geographically separated from the others within a region. Because of the separation between zones, AWS subscribers can be confident that an event impacting one AZ will not impact another AZ. In other words, as a rule, at most one AZ within a region should be unavailable at any moment in time. This means that any service deployed across two or more availability zones should be continuously available.
The BIG-IP platform supports high-availability across AWS AZs using an AWS elastic IP, which is an IP address not intrinsically associated with a compute instance. Instead, the IP address can be dynamically forwarded to a private IP address of a running compute instance. To enable multi-zone high availability, identical sets of BIG-IP instances and application servers are each placed in their own AZ. Initially, the elastic IP is assigned to one of the BIG-IP instances. Connections are established from each client to the elastic IP which in turn forwards them to the private IP address on one of the BIG-IP instances. Should a failure occur, the other BIG-IP instance will claim the responsibilities by placing an API call to AWS, requesting that the elastic IP address be forwarded to it.
By integrating with the ELB, the BIG-IP platform can provide application services that integrate seamlessly with AWS capabilities such as multiple AZs and auto scaling BIG-IP nodes.
Placing the ELB in front of a BIG-IP instance simplifies deployment across multiple AZs, because the ELB can seamlessly balance traffic to the individual application stacks within each AZ where a BIG-IP instance is providing application services. This approach simplifies load balancing across multiple AZs.
When elasticity of BIG-IP instances is needed, an ELB with auto scale can automatically scale up and down a pool of BIG-IP virtual appliances, providing application services such as a web application firewall, identity management, or SSL termination. Using an ELB sandwich, traffic is routed to the first ELB which balances and auto scales traffic to a pool of BIG-IP instances. To simplify configuration across the BIG-IP pool, each BIG-IP instance has a single ELB address in the server pool. The second ELB then routes traffic to the downstream server pool.
Various combinations of ELB and BIG-IP topologies provide auto scaling, availability, and application services that are unavailable to either alone. By exploiting the advantages of both ELB and the BIG-IP platform, the architect can provide the level of services needed for a particular deployment.
To enable repeatable and scripted deployments, AWS provides Cloud Formation Templates (CFTs), which simplify both deployment and ongoing management. After the creation of a CFT for the desired service or application architecture, AWS can use it to provision an application quickly and reliably. CFTs are particularly useful in DevOps environments, allowing teams to easily create repeatable processes for testing and production deployment.
F5 not only supports using CFTs to deploy BIG-IP instances, but provides several reference CFT files for typical BIG-IP deployments.
Adjusting the parameters in the reference CFT files enables scripted deployments of BIG-IP solutions for different scenarios, including automatically scaling BIG-IP instances or back-end servers behind BIG-IP instances, as well as more complicated scenarios. By automating repeatable deployments within AWS using CFTs and F5 solutions, complex application environments can be deployed quickly and with little work.
Of course, technology is of little use if it cannot be leveraged fully. To that end, F5 provides extensive documentation. Documentation is available for the BIG-IP platform in general, and for the specifics of a BIG-IP instance within AWS. A good starting point for any question is at Ask F5.
The documentation tab provides information about specific BIG-IP modules as well as an entire section on AWS integration. The AWS portal provides a searchable interface for documentation, articles, community, and resources from getting started to complex deployment scenarios.
For those questions not answered by documentation, the F5 DevCentral community is ready to provide answers and assistance.
The march toward public cloud adoption is no longer a fad, but an enduring trend in IT. Amazon Web Services, as the world's largest and most comprehensive provider of public cloud services, gives organizations the ability to build, test, and deploy applications without any on-premises equipment. F5 has made its advanced application delivery services available as part of the AWS ecosystem—and has configured them to help apps go faster, smarter, and safer in AWS cloud environments. | <urn:uuid:85b62e5d-1186-4bb2-ae83-7c3d27b25d0a> | CC-MAIN-2022-40 | https://www.f5.com/de_de/services/resources/white-papers/f5-and-aws-advanced-application-delivery-services-in-the-cloud | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337339.70/warc/CC-MAIN-20221002181356-20221002211356-00554.warc.gz | en | 0.916904 | 4,142 | 2.78125 | 3 |
Like many other sectors, the space industry needs to secure its supply chain from start to finish.
Since President John F. Kennedy announced plans for a lunar landing in 1961, the United States has led space exploration and innovation. Technology has played a massive role in the nation's successes across the space industry, but it has also provided adversaries with new opportunities to infiltrate and dismantle our galactic efforts and national security through cyberattacks on space systems and devices.
Taking Action Out of This World
In September, the Trump administration recognized mounting threats by issuing the Cybersecurity Principles for Space Systems policy directive. Detailing the importance of protecting space systems and devices from cybercrime, the directive calls on agencies to “define best practices, establish cybersecurity-informed norms, and promote improved cybersecurity behaviors throughout the nation’s industrial base for space systems,” focusing on securing the supply chain from start to finish and sharing information with the appropriate governments and private actors.
The recommendations and examples of malicious attacks outlined in the directive align with growing concerns over the shortcomings of traditional security solutions that leave operational technology—the software and hardware that controls physical devices—vulnerable to attacks. With this knowledge, protecting space systems can be achieved by following the current approach to securing critical infrastructure: fighting credential-based attacks.
Operational Technology in Space: All Systems Go
Organizations that rely on OT span several sectors, including utilities, transportation and manufacturing. Historically, critical infrastructure has fallen victim to cyberattacks through vulnerabilities in system designs, outsourcing, and an emphasis on function over security. IT security departments have also faced staff shortages, and the lack of cybersecurity expertise across critical infrastructure organizations has led to the introduction of automation and machine learning as effective measures to help combat insider threats.
Examples of critical infrastructure in space include the NASA satellites orbiting Earth, which are equipped with cameras and scientific sensors to collect data about the planet. Satellites can help scientists control the spread of disease, monitor wildfires and volcanoes, and predict weather and climate. If compromised, these machines could put lives at risk by disrupting communication, impacting food crops, or providing misleading information. Rovers, spacecraft and the International Space Station are also susceptible to cyberattacks in which hackers can infiltrate coded messages and computer algorithms, underscoring the risks associated with insider threats and the need to secure space systems against credential-based attacks.
Houston, We Have Some Critical Infrastructure Problems
Securing the systems, networks and channels for space systems starts with protecting data during the creation stage, where IT teams can test the security of devices and discover vulnerabilities early in the process. For example, the creation of shuttles, rovers and other space vehicles begins with design and identifying necessary tools, then moves to construction and rigorous testing. Issues flagged during this process allow teams to make necessary adjustments to the infrastructure. Teams examining the function and performance of these devices should also incorporate security testing and implement tools capable of identifying issues down the line.
User and entity behavior analytics, or UEBA, tools are valued for their detection capabilities: they use advanced analytics to let IT teams quickly identify device behavior that is abnormal or risky; find compromised user credentials, privileged user accounts, and malicious insiders; and create operational efficiency with automated incident response. UEBA achieves this in critical infrastructure environments by using all of the ingested data points to baseline normal behavior for every user and machine. Applying these technologies to space systems and devices means risky or abnormal behavior can be caught early, preventing severe damage, intercepting tampered messages, and ensuring that communication signals remain uninterrupted.
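To make the baselining idea concrete, the following is a minimal, illustrative sketch of how an analytics pipeline might learn each user's or device's normal activity level and flag deviations for review. It is not Exabeam's implementation or any particular UEBA product; the entity names, the single command-count metric, and the simple z-score threshold are assumptions chosen for clarity, whereas production systems ingest many more signals and use far richer models.

```python
from collections import defaultdict
from statistics import mean, stdev

# Hypothetical telemetry gathered during a known-good baseline window, e.g.
# daily command counts per ground-station account or per onboard device.
# All entity names and values below are illustrative.
baseline_events = [
    ("ops_user_1", 42), ("ops_user_1", 40), ("ops_user_1", 45), ("ops_user_1", 41),
    ("rover_ctrl", 12), ("rover_ctrl", 15), ("rover_ctrl", 11), ("rover_ctrl", 14),
]

def build_baselines(events):
    """Group observations by entity and record each entity's normal range."""
    grouped = defaultdict(list)
    for entity, value in events:
        grouped[entity].append(value)
    return {
        entity: (mean(values), stdev(values))
        for entity, values in grouped.items()
        if len(values) >= 2  # need at least two samples to estimate spread
    }

def score_event(baselines, entity, value, threshold=3.0):
    """Return (is_anomalous, z_score) for a new observation against the baseline."""
    if entity not in baselines:
        # No history for this entity yet, so route it to an analyst for review.
        return True, None
    mu, sigma = baselines[entity]
    if sigma == 0:
        return value != mu, None
    z = abs(value - mu) / sigma
    return z > threshold, z

baselines = build_baselines(baseline_events)

# A sudden burst of commands from an account that normally issues about a dozen
# per day is flagged, as is activity from an account with no baseline at all.
for entity, value in [("rover_ctrl", 13), ("rover_ctrl", 140), ("new_vendor_acct", 5)]:
    anomalous, z = score_event(baselines, entity, value)
    label = "ANOMALY" if anomalous else "normal"
    detail = f" (z={z:.1f})" if z is not None else ""
    print(f"{entity}: {value} -> {label}{detail}")
```

In a real deployment, flagged events would feed an automated incident response workflow rather than a simple print statement, and the baseline would be recomputed continuously as new telemetry arrives.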
Small Steps for Organizations, Giant Leaps Against Insider Threats
Modern space systems require equally modern UEBA solutions rather than legacy tools that cannot detect abnormal behavior in OT devices or their users. While the focus on securing OT and supporting resilient critical infrastructure has traditionally been applied to more familiar systems on Earth, the most recent directive recognizes that securing our space systems and their OT devices is just as vital to the continuity of critical services, the protection of the nation and the safety of people as it is to expanding our knowledge of the cosmos.
Trevor Daughney is vice president of product marketing at Exabeam. | <urn:uuid:d266fea9-b66b-4f64-b13f-5ce61071f0f0> | CC-MAIN-2022-40 | https://www.nextgov.com/ideas/2021/03/securing-space-next-frontier-credential-based-attacks/172357/?oref=ng-next-story | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335286.15/warc/CC-MAIN-20220928212030-20220929002030-00754.warc.gz | en | 0.925526 | 837 | 3.28125 | 3 |