text stringlengths 234 589k | id stringlengths 47 47 | dump stringclasses 62 values | url stringlengths 16 734 | date stringlengths 20 20 ⌀ | file_path stringlengths 109 155 | language stringclasses 1 value | language_score float64 0.65 1 | token_count int64 57 124k | score float64 2.52 4.91 | int_score int64 3 5 |
|---|---|---|---|---|---|---|---|---|---|---|
About 246 million years in the past, a marine reptile roughly the size of a humpback whale patrolled the seas over present-day Nevada.
This ichthyosaur wasn’t simply outstanding due to its large dimension, based on scientists who just lately analyzed a partial skeleton recovered from the Augusta Mountains. The brand new species notably lived just some million years after the very first ichthyosaurs—which had been roughly dog-sized—appeared within the fossil document. This implies these swimmers supersized themselves way more rapidly than whales did, based on research reported on December 24 in Science.
“This fossil mixed with the [other] fauna that we’re discovering in Nevada is a real testomony to how resilient life is, and how briskly evolution can proceed if the environmental circumstances are proper and the chance is there,” says research coauthor Lars Schmitz, a paleontologist on the W.M. Keck Science Division of Claremont McKenna, Scripps, and Pitzer Schools. “Even after an enormous extinction occasion the place your complete world is in turmoil, life can diversify actually, actually quick.”
Ichthyosaurs first arose 249 million years in the past and hunted the world’s oceans for the following 150 million years earlier than going extinct. Their streamlined our bodies, flippers, and enormous eyes gave them a barely dolphin-like look, Schmitz says. Certainly, ichthyosaurs shared a couple of similarities with cetaceans—whales, dolphins, and porpoises. Each teams developed from land-dwelling animals that returned to the ocean, developed comparable physique plans together with highly effective tails to propel themselves via the water, and in some circumstances grew to huge sizes.
The ichthyosaur fossil that Schmitz and his colleagues analyzed features a cranium greater than 6 ft lengthy, in addition to elements of the precise arm, backbone, and shoulders. The brand new species, which they named Cymbospondyus youngorum, had an extended snout full of pointed enamel and a slender physique.
Primarily based on the scale of its cranium, the researchers estimated that C. youngorum would have reached roughly 58 ft lengthy and weighed just below 50 tons. “This one is way bigger than all the opposite ichthyosaurs that lived earlier and on the similar time, so [it’s] basically the primary large within the oceans,” Schmitz says.
The stays of a number of different massive ichthyosaurs round 33 ft in size have additionally been found close to the C. youngorum specimen, he provides. Bulking up seemingly got here with a number of benefits; within the oceans, bigger animals are typically environment friendly hunters, are protected against different predators, and might extra simply regulate their physique temperature.
The researchers in contrast C. youngorum to different ichthyosaurs of various ages and anatomical traits, and recognized two “pulses” of fast development early within the group’s evolutionary historical past. “It actually helped us to grasp that physique dimension for ichthyosaurs developed tremendous quick,” Schmitz says.”
[Related: This ancient millipede was as big as a car]
He and his crew then in contrast the ichthyosaur and whale household bushes. They calculated that ichthyosaurs turned giants throughout the first 3 million years of their 150-million-year historical past, whereas whales took 45 to 50 million years of their 56-million-year evolutionary historical past to succeed in roughly comparable physique sizes.
“Ichthyosaurs hands-down win when it comes to attending to that dimension so early,” says Benjamin Moon, a paleontologist on the College of Bristol in England who has additionally studied ichthyosaur evolution.
C. youngorum and its neighbors are all of the extra hanging as a result of they arrived on the scene not lengthy after a mass extinction—one which worn out 81 % of all marine species round 252 million years in the past. Lots of the phytoplankton and algae that gasoline at present’s marine meals chains had but to evolve, elevating the query of how the ecosystem supported such massive predators.
To search out out, the researchers surveyed the wealth of recognized fossils from C. youngorum’s Triassic ecosystem, together with smaller ichthyosaurs, fish, and squid-like ammonites. The crew used a pc mannequin to research whether or not there could be sufficient vitality to maintain C. youngorum, assuming the fossils added as much as a consultant image of the traditional meals chain.
Considerably surprisingly, Schmitz says, the researchers discovered that the ecosystem preserved within the fossil document was “steady and properly functioning” sufficient to help a lot of beefy marine reptiles. One clue could lie in the truth that there aren’t many fish fossils within the space. “We’re chopping out a number of steps in that meals chain, so [there’s] extra direct switch of vitality as much as the highest ranges,” Schmitz says.
The following step, he says, will probably be for researchers to discover whether or not physique dimension has modified over time in comparable methods amongst different teams of extinct and dwelling animals which have returned to the water, together with plesiosaurs and turtles.
“It’s very neat, what they’ve executed with attempting to reconstruct this ecosystem,” Moon says. “That’s attention-grabbing from the standpoint of claiming that there’s sufficient meals there to help these animals getting massive and having variety.”
One other query for future exploration is the extent to which different variables resembling temperature may additionally have influenced the ichthyosaurs’ development spurt, they are saying. | <urn:uuid:5e25476a-2198-42eb-885b-05ee59efe7bf> | CC-MAIN-2022-40 | https://dimkts.com/these-ancient-marine-reptiles-got-very-big-very-fast/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335304.71/warc/CC-MAIN-20220929034214-20220929064214-00092.warc.gz | en | 0.938603 | 1,230 | 3.515625 | 4 |
Tips for Helping Your Kids Stay Safe on the Internet
Today, it’s hard to picture a life without the internet and the all-encompassing connectivity it offers. But, for those of us who grew up just before technology exploded, it’s rather difficult to fathom the constant state of connectivity and online access available for the current generation. And, while the internet is arguably the most important invention of the past 50 years, it can be a dangerous place, especially for children.
From unsafe content to dangerous online interaction with strangers, every parent wants to do everything in their power to keep their kids safe. But where do you start when it comes to keeping your children safe on the internet?
Today, we’re looking at what dangers your young ones might be exposed to online and how you can keep them safe as they enjoy everything the internet has to offer.
Why kids and teens need protection online
Kids and teens are among the most active internet users. Unfortunately, the threats and risks associated with being online will not go away just because they are young or because they are your children.
Cyberbullying, inappropriate content, social media overuse, online predators, and online scams targeted at children are just a handful of risks that children might face almost every single time they go online. Unfortunately, kids and teens tend to lack the mental fortitude required to stay safe online. So, it’s up to the parents to provide the necessary tools and information to help them navigate the online life safely. Without further ado, let’s jump in and see how you can help your children.
Tips to keep your children safe online
Here are some of the best practices from security experts here at NordPass to help your children stay safe online:
Teach your kids about the importance of privacy
Before the advent of the internet, pretty much everyone was taught not to speak or open doors to strangers. This should also apply to the internet generation. Teach your children the importance of not giving up their names, addresses, or any other personal information to strangers online. Also, warn them against chatting with strangers online and emphasize that they should never set up meetings with someone who they only know on the internet.
Make use of parental controls
Kids love wandering on the internet. Unfortunately, a fun experience could go sideways. Parental controls can help you and your children avoid unwanted consequences online. Use parental controls to limit internet use to a few trusted websites and apps on any device that offers such an option. Consider creating separate accounts for your children on Netflix, YouTube, or any other streaming services to limit their exposure to sensitive and otherwise inappropriate content.
Talk about gaming
It’s true that gaming can be educational and beneficial. But, as with everything on the internet, there’s a flipside. Children are at risk of cyberbullying, identity theft, and even credit card fraud, and these risks are especially prevalent in online gaming. So, be sure to research games before deciding whether they’re right for your children. Also, consider allowing them to play online games with real-life friends only and encourage them to tell you if they are being bullied — many games have the feature to “block” other players. Finally, be sure to monitor in-app purchases, especially if your card is linked to your kids’ gaming account.
Limit access to social media
Social media. Some argue that it’s the worst thing about the internet, while others say it might as well be the best. Whatever the case may be, nowadays, we’re quite informed about its negative aspects and impact on children. It might be borderline impossible to keep your kids off social media, so be sure to talk with your children all about it. Discuss the dark side of it and how it can be addictive and toxic. It is important they understand that it’s not just fun and games, and that posts on social media could have real-life consequences. Maybe consider rules and limitations regarding social media use, but don’t be hard on them. Remember that being on social media is a part of this generation’s Zeitgeist.
Teach your kids cybersecurity basics
Cybersecurity might not be the most interesting topic to discuss with kids, but nowadays it’s a fact of life that can’t be thought away. Try to convey the concept of identity theft and its consequences in a simple and clear way. Also, talk about the importance of password security. Warn them against using weak passwords and let them know why sharing passwords could be dangerous. One of the best ways to ensure password security in a family setting is by adopting a password manager such as NordPass. A password manager allows you to securely store and access passwords whenever you need them and offers secure password sharing. Rest assured that kids today will get the hang of a password manager in no time at all.
Lead by example
Try to raise good online citizens — lead by example. Explain that hate, bullying, and trolling have no place in both the real and digital worlds. Warn them that everything they say online will be up for public view, most likely forever, and explain how it could have an impact on their future. Show them the good side of the internet: how it connects people, how it can expand their worldview, and how it can be fun.
Subscribe to NordPass news
Get the latest news and tips from NordPass straight to your inbox. | <urn:uuid:622351fe-73b6-4c7d-a1e8-a2e2c24e7d24> | CC-MAIN-2022-40 | https://nordpass.com/blog/6-tips-kids-online-security/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337971.74/warc/CC-MAIN-20221007045521-20221007075521-00092.warc.gz | en | 0.937881 | 1,122 | 2.734375 | 3 |
Password Sharing: Doing It The Right Way
The BBC reports that Netflix, the leading streaming platform in the world, is trying to crack down on ineligible users. The ineligible users, the streaming service refers to, are those who practice sharing account passwords. It’s true that it’s against the company's terms and services, yet it is also true that this is a common practice that is hardly going away.
Here, at NordPass we strongly believe that sharing is caring. But when it comes to sharing passwords, security should come first. Today, we’re diving into all things related to password sharing. The security risk that it might pose, and an overview of do’s and don'ts.
Cybersecurity and password sharing
In today's world cybersecurity is a fact of life. Passwords are an integral part of that fact. After all, a password is our first line of defense, yet many of us still struggle with proper password security. According to Verizon's 2020 Data Breach Investigations Report (DBIR), over 80% of hacking-related breaches involved the use of lost or stolen credentials.
The same can be said about the home environment. Google’s US passwords statistic indicates that 43% of US adults have shared their personal passwords with a partner or a family member. The report also notes that the most popular passwords that are passed around are the ones used for streaming websites and other online entertainment platforms. As many as 22% of US adults have given their Netflix or Hulu password to a partner or family member.
Password sharing isn’t going away any time soon, so it’s best to understand how to do it the right way to avoid any cybersecurity risks.
The do’s and don’ts of password sharing
Never share passwords over insecure channels
Unfortunately, sharing passwords via text messages, emails, and other internet messaging platforms is the most common way. All of these are what in the cybersecurity world are known as unsecure channels as they are rarely encrypted. This means that if a message or an email with your password is intercepted, the password would appear in plain text and all that a hacker would need to do is copy and paste it.
Use cloud-based file storage services
To reduce the chances of getting your passwords exposed to undesirable third-parties, you can choose to share them over cloud-based file sharing platforms. The important thing here is that those services should ensure security of the file you store. NordLocker is one such service that provides encrypted cloud-storage.
Use a password manager
Password managers are considered to be the safest option when it comes to password sharing, and for good reason. After all, password managers such as NordPass are designed to securely store passwords and other sensitive data in an encrypted vault. Most reputable password managers will offer you a way to securely share passwords. Usually a password in such an instance will be sent in an encrypted form to another user. With NordPass, sharing passwords is quick and simple. All you need to do is select a password you want to share in a vault, select the three dots icon and click share.
Take a look at how to share your Netflix passwords (while you still can) with NordPass.
We've looked at the best do's and the absolute don'ts when it comes to sharing passwords. As long as you take these steps and protect yourself, you can feel safe about password sharing. Just remember that sharing passwords needs to be taken seriously on both sides of the equation because a single mistake could lead to unwanted and even dangerous consequences.
Subscribe to NordPass news
Get the latest news and tips from NordPass straight to your inbox. | <urn:uuid:1ea27ad1-15e8-41e1-99e0-d986d5293ff5> | CC-MAIN-2022-40 | https://nordpass.com/blog/sharing-passwords-right-way/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337971.74/warc/CC-MAIN-20221007045521-20221007075521-00092.warc.gz | en | 0.948834 | 756 | 2.515625 | 3 |
You Are Concerned About Data Privacy – but Are You Doing Anything About It?
You’d be hard-pressed to find anyone who doesn’t care about their privacy. It’s human nature. You want control over what private information you share and who you share it with. Unfortunately, you can lose this control with a careless click.
What is private data?
Private data is anything that reveals information about you. It’s your name, your photos, your posts on social media, your email addresses, your IP.
Some of these details are highly sensitive. Sensitive data includes your banking information, genetic data, health records, social security number, and home address. As a rule of thumb, if you think your privacy would be violated if this data became public, it’s probably private.
What is data privacy and why is it important?
Data privacy, also known as information privacy, generally refers to a person’s ability to choose for themselves when, how, with whom, and to what extent they want to share their private data with others.
As Internet usage has become ubiquitous over the years, so has the importance of data privacy and protection. Various websites and applications often collect your private data in exchange for its services.
Some platforms and applications may exceed their reach when it comes to data collection, storage, and usage. Some may have a lax attitude toward private data protection.
The central questions to ask when we talk about data privacy are:
Who has access to information about you?
Who controls this access?
Is it secure?
When private data falls into the wrong hands, consequences can be dire. A data breach on an online platform could put your sensitive information into the hands of cyber crooks. Users whose data is leaked are put at risk of identity theft, bank fraud, and other online-related scams and crimes. These days, data is king and there’s no way around it. Thus, it’s not surprising that its protection is paramount.
Level up your online safety
With advanced features.
Your privacy in the hands of the government
Various entities handle your private data. The first among them is the government and its institutions. Let’s take the justice system as an example. You cannot go to court or file a claim without revealing your identity. And that’s fine — it wouldn’t be fair to the other side if you were suing them anonymously.
Similarly, you can’t get public services (for example, electricity, a high school education, healthcare) without identifying yourself.
In a perfect world, the government does not infringe upon your privacy more than it’s necessary. In the real world, some governments store every bit of data they can get their hands on. Even worse, others engage in mass surveillance of their citizens.
Your privacy in the hands of business
You can buy apples at a stand and remain a stranger to the fruit seller. But buy apples online, and you’ll give away private information about yourself. It may be a fact as simple as that you like apples. This information will be sold to an advertiser, and the next time you go online, an ad for apples will pop up on your screen.
Almost everything you do online leaves a data breadcrumb. You have little control over how these breadcrumbs are collected.
Usually, it works like this. Before you start using a new online service, you have to read a wall of fine print. You do not do so, because you don’t want to wade through paragraphs of jargon. You click that you agree, and that’s how you begin to give away your private data. You cannot change the agreement, and you cannot bargain — it’s take it or leave it. This service will collect your data and use it for marketing purposes or sell it to the highest bidder. There’s nothing you can do about it.
It’s easy to say, “Don’t use these services.” The problem is that most online services collect information. If you want none of your private data on the internet, you have to quit using the internet. And that’s a price most people find too high to pay.
Data protection laws
Over the years, as technology and the internet came to be an inseparable part of our lives, governments around the globe took part in creating and passing laws regulating private data. Most countries today have various laws governing data collection, storage, and usage. Here are some of the most important and impactful ones:
The General Data Protection Regulation (GDPR)
The GDPR regulates data privacy laws across all EU member countries. It was designed to replace previous data regulation laws and provide greater protections and rights to individuals, essentially giving subjects the right to control their personal data and ensuring the right to be forgotten. The GDPR also outlines how individuals’ private data should be collected, stored, and used and outlines limitations. The GDPR is one of the most impactful and comprehensive regulations to have been developed in the past decade
National data protection laws
Many countries around the world, including Australia, Canada, and Japan have comprehensive data protection laws in place that outline the ways personal data should be handled much like the GDPR.
Important data privacy and protection trends for 2022
The UK to roll out its own data regulation laws
The UK is looking to roll out and fully implement its own version of the GDPR this year. The UK is looking to move away from the European GDPR to what it believes will be a more agile framework. Experts debate whether the new British regulations could endanger the data adequacy agreement it has with the EU, which facilitates data flow between companies in the EU and UK.
More data privacy and data security regulations around the world
Due to the sheer number and scale of breaches that took place in 2021, governments all around the globe will likely issue more specific regulations and requirements around breach reporting. Additional storage and usage regulations will likely make an appearance. An increase in fines related to violations of data protection regulations is also expected as governments will be looking to make examples of non-compliant businesses.
Changes in the AdTech landscape
New regulations and regulatory techniques are set to increasingly examine the use of AdTech to track individuals online. Throughout the year, we’re also expecting AdTech companies to shift toward privacy-forward business models in an effort to address changing expectations. Experts note that 2022 could also bring the death of the cookie as a means to collect data. Rob Shavell, the CEO of Abine/DeleteMedemise, suggests that the death of the cookie will be a consequence of changes within the AdTech landscape driven by new approaches taken by tech giants such as Google and Apple. Throughout 2022, we’ll see post-cookie solutions put to the test extensively.
More transparency in data privacy
With large-scale data breaches making the headlines every other day, users are much more aware of data protection laws and much more concerned about privacy in general. Trust in social media platforms over the years has plummeted to an all-time low.
Companies are expected to build trust through transparency. While it may not be an easy endeavor, those that fail to be transparent about their data collection, storage, and protection practices are likely to face excessive scrutiny from the public.
The rise of data protection as a service
Companies focusing on data security are striving to meet the increasing needs of concerned consumers. In fact, the data protection as a service market is expected to be worth $18.96BN by 2026. So it’s not surprising that tech hubs all around the world are developing new and exciting ways to scale and deploy data protection services to the end user.
What can you do?
Information privacy will become an even hotter topic once technologies create more invasive tools. You’ll be surrounded by facial-recognition cameras, smart speakers that listen to your conversations, e-textiles, wearable health monitors, and other data-gathering gadgets.
That means you must take action now:
Privacy protection comes with informed politicians. When you’re deciding who to vote for, choose wisely. Sure, it’s hard to find a politician who understands tech. But if enough voters begin to take privacy issues seriously, more politicians will be incentivized to become informed.
Use tools and services that enhance your privacy. Choose private search engines, private email providers, and privacy-focused browsers. And use encryption tools — they’re much more user friendly than they sound. NordPass itself uses state-of-the-art encryption to protect your passwords. In addition, NordVPN makes sure your traffic is invisible to your internet service provider.
Don’t need it? Don’t use it. Don't sign up if you don’t really need the service. And if you do need it, read the fine print before clicking “Agree.” If the fine print is too dense to be read, look for comments and reviews regarding the service’s privacy policies.
Fight for information privacy and make the internet better for all.
Subscribe to NordPass news
Get the latest news and tips from NordPass straight to your inbox. | <urn:uuid:3bb9515e-ba0a-4055-86b8-bb494e9f5d4d> | CC-MAIN-2022-40 | https://nordpass.com/blog/why-is-data-privacy-important/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337971.74/warc/CC-MAIN-20221007045521-20221007075521-00092.warc.gz | en | 0.934473 | 1,915 | 2.96875 | 3 |
Face Authentication is a technology that enables people to access online services, physical settings, and other resources using images of their face.
Face authentication, also called face/facial recognition, relies on mobile and other devices' native sensing technology. Some third-party biometric algorithms, however, are deployed as software that leverages device cameras for this purpose. Liveness detection — with the user prompted to nod, smile, or otherwise move during authentication, or continuously during the session — is often added as an additional security layer.
Some face authentication solutions are architected in a decentralized model using FIDO standards that ensure a consumer or employee face template is secured on the user’s mobile device. Here, a user’s face scan is verified locally against itself, a token is sent to the service provider, and access is granted. The biometric itself is not stored at the service provider (true secret).
Other face authentication solutions are architected in a legacy centralized scheme in which user templates are stored at the service provider, and matching is done against a library of all other users’ biometrics (shared secret). These systems are commonplace in criminal justice, international border crossings, and national security settings.
"Some airlines and retail establishments are using face authentication to deliver people a faster, more personal experience. Before opting into these services, it's a good practice to ask the airline or store where the biometrics are stored." | <urn:uuid:ed503adc-9096-49cf-bae5-776cd1f8c453> | CC-MAIN-2022-40 | https://www.hypr.com/security-encyclopedia/face-authentication | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334912.28/warc/CC-MAIN-20220926144455-20220926174455-00292.warc.gz | en | 0.935215 | 298 | 3.203125 | 3 |
Hashcat is a password cracking tool used for licit and illicit purposes.
Hashat is a particularly fast, efficient, and versatile hacking tool that assists brute-force attacks by conducting them with hash values of passwords that the tool is guessing or applying. When used for benign purposes, such as in penetration testing one’s own infrastructure, it can reveal compromised or easy to guess credentials.
Hashcat is, however, better known for being used for nefarious purposes. Hackers use Hashcat, readily available for download on all major operating systems, to automate attacks against passwords and other shared secrets. It gives the user the ability to brute-force credential stores using known hashes, to conduct dictionary attacks and rainbow tables, and to reverse engineer readable information on user behavior into hashed-password combination attacks.
“Breaches of complex passwords are on the rise as hackers use Hashcat as a means of cracking passwords using known hashes. This is next-level hacking that goes beyond the simple stuffing of credentials into username/password fields on web applications.” | <urn:uuid:d23d1339-63d0-4555-9911-0a0cf10b1b50> | CC-MAIN-2022-40 | https://www.hypr.com/security-encyclopedia/hashcat | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334912.28/warc/CC-MAIN-20220926144455-20220926174455-00292.warc.gz | en | 0.91993 | 220 | 2.65625 | 3 |
The new economy, digitization, big data along with big data analytics are business buzzwords that have one thing in common: they are still big business. They represent the biggest change possible. Everybody can think of them in some way, but they are terms that are still in widespread use.
Big data analytics can be understood more broadly. It’s one reason to study the subject in depth and understand the opportunities it presents for your business.
Big data analytics involves the analysis of large amounts of data from different data sources (Big Data). It uses the knowledge gained to make decisions, optimize business processes and exploit competitive advantages.
What Happens When Analyzing Large Amounts Of Data?
Big Data Analytics Takes Place In Three Stages:
Extracting Data From Data Sources
Today, data can be extracted from a variety of sources, from web analytics tools to smart home and smart factory applications. The challenge is to bring together this usually unstructured mass of data.
The term data mining is often used for this purpose. This means that the data is available in raw form, for example in a mine, and needs to be extracted before it can be processed in a targeted way.
Structuring and Optimization The data Set
After the first step, there is a large amount of data that is still practically unusable. To do this, the right software will structure this amount of data according to parameters that you define.
Data Analysis And Processing
If the first two steps are mainly useful for working with the dataset, the real value lies in the third step: you can gain insights from the data analysis and use them to make decisions and optimize your business.
This step usually corresponds to big data analysis, sometimes used synonymously with big data analysis. It is a subsection of the big data analytics review.
Application Of Data Analytics
Big data analytics is used by companies in the business intelligence field. Analytics can provide users with important contextual information that can be used to optimize one or more processes. Efficiency gains can give you an advantage over your competitors.
You can also process the data for specific purposes, such as digitizing sales: effective sales tracking increases the likelihood of reaching and convincing potential customers in the long term.
Challenges In Big Data Analytics
The challenges of big data lies in the data itself:
- Unstructured data is available in many places in many formats.
- Data sources must be consistent with each other.
- Data must be diverse and disparate across many sources.
Define What You Want To Achieve By Analyzing Big Data:
If you really want to use data to achieve your goal, you need to define the goal you want to achieve by analyzing big data. To do this, you need to know your company’s capabilities, know how to perform the analysis, select the right technology and use it.
The final cost of big data analysis depends on these decisions. To achieve a high return on investment, the investment should depend on the desired, preferably specific, objective.
Big Data Analytics Tools
There are many different technologies for analyzing large amounts of data. The ones listed here are well known and each focuses on a different area:
Informatica PowerCenter is one of the most widely used ETL (Extract, Transform, and Load) tools in the world. No matter if you have a number of databases or a data warehouse, Informatica PowerCenter lets you safely process the data they hold while maintaining its integrity.
Today, modern businesses need different applications for proper data analysis, track events, find indicators or reporting in order to better acquisition and decision making. To solve this problem and provide a unified solution for businesses, IBM has created the IBM Cognos Business Intelligence suite. With the growing popularity of BI solutions, the demand for IBM cognos has increased dramatically.
Apache Hadoop can be used in different architectures and on different hardware. It allows you to aggregate large amounts of data in a relatively fast cluster.
The use of SAP Business Objects is becoming extremely important in our constantly evolving and changing world. SAP BusinessObject BI tools are highly scalable and extensible. It can serve tens of hundreds of thousands of users and can be scaled up or down depending on the needs of the organization using it.
Splunk provides centralized, real-time, cross-system access to historical and current data. Splunk thus becomes a data platform that enables faster problem identification and resolution.
With Tableau, you can extract and process data. With visualization, you can gain instant insights that you can use to optimize your processes.
Zoho is a big package with many programs. These include CRM, home office toolkit, financial platform and data analytics.
Importance of Big Data Analytics In Modern Business
Today, big data has become an asset. Take a look at some of the world’s biggest technology companies. They value their data, which they constantly analyze to make their operations more efficient and develop new products.
In a recent survey, 93% of companies consider big data initiatives “very important”. Using big data analytics solutions helps companies uncover strategic value and make the best use of their resources.
Finding value in big data is not just about analyzing the data. It’s a full exploration process that requires analysts, business users and managers to ask the right questions, identify patterns, make educated guesses and predict behavior.
The importance of big data does not depend on how much data a company has. It’s about how the company uses the data it collects.
Each company uses the data it collects in its own way. The more efficiently a company uses its data, the faster it grows.
In today’s market, companies need to collect and analyze data. Let’s see why big data is so important:
Big data tools such as Apache Hadoop, Spark, etc. offer advantages to companies when they need to store large amounts of data. These tools help companies to find more efficient ways of doing business.
In memory, real-time analytics helps businesses collect data from multiple sources. Tools such as Hadoop help them analyze data instantly and make informed decisions quickly.
Understanding Market Conditions
Big data analysis helps businesses better understand market conditions.
For example, analyzing customer buying behavior helps companies identify their best-selling products and manufacture them accordingly. This helps companies to stay ahead of competitors.
Monitoring Social Media
Companies can use tools to process large data sets to analyze emotions. This allows them to get feedback about their company, i.e. find out who is saying what about it.
Companies can use big data tools to improve their online presence.
Improve Customer Acquisition
Customers are an important asset on which all businesses depend. No business can succeed without a solid customer base. But even with a good customer base, they should not ignore the competition in the market.
Not knowing what your customers want will affect the success of your business. This results in loss of customers, which has a negative impact on the growth of the company.
Big data analytics helps companies identify trends and patterns with customers. Analyzing customer behavior leads to profitable business.
Providing Market Information
Big data analytics shapes every business process. It enables companies to meet customer expectations. Big data analytics helps transform a company’s product portfolio. It provides effective marketing campaigns, stimulates innovation and product development.
Benefits Of Big Data Analytics
Big data analytics is well established across a variety of industries. Thus, big data is used in many industries such as finance, banking, healthcare, education, government, retail, manufacturing and many more.
Many companies such as Amazon, Spotify, Linkedin, Netflix etc. use big data analytics. The banking sector is the largest user of big data analytics. The education sector also uses data analytics to improve student performance and to help teachers teach.
Big data analytics helps retailers – both traditional and online – to understand customer behavior and offer products that match their interest. This helps them to develop new and improved products, which is very beneficial for the business.
However, many companies are still not clear about what big data is and how this analytical capability in commerce can benefit their business model. Lets see some of the sectors that are already using big data analytics.
Analyzing large amounts of data can be a crucial advantage during development. For example, by assessing social media channels or customer feedback, you can identify social trends and market gaps early on.
As manufacturing becomes smarter, it is no surprise that big data is playing an important role in this area. For example, many processes are monitored by sensors that generate large amounts of data. This data can provide predictive maintenance and prevent delays or failures in production.
Distribution And Logistics
Sensors are also increasingly being used in the supply chain, for example to measure fuel consumption or to record data on the location and condition of wearing parts. The structuring of this data means that costs can be reduced in the long term by scheduling deliveries on time, changing routes and loads, and reducing downtime and maintenance costs.
Marketing And Sales
Data analysis can significantly improve customer relations. By gaining a deeper understanding of your customers’ needs, you can target individual customers directly with personalized offers.
Big data analytics can help the financial sector make reliable forecasts or risk calculations. For example, the investment sector can react more quickly to market developments or price falls.
We find that big data helps companies make informed decisions and understand their customers preferences.
It helps companies achieve rapid growth by analyzing data in real time. It enables companies to outperform their competitors and achieve success.
Big data technologies help us identify inefficiencies and opportunities in-our business. They play an important role in determining the growth of a company.
Do you have experience with big data analytics? Want to get involved but don’t know where to start?
At ExistBI, we look forward to sharing our ideas with you. We’d love to help you discover the potential of big data analytics for your business and put it into practice. | <urn:uuid:c3814878-6e44-4289-a8aa-dd5dd900b590> | CC-MAIN-2022-40 | https://www.existbi.com/blog/big-data-analytics-importance-and-benefits-in-modern-businesses/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335059.31/warc/CC-MAIN-20220927225413-20220928015413-00292.warc.gz | en | 0.92332 | 2,112 | 2.921875 | 3 |
Real–Time Streaming Protocol (RTSP) session helper (rtsp)
The Real-Time Streaming Protocol (RTSP) is an application layer protocol often used by SIP to control the delivery of multiple synchronized multimedia streams, for example, related audio and video streams. Although RTSP is capable of delivering the data streams itself it is usually used like a network remote control for multimedia servers. The protocol is intended for selecting delivery channels (like UDP, multicast UDP, and TCP) and for selecting a delivery mechanism based on the Real-Time Protocol (RTP). RTSP may also use the SIP Session Description Protocol (SDP) as a means of providing information to clients for aggregate control of a presentation consisting of streams from one or more servers, and non-aggregate control of a presentation consisting of multiple streams from a single server.
To accept RTSP sessions you must add a security policy with service set to any or to the RTSP pre-defined service (which listens on TCP ports 554, 770, and 8554 and on UDP port 554). The rtsp session helper listens on TCP ports 554, 770, and 8554.
The rtsp session help is required because RTSP uses dynamically assigned port numbers that are communicated in the packet body when end points establish a control connection. The session helper keeps track of the port numbers and opens pinholes as required. In Network Address Translation (NAT) mode, the session helper translates IP addresses and port numbers as necessary.
In a typical RTSP session the client starts the session (for example, when the user selects the Play button on a media player application) and establishes a TCP connection to the RTSP server on port 554. The client then sends an OPTIONS message to find out what audio and video features the server supports. The server responds to the OPTIONS message by specifying the name and version of the server, and a session identifier, for example, 24256-1.
The client then sends the DESCRIBE message with the URL of the actual media file the client wants to play. The server responds to the DESCRIBE message with a description of the media in the form of SDP code. The client then sends the SETUP message, which specifies the transport mechanisms acceptable to the client for streamed media, for example RTP/RTCP or RDT, and the ports on which it receives the media.
In a NAT configuration the rtsp session helper keeps track of these ports and addresses translates them as necessary. The server responds to the SETUP message and selects one of the transport protocols. When both client and server agree on a mechanism for media transport the client sends the PLAY message, and the server begins streaming the media.
Having trouble configuring your Fortinet hardware or have some questions you need answered? Check Out The Fortinet Guru Youtube Channel! Want someone else to deal with it for you? Get some consulting from Fortinet GURU!
Don't Forget To visit the YouTube Channel for the latest Fortinet Training Videos and Question / Answer sessions!
- FortinetGuru YouTube Channel
- FortiSwitch Training Videos | <urn:uuid:afbcb1e1-08d8-4f3e-baac-c651cc633fa3> | CC-MAIN-2022-40 | https://www.fortinetguru.com/2016/12/real-time-streaming-protocol-rtsp-session-helper-rtsp/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335059.31/warc/CC-MAIN-20220927225413-20220928015413-00292.warc.gz | en | 0.841762 | 644 | 2.640625 | 3 |
It's become increasingly clear that human-robot interaction is expanding rapidly, and it's time to think about how this will work.
This article is going to be a little technical and involves some math, so I'll start with the conclusions:
- Growth in human-robot interactions is happening faster than we thought. These are not devices that possess advanced AI. They are everyday devices meant to work with humans, e.g., Automated Transportation, ATMs, Education, Surveillance and Safety, Cooking, Medicine, Home Maintenance, and Mining Equipment.
- A robot does not possess the same reality as humans.
- Therefore it is unlikely that robots and humans will ever share a language.
- Asimov's Laws cannot be programmed into a robot.
- Asimov's Laws include terms like "Human" and "Harm," which are so semantically ambiguous that a robot cannot understand them.
- And what is impossible is to program into a robot is every possible action and consequence.
- What is needed is something similar to Asimov's Three Laws, with embellishments, that is generic enough that it is universally applicable to robots and indifferent to their specific morphology.
Many things are problematic about AI today, particularly the abuses of it from social media, disinformation, bias, and just poorly made applications, that thinking about living with robots seems a bit remote. I haven't given the issue a lot of thought until I came across a paper, Empowerment As Replacement for the Three Laws of Robotics. It's a provocative title for dense, academic writing, which I'll try to summarize.
The authors focused on the rapidly expanding use of robots interacting with humans today, not the fantastical vision of super-intelligent robots and how to control them. Their solution, surprisingly, is not control but empowerment. It reminds me of the Zen saying, "If your cow is unruly, give it a bigger pasture."
There is a great deal of mostly uninformed chatter about the dangers looming from human-intelligent or even super-intelligent robots and how they will be controlled from taking over from humans if they can be at all.
In Rethinking AI Ethics, I proposed that Asimov's Three Laws were incomplete, inconsistent, and not realistic. Just to review, the rules were introduced in his 1942 short story Runaround, and compiled in the novel I, Robot of 1950)
A robot may not injure a human being or, through inaction, allow a human being to come to harm.
A robot must obey the orders given by human beings except where such orders would conflict with the First Law.
A robot must protect its existence as long as such protection does not conflict with the First or Second Law.
Italicized are concepts that require a far advanced cognitive ability than current and near-term robots. If you think about the Three Laws, how would you incorporate them semantically and cognitively into a robot? How would we explain "harm" (if you recall, in the film iRobot of 2004, rogue robots could interpret the words "human" and "harm" as they wished. Because they observed humans harming other humans, they concluded they needed to kill humans to protect humans (a conclusion that humans often reach themselves).
As everyday robots become more ubiquitous, it's essential to understand they aren't people (even though we consistently anthropomorphize AI with words like train, learn, deicide, neural, and names like Ross-IBM, Kensho, Zo-Microsoft Amelia, Sofia, Alexa). The problem with incorporating the Three Rules into current-day robots is the difficult semantic comprehension of rules expressed in natural language. Already familiar and straightforward concepts, such as "harm," cannot be naively related to the robot's perspective. What that means is that robots have a radically different perspective from humans and utterly different reality. It's doubtful that robots and humans could ever have an approximation of a common language. Because it is challenging to build robots that understand what constitutes harm, how could the robot avoid inflicting it?
Asimov's Laws cannot be programmed into a robot. The authors of this paper agree, but not for the usual epistemological reasons. Instead, their thesis does not establish an explicit verbalized understanding of human language and conventions in the robots.
The empowerment concept
Empowerment endows the robot with the resources to react to a wide variety of different situations and types of robotic embodiment. This entire program is based on a mathematical model, not rules or code, especially the generous use of Metric Spaces, a branch of Topology branch. I studied Metric Spaces in the distant past, so I'll include this section in the event someone else has too. The authors claim this idea is mathematically developed, so it seems reasonable to include. A formal definition is (not necessary to read for this paper):
Metric space, in mathematics, especially Topology, an abstract set with a distance function, called a metric, that specifies a nonnegative distance between any two of its points in such a way that the following properties hold: (1) the distance from the first point to the second equals zero if and only if the points are the same, (2) the distance from the first point to the second equals the distance from the second to the first, and (3) the sum of the distance from the first point to the second and the distance from the second point to a third exceeds or equals the distance from the first to the third. The last of these properties is called the triangle inequality. The French mathematician Maurice Fréchet initiated the study of metric spaces in 1905. From: Metric space | mathematics | Britannica.
It is surprising how many papers and books have been written about Metric Spaces based on such simple principles.
The whole concept of Empowerment is a formal mathematical model that has not been implemented in actual robots. This is a theory. But their model can be used generically to computing concepts, such as self-preservation, protection of the human partner, and responding to human action. In that way, it can approximate Asimov's Three Laws of Robotics operationally without the need for a language. Some guidelines are added for the robot to effect actions based on the current set of factors and the robot's morphology. Metaphorically, it is the same as starting your Lincoln with a Toyota key fob without separate instructions. The authors propose three such policies:
First, "robot initiative": The robot can apply the principles because they are generic enough to novel situations using new goals and derived from recent cases.
Second, "breaking action equivalence": what if different actions all produce outcomes?" What facilities does the robot have to act on one of several options, and can the robot optimize once it can ensure that the primary task is satisfied?" (this is not a new concept, often referred to as Hierarchical Constraint Resolution - when there are multiple solutions that fit the criteria, just choose one)
Finally, "safety around robots": The easy answer is the "kill switch," i.e., the drastic step of shutting down the. But if the robot is carrying out a vital function where or when it is maintaining safety or preventing damage, the robot must act rather than stop acting. There is a clear need for generic, situation-aware guidelines that can inform and generate robot behavior.
How they propose to do this is in a formal, non-language-based method to capture the underlying properties of robots as tools (here I disagree. As a mathematician myself, I see Metrics Spaces and Topology in general, as a language. But I suppose their point is that it isn't Natural Language).
Instead of employing language, they apply the information-theoretic measure of Empowerment (this is not an original robotics idea, it goes back to 2008 in a paper Keep Your Options Open: An Information-Based Driving Principle for Sensorimotor Systems, and concepts from many papers of potential and causal information flow in general, which the authors describe as a "heuristic to produce characteristic behavioral phenomenologies which can be interpreted as corresponding to the Three Laws in certain, crucial aspects."
Sometimes, I think the language they use is more difficult to understand than the science. How do robots exhibit "characteristic behavioral phenomenologies?" I feel like they slipped for a moment into the anthropomorphizing syndrome.
Pressing on, the ability to cope with different, quite disparate sensorimotor configurations is desirable for the definition of general behavioral guidelines for robots. This means not defining them separately for every robot or changing them manually every time the robot's morphology changes.
Here's the math.
The authors claim they have worked this out mathematically and provided a glimmer of a multi-step action. It looks like this:
𝔈(r) := C(At→St+1∣r) ≡ maxp(at∣∣r) I (St+1; At∣r)
This simple explanation is that many actions can influence a state in the future, not only the next step but also future outcomes, say, t +3, so the distribution of results is generated starting at time t.
The equation is a "probabilistic communication channel," where the agent transmits information about its actions ( At+1, At+2, At+3) through a channel and evaluates how it affects the outcome at time 3 ( St+3.)
There's a maximal conditional in here too, (maxp(at∣∣r) I (St+1; At∣r)) which is the maximal influence of an agent's actions on its future sensor states (its potential efforts). That can be modeled by a Shannon channel capacity (another well-understood model from information theory), which returns the received signal-to-noise power ratio and what the agent may have changed at the end of its 3-step action sequence.
Another way to look at this and simplify it is that Empowerment is the channel capacity between the agent's actuators A in a sequence of time steps t and its sensors S at a later point in time.
A more visual way to depict this is with a Bayes Network:
In the diagram, S represents the robot sensor, while Sh implies the human sensor. The robot actuator is A, and, correspondingly, Ah is the human actuator. The rest of the system is represented by R. The index or subscript t denotes the time the variable is evaluated. Causal connectors of the Bayesian networks are the black arrows.
The dotted and dashed lines denote the three types of causal information flows of the three kinds of Empowerment. The direction of the potential causal flow for 3-step robot empowerment is seen as the red dotted arrows. The blue dotted arrows show human Empowerment. Lastly, human-to-robot transfer empowerment is seen as the dashed purple line.
I have this overwhelming feeling that they've pulled together a tidy model, but I'm not convinced that it meets the requirements. They've composed a reasonably simple framework, but when the authors added the need for additional "guidelines," I have the impression that the simplicity and plasticity of the approach could disintegrate. However, they have raised the issue that we need to control roots now, not in the future, and the naïve hope to be able to do it with language is probably impossible.
Without some evidentiary data that this concept can work, I can offer no opinion, but I take from this article that academics in robotics are beginning to think about managing robots while giving them a measure of autonomy. Today, we have Natural Language Processing, and we're allowed to speak to a device and have it talk back. NLP programs can write credible prose and compose music. In some ways, the interactions are remarkable, but in no way does the machine have any idea what you or it is saying.
OpenAI's GDT-3 is the largest language model ever created and can generate amazing human-like text on demand but won't bring us closer to true intelligence.
It is entirely contrived by training in a specific domain. This is what the authors mean when they say robots exist in a different reality from humans, and we need to turn our attention to efficient ways to manage robots instead of wishing they would just understand what we want. | <urn:uuid:4cb2e92d-4f26-4983-9939-5d96b0ff3c58> | CC-MAIN-2022-40 | https://diginomica.com/robot-empowerment-viable-alternative-asimovs-three-laws-robotics | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335326.48/warc/CC-MAIN-20220929065206-20220929095206-00292.warc.gz | en | 0.948299 | 2,549 | 2.78125 | 3 |
The deep web (also called the invisible or hidden web) is the part of the Internet that is not indexed by search engines and does not appear in search results.
For example, the deep web includes private profiles on social networks and other resources, e-mails, corporate websites, password-protected documents, paid content, and so on. Meanwhile, online content that does get indexed by search engines is called the surface web (or visible web).
How Web pages can become part of the deep web
Online content can evade search engine indexing in several ways:
- Using a noindex meta tag in a page’s HTML code prevents search robots from indexing it;
- Placing an exclusion in the robots.txt file tells search engine crawlers to ignore certain site content;
- Using dynamic content generation to show each visitor a different version of a page, such as a personalized recommendations page in an online store;
- Password-protecting access to content, such as is common for private online planners or corporate cloud storage;
- Placing a website in a domain that requires specialized software for access (for example, a normal browser cannot access .onion sites; users need the Tor browser).
In addition, search engines do not index pages for which no links exist in public resources.
Deep web, dark web, and darknet
The term deep web is often confused with the terms dark web and darknet. In reality, they are three different, albeit overlapping, concepts.
- A darknet is an overlay network (i.e., it’s built on top of another network) that requires specialized software for access. Examples of such software include the Tor browser and SecureDrop, a free software platform for secure communication between journalists and sources that require anonymity. Darknets allow the exchange of information without revealing any personal information, which is why they are popular with criminals.
- The dark web is content hosted on darknets. The dark web forms a part of the deep web. | <urn:uuid:e33d2875-8382-4594-8ade-3d82de3091ca> | CC-MAIN-2022-40 | https://encyclopedia.kaspersky.com/glossary/deep-web/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335491.4/warc/CC-MAIN-20220930145518-20220930175518-00292.warc.gz | en | 0.88739 | 410 | 3.296875 | 3 |
Today is the first Thursday of May, which means it’s World Password Day. World Password Day is a timely opportunity to remind internet users to evaluate their individual password strengths and best practices. However, in reality, passwords are a significant vulnerability and even the strongest passwords can easily be stolen and compromised.
According to the 2019 Verizon Data Breach Investigations Report, as many as 80% of all breaches result from weak, default, or stolen passwords. Hackers use a variety of methods to get their hands on passwords, including ransomware and phishing attacks, installing spyware that records keystrokes (keylogging), infecting frequently visited sites (waterhole attacks), gaining access to a free or public Wi-Fi using a fake wireless access point (WAP), or even buying them on the dark web for as little as $7.
The recent cyber-attack at the San Francisco International Airport (SFO) further confirms these notions. Hackers installed harmful code on two SFO websites to steal Windows user credentials and gain access to SFO worker’s personal Windows devices.
So, as passwords are widely recognized as one of the weakest links in an organization’s security, is World Password Day really something we should be celebrating?
In light of the increased remote workforce due to COVID-19, it's more critical than ever to go beyond solely relying on passwords for security. Individuals and organizations must prioritize cybersecurity as a year-round endeavor that goes beyond simply reinforcing password best practices and turn to more robust authentication methods.
In this blog, we will discuss how passwords fundamentally put you and your organization at risk, and explore the more secure and usable options you can leverage to replace them entirely.
What are the Inherent Insecurities of Passwords?
Best practices recommend making your passwords as long as possible and to use a different password for each platform or account. As today's users typically juggle an average of 70-80 passwords, passwords are cumbersome to manage and often result in loss in productivity. In fact, recent research from the Ponemon Institute found that users spend over 12 minutes a week entering or resetting passwords, which adds up to nearly 11 hours per year.
The near impossibility of remembering each password creates a major failure point at the user level. Users often revert to the path of least resistance by selecting passwords that are easy to remember, replicating passwords across multiple accounts and applications, and sharing passwords with other employees. They also resort to shortcuts, such as writing passwords down or storing them in unencrypted spreadsheets, which are all major security risks.
Many organizations try to increase the security of passwords by mandating policies that call for frequently changing passwords. Unfortunately, this often leads to end-users creating workarounds that cripple security, such as choosing weak passwords, reusing passwords, or transforming them in ways that are highly predictable to hackers. For instance, Cowboysfan#1 becomes cOwboy$fan#12, then CoWboy$f@n#123, and so on. This makes it easy for hackers to utilize social engineering techniques to learn users’ passwords, then hack into systems.
In addition, enforcing strict and complex password policies forces employees to spend longer accessing the systems they need to do their jobs or to turn more frequently to the IT department for help, which wastes everyone’s time. Gartner estimates that 40 percent of all help desk calls are for password resets, and Forrester researchers have calculated the cost of a single password reset to be $70, so the time and soft cost savings can add up quickly.
Furthermore, many platforms and applications make the problem worse with a three-strike lock-out policy. If the user is locked out of their account, they are unable to be productive and thus fully dependent on the IT department or help desk before they are back up and running.
It’s also important to consider contract and seasonal employees who have become a necessity to many organizations whose workforce needs to contract and expand frequently. When members of the contingent workforce need access to sensitive systems and assets during their time with the company, poor password practices often creep up, like sharing passwords or using generic passwords such as “admin.”
How MFA Augments and Even Eliminates Passwords
Multi-Factor Authentication (MFA) offers many advantages over passwords and better secures your sensitive data and assets as it strikes a balance between usability and protection. MFA combines three authentication factors to limit risk and can act to augment passwords, which are a single authentication factor.
The three authentication factors include something you know, such as password or pin; something you have, like an RFID card or token; and something you are, such as biometric authentication, which includes facial recognition and fingerprint authentication. MFA has been proven to render attacks harmless even in the event that a user’s credentials are stolen or compromised because the attacker would still not have the additional authentication factors.
Many of today’s MFA solutions offer the flexibility to increase security without negatively affecting usability. The availability of MFA methods to best fit your unique situation are extensive and include: push authentication, Radio Frequency Identification (RFID), bluetooth authentication, FIDO U2F tokens, fingerprint biometrics, and one time passwords (OTPs) in the form of hard and soft tokens among others.
There are also a number of benefits to implementing MFA. For starters, MFA can make user authentication a much more fluid and seamless experience than using passwords. When used as a password replacement, such as through push authentication on a user’s phone with fingerprint authentication enabled, MFA does away with countless password resets, saving time and effort for your IT department.
With MFA, you can even take advantage of existing security investments in physical access by leveraging the same proximity card technology your employees already use to unlock and open doors to also unlock Windows devices. MFA also helps organizations comply with regulations that require or strongly recommend robust authentication, including SOX, CJIS, DFARS, HIPAA, HITECH, EPCS, Positive ID, and PCI-DSS.
Organizations can even adapt MFA authentication policies to include specific contextual factors that govern which MFA method is actually needed. These factors are based on criteria, such as the location the user is authenticating from, if the device used to authenticate is trusted, and time of day. Each of these variables can trigger a warning flag when outside the norm or usual patterns identified. For example, if a user authenticates from another country or if they try to login outside of business hours. The system could then present the user with additional factors for logging in to protect the environment.
Finally, MFA enables organizations to safeguard data and systems that are accessed via remote access solutions such as VPNs, portals, virtual desktop infrastructure, and remote desktops. As more and more staff work off-site due to COVID-19, it’s more crucial than ever to verify the identities of all remote users through MFA.
Advance Your Multi-Factor Authentication Capabilities Today
Passwords are deep-rooted to weaken your organization’s security and leave you vulnerable to hackers. As few as eight characters can separate your sensitive data from hackers who can take advantage of your data and compromised credentials for their own profitability on the black market.
As more information becomes available to the public on the internet, we have to do more to verify identity data. The reality is that if you’re going to continue using passwords to combat today’s threats, you need to combine them with flexible, multi-factor authentication.
The recommendation of MFA by best practice frameworks and standards, such as ISO 27001 and COBIT, have only further driven MFA adoption. Whether your organization is just looking into MFA or already has an MFA solution in place, there’s no better time than now to evaluate your organization’s current authentication strategy and determine the steps needed to increase your MFA maturity level.
To discover how you can advance your MFA capabilities and move away from passwords all together, check out our on-demand webinar: Advancing Your Identity Management Strategy with the IAM Maturity Model, Part 2 - Multi-Factor Authentication. This webinar provides actionable insights into how to evaluate your organization’s current authentication maturity level and take your MFA strategy to the next level.
Access the webinar here. | <urn:uuid:32493160-da26-4d1b-a39e-a240fd972d03> | CC-MAIN-2022-40 | https://blog.identityautomation.com/how-to-best-celebrate-world-password-day-implement-passwordless-authentication | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337404.30/warc/CC-MAIN-20221003070342-20221003100342-00292.warc.gz | en | 0.943526 | 1,735 | 2.71875 | 3 |
When most people hear the term “hacker” they think of a solitary, shrouded individual, illuminated by the glow of a computer in a dark room as they commit theft and crime via the internet. While in some cases this iconic imagery may ring true, this perception is largely the result of pop culture in the form of television shows and films like “The Matrix” and “The Girl with the Dragon Tattoo.”
While we tend to only hear about hacking when it is related to underground criminal activity, so-called ethical hacking is becoming increasingly common as more components of everyday life continue to take place online and more companies realize that they need creative thinkers to outwit the bad guys on their own playing field.
A career as an ethical hacker can be a rewarding one for those who have an interest in computer technology and code breaking but don’t aspire to engage in a criminal enterprise.
What is ethical hacking?
Ethical hacking is the act of gaining unauthorized access to a network, computer system, program, server or database at the request of an administrator. An ethical hacker’s job is to poke and prod a system from the perspective of a theoretical criminal in order to test security measures and search for bugs, exploits and other vulnerabilities. In doing so, an ethical hacker is able to help an organization tighten up their security before a malicious hacker has an opportunity to break into it.
Ethical hackers are sometimes referred to as “white hat” hackers, calling back to a common trope in traditional Western films where the good guys were easily distinguishable from the villains thanks to their white cowboy hats.
Ethical hackers adhere to a strict code of conduct in order to maintain a trusted and legal reputation. They are officially contracted or hired by companies or organizations and perform their work in exchange for a bug bounty or paycheck, as opposed to malicious hackers who use their expertise for criminal activities like theft or fraud.
How can I become an ethical hacker?
Hackers on both sides of the law come from various backgrounds. Some are self-taught, having spent time learning to code and practicing on their own. Many notable white hat hackers and programmers, including Apple co-founder Steve Wozniak and current cybersecurity expert Kevin Mitnick, actually began their careers by breaking into systems illegally for fun, although most would agree that this is not a recommended route to take.
Many of today’s hackers start their career by investing in the education needed to obtain a computer science degree. Some also seek out A+ certification, which is gained after passing two separate exams. A deep understanding of computer language and real world application is required to hack successfully, and potential employers will be looking for the most highly learned and experienced individuals as they search for security experts.
Today, there is no shortage of online classes and “boot camps” available for those who want to dive into the field of cybersecurity. While almost any of these programs provide networking opportunities and experience, the job market is becoming flooded with candidates who have taken these courses. As a result, those who have a college degree may find themselves favored over those who do not.
Becoming an ethical hacker takes more than just a degree or certificate. You’ll need experience working in the real world with network administrators, IT departments and other engineers to help build knowhow, confidence and references.
After obtaining an education or certification, many aspiring hackers find work in network support. Testing for vulnerabilities and maintaining network health and security in this area will form the foundation needed to move forward in the field of cybersecurity.
After gaining experience in network support, future white hat hackers should look for work as a network engineer. Network engineers build the systems that network support specialists monitor and test. As a network engineer, you will have the option to further hone in on the security aspects of network construction.
After putting in some time as a network engineer, seek to further your career in the field of information security. Information security analysts examine networks for potential flaws and exploits and ensure that their systems maintain top of the line protocols.
Those who work in information security who want to take the leap and market themselves as an ethical hacker may want to get a Certified Ethical Hacker (CEH) certification from the Council of Electronic Commerce Consultants.
With the field of information technology being so extensive and fluid, there are many different certifications one can seek out in order to bolster one’s resume, including those tailored specifically to ethical hacking.
Keep in mind, however, that no amount of certification can stand in for actual experience. Defending against criminal hackers often employs outside-the-box thinking and methodologies that simply cannot be anticipated or duplicated in a controlled testing environment.
Hacking competitions and conventions
At any point in your journey, you may wish to enter into hacking competitions or visit hacker conventions. Competitions allow you to flex your skills and conventions give you further opportunities to expand your network, continue to build your experience and get a heads up on what new trends are popping up in cybersecurity.
Conventions in particular attract hackers of all kinds, providing a unique arena in which hackers on both sides of the fence mingle. However, don’t expect those who engage in destructive or potentially criminal hacking to announce their intentions or presence.
DEF CON is a hacking convention that takes place annually. Started by hacker Jeff Moss as a gathering of hacker friends and associates, DEF CON is now one of the largest and longest running cybersecurity conventions in existence, attracting enthusiasts from all over the world.
When you have your education, experience and certification ducks in a row, you’ve set the stage and can now begin marketing yourself as an ethical hacker.
One of the best ways to break into the industry is to specialize in a certain kind of security or hacking. Even if you don’t have employment experience, you can leverage your skills to create projects or take tests that will allow you to produce a portfolio to show prospective clients or employers.
Use YouTube and other social media platforms to create content that showcases your skills. This not only displays your abilities, but also shows that you go the extra mile to work hard and have a genuine passion for the field.
Employers go through piles of resumes, but those who show an interest in hacking as more than just a means of employment will undoubtedly shine brighter than those who do not exist outside of a LinkedIn profile.
Your career as an ethical hacker
As you accomplish work for clients and build your portfolio, you may wish to seek out contract jobs or apply for an in-house position at an organization. Many companies also offer bug bounties, paying hackers large sums of cash if they are able to find and document exploitable vulnerabilities in their products.
Ultimately, the means by which you practice will be dictated by how you wish to approach your career. In-house work allows for job security, but will likely not provide the range of experience that multiple, unrelated clients can yield.
The most important thing to keep in mind is to stay on the cutting edge of technology. In the fast moving world of cyber, complacency can quickly put you at a disadvantage compared to peers who continually seek to increase their knowledge and anticipate the trends that will make or break tomorrow’s networks.
How to Become an Ethical Hacker in 2022? by Rahul Venugopal, 17 Feb 2022, Simplilearn
Becoming an Ethical Hacker: What You Need to Know 17 May 2021, Baker College
IT Security Certifications You Need Today (2022) by Don Hall, 17 Feb 2022, CIO Insight
How to market yourself as an ethical hacker by Shimon Brathwaite, 23 March 2021, Security Made Simple | <urn:uuid:e570a01d-fdb3-4762-9344-1f43bb649170> | CC-MAIN-2022-40 | https://news.networktigers.com/all-articles/how-to-become-an-ethical-hacker/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337516.13/warc/CC-MAIN-20221004152839-20221004182839-00292.warc.gz | en | 0.947398 | 1,589 | 3.125 | 3 |
October 11, 2018 by Siobhan Climer
As we explained in the introduction to the Mindsight Tech Terms Dictionary, the technology industry has its own specific lexicon, complete with an alphabet soup of acronyms and too many almost-synonyms to count. But we’re going to try, which is why we created the Tech Terms Dictionary in the first place. In our last edition, we covered computer basics, including the system components such as CPU, motherboard, and graphics card.
Today, we’ll be examining the broader system in which a computer operates: the network.
The Network Defined
In technology, when we refer to a network, we are talking about the digital telecommunications network that enables the exchange of data between computing devices through nodes. Broadly, it’s the system that connects computing devices, such as your computer to mine.
Tech Terms Dictionary Network Edition: Architecture
Network – a set of connected computers or computing devices.
Nodes – the computing devices on a network, such as a computer.
Network host – a computer or other device connected to a computer network that relays resources, services, or applications. In other words, a general-purpose network device with a network address.
Network address – an identifier for a node or host on a network.
Open Systems Interconnection (OSI) model – a conceptual model of networking that characterizes seven layers of a communication system abstractly to achieve interoperability of diverse systems with standard protocols. The OSI model layers include: 1-physical layer, 2-data link layer, 3-network layer, 4-transport layer, 5-session layer, 6-presentation layer, and 7-application layer.
Topologies – the arrangement of the elements within a network.
Communication protocol – the set of rules used to exchange information over a network. Examples include IEEE 802, Ethernet, Wireless LAN, Internet Protocol Suite (TCP/IP), SONET/SDH, and Asynchronous Transfer Mode (ATM).
Ring topology – a network design where the nodes are connected via a single cable; however, unlike a bus topology, the end nodes are connected to each other, creating a ring.
Collapsed ring – the most common topology in use today that uses the network protocol called the Ethernet. There is a central hub (also called a router or switch) that runs a ring topology with node plugins, creating an independent cable for each device.
Star topology – a network design that uses a central hub with individual connections between each node on the network.
Bus topology – a network design where a single cable connects all nodes on the network, culminating in the final hub.
Ethernet – a network language that supports the Internet, local area networks (LAN), and wide area networks (WAN).
Local area network (LAN) – a network that connects devices within one location, such as in a building, home, or adjacent buildings.
Wide area network (WAN) – a network that connects devices across an extended area (at least half mile radius or more).
Tech Terms Dictionary Network Edition: Networking Devices
Data – the quantities or symbols a computer processes, stores, and transmits in the form of electrical signals.
Networking hardware – the physical devices required to communicate and share information with other devices on a network.
Gateway – an interface that converts transmission speeds, protocols, codes, or security measures between networks, enabling compatibility.
Network bridge – a network device that creates a single aggregate network from multiple networks or network segments.
Modem – a network device that modulates a carrier signal to encode and decode digital information.
Line driver – a network device used in base-band networks to increase transmission distance through signal amplification.
Switch – a network device that uses packet switching to receive, process, and transmit data to a destination device.
Hub – a common connection point for network devices in a network.
Repeater – a network device that boosts a received signal past an obstruction, increasing the signal coverage distance.
Hybrid network devices – the devices that operate a network made up of more than type of topology, and may include components such as multilayer switches, protocol converters, bridge routers, proxy servers, firewalls, network address translators, multiplexers, ISDN terminal adapters, and other hardware.
Tech Terms Dictionary Network Edition: Components
Cable media – networking cables connect one network device to other network devices, such as printers, scanners, or hubs. Examples of cable media include coaxial cable, optical fiber cable, and twisted pair cables.
Wireless network – a computer network that uses wireless data connections between network nodes, typically implemented through the radio communication in the physical layer of the OSI model network.
Routing – the process by which traffic is moved through a network, or between and/or across multiple networks. Typically, this refers to the selection of the traffic path, regardless of the type of network routed.
Network packet – a formatted unit of data carried by a packet-switched network that consists of the payload.
Payload – the part of transmitted data that included the intended message, which in networking consists of the control information and user data. It is usually included in a type of frame.
Frame – a simple container for a single network packet.
Server – a device, or computer, that provides functionality for other devices – called clients. It’s a computer without a monitor or user interface used to help user-facing computers communicate. Servers reside in data centers or server farms.
Bandwidth – a term used to describe the maximum data transfer rate of a network.
From The Network To The Data Center
While not an exhaustive list, reviewing the Tech Terms Dictionary Network Edition should give you insight into the common structures and components used in describing networking. Since networking is essential to the system-level understanding of computing, it is an essential building block to understanding how computers communicate with one another.
Data centers are a fundamental element of computing today. In our next Tech Terms Dictionary Data Center Edition, we’ll examine how servers and networks work together to create functionality for users and businesses around the globe. Stay tuned!
Like what you read?
Contact us today to learn more about networking for your business.
Mindsight, a Chicago IT consultancy and services provider, offers thoughtfully-crafted and thoroughly-vetted perspectives to our clients’ toughest technology challenges. Our recommendations come from our experienced and talented team of highly certified engineers and are based on a solid understanding of our clients’ unique business and technology challenges.
About The Author
Siobhan Climer, Science and Technology Writer for Mindsight, writes about technology trends in education, healthcare, and business. She previously taught STEM programs in elementary classrooms and museums, and writes extensively about cybersecurity, disaster recovery, cloud services, backups, data storage, network infrastructure, and the contact center. When she’s not writing tech, she’s writing fantasy, gardening, and exploring the world with her twin two-year old daughters. Find her on twitter @techtalksio. | <urn:uuid:ddd05682-57b2-4b6a-9bad-7c16cfbc9b60> | CC-MAIN-2022-40 | https://gomindsight.com/insights/blog/tech-terms-dictionary-network-edition/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334528.24/warc/CC-MAIN-20220925101046-20220925131046-00492.warc.gz | en | 0.904897 | 1,478 | 4.15625 | 4 |
Toilet paper manufactured with a combination of softwood and hardwood trees have long fibers that wrap gives paper strength. The average American uses over 100 single rolls about 21,000 sheets each year.
Researchers at the University of Amsterdam’s with colleagues from the Utrecht University has implemented first techno-economic analysis of converting waste toilet paper into electricity. In fact, most people usually prefer not to think about it at all. Yet it is a rich source of carbon, containing 70-80 wt% of cellulose on a dry basis.
Solid oxide fuel cells
By establishing new technique increasing resource efficiency and creating a truly circular economy. The cellulose in WTP comes from trees, the electricity produced is renewable. Using WTP as a resource for generating electricity therefore the ultimate waste recycling concept. The Amsterdam region alone generates some 10,000 tons of WTP per year, enough to power 6400 homes.
The Amsterdam-Utrecht research project, led by UvA professors Gadi Rothenberg and Bob van de Zwaan of the UvA’s Van ‘t Hoff Institute for Molecular Sciences, proposed a simple two-step process for the conversion of WTP, creating a direct route from unwanted waste to a useful product. Master’s student Els van der Roest examined the possibility of combining devices for the gasification of WTP (step 1) with high-temperature solid oxide fuel cells (SOFCs) able to directly convert the WTP-gas into electricity.
Furthermore, the project’s goal was to assess the feasibility of such a WTP-to-electricity system at a scale of 10,000 ton WTP per year, based on real-life parameter values. Using techno-economic analysis methods, the team presented a basic process design, an overall energy balance and an economic study for this concept. Although, data for the experiments and calculations were obtained in collaboration with the Amsterdam waste-to-energy company (AEB).
Leveled cost of electricity
Moreover, the overall electric efficiency is 57%, similar to that of a natural gas combined cycle plant. The leveled cost of electricity LCOE, a measure used for consistent comparison of electricity generation methods is 20.3¢/kWh. This is comparable at present to residential photovoltaic installations.
The system’s capital costs are still relatively high, mainly due to the fuel cell investment costs. But these are expected to decrease as the market for fuel cells develops. The operating costs are relatively low, to the high thermodynamic efficiency (ca. 70%). The researchers expect learning effects could make the system more competitive in future, with an LCOE of about 11 ¢/kWh. The project team concludes that there is a future in turning waste toilet paper into electricity.
However, no Dutch company or municipal authority has as yet been willing to invest in further development. The team is now considering taking their concept abroad. | <urn:uuid:837be43f-26d3-41af-b809-91ee3d7ac205> | CC-MAIN-2022-40 | https://areflect.com/2017/09/18/scientists-convert-waste-toilet-paper-into-electricity/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335504.22/warc/CC-MAIN-20220930181143-20220930211143-00492.warc.gz | en | 0.928932 | 607 | 3.109375 | 3 |
Ruby is the language leveraged in workflow nodes and connectors. These are some resources to help you better understand and use Ruby.
Programming Ruby - Book and website devoted to learning Ruby.
Programming Ruby: The Pragmatic Programmers' Guide, Second Edition is a fantastic book (ISBN 0974514055) dedicated to learning Ruby. It is a very good reference for experienced and unexperienced programmers alike. There is also a website, extracted from the first edition of the book, available here.
rubular.com - Web based regular expression editor and tester.
Regular expressions, also referred to as Regex and Regexp, provide a concise and flexible means to determine if a string of text matches a specific pattern. They are very commonly used to do a find and replace or to determine if an input string matches some set of requirements. Rubular.com provides a very easy to use interface for writing regular expressions and testing them against different strings of text.
JRuby Applet - Use JRuby code in your browser
This JRuby applet allow JRuby code to be executed in your browswer. Test JRuby/Ruby code in this applet before placing it in your tree. This can reduce testing cycles because the results of the code can be seen immediately
Ruby Documentation - Ruby Classes and Methods Documentation
Browse and search the Classes and Methods of Ruby. This documentation can be very useful when attempting understand what functionality is avaialable for use when writing Ruby code into your task tree. For example search the String Class to see how a String can be manipulated for node parameters.
Updated 11 months ago | <urn:uuid:d89f3934-0633-4fce-af43-600b49e85a36> | CC-MAIN-2022-40 | https://community.kineticdata.com/docs/ruby-documentation-materials | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335504.22/warc/CC-MAIN-20220930181143-20220930211143-00492.warc.gz | en | 0.885492 | 335 | 2.9375 | 3 |
In our recent work, we detail an AI and machine learning mechanism able to assist in correlating a large body of text with numerical data series used to describe financial performance as it evolves over time. Our deep learning-based system pulls out from large amounts of textual data potentially relevant and useful textual descriptions that explain the performance of a financial metric of interest – without the need of human experts or labelled data.
Deep learning may have revolutionized AI – boosting progress in computer vision and natural language processing and impacting nearly every industry. But even deep learning isn’t immune to hacking.
Enter microcontrollers of the future – the simplest, very small computers. They run on batteries for months or years and control the functions of the systems embedded in our home appliances and other electronics.
Our team of researchers based at the IBM Research-Almaden lab in California have been pursuing an ambitious challenge of building machines that can perform a preliminary read of chest X-rays provably at the level of at least entry-level radiologists.
Hydrogen is the simplest element in the universe, yet its behavior in extreme conditions such as very high pressure and temperature is still far from being well understood. Dense hydrogen constitutes the bulk of the content of giant gas planets and brown dwarf stars and it’s a material of interest for both fundamental physics and […]
Published in our recent ICASSP 2020 paper in which we successfully shorten the training time on the 2000-hour Switchboard dataset, which is one of the largest public ASR benchmarks, from over a week to less than two hours on a 128-GPU IBM high-performance computing cluster. To the best of our knowledge, this is the fastest training time recorded on this dataset.
In a recently published paper in this year’s INTERSPEECH, we were able to achieve additional improvement on the efficiency of Asynchronous Decentralized Parallel Stochastic Gradient Descent, reducing the training time from 11.5 hours to 5.2 hours using 64 NVIDIA V100 GPUs.
It is no surprise that following the massive success of deep learning technology in solving complicated tasks, there is a growing demand for automated deep learning. Even though deep learning is a highly effective technology, there is a tremendous amount of human effort that goes into designing a deep learning algorithm.
At the 18th European Conference on Computational Biology and the 27th Conference on Intelligent Systems for Molecular Biology, IBM will present significant, novel research that led to the implementation of three machine learning solutions aimed at accelerating and guiding cancer research.
IEEE ICC 2019 “Best Paper” details novel deep reinforcement learning approach to maximize overall performance of Software-Defined Networking that supports 5G.
Our team of IBM researchers published research in Radiology around a new AI model that can predict the development of malignant breast cancer in patients within the year, at rates comparable to human radiologists.
A distributed deep learning architecture for automatic speech recognition that shortens run time without compromising model accuracy. | <urn:uuid:69b80442-f4dd-4354-8a67-b314c0948f99> | CC-MAIN-2022-40 | https://www.ibm.com/blogs/research/tag/deep-learning/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335504.22/warc/CC-MAIN-20220930181143-20220930211143-00492.warc.gz | en | 0.929044 | 609 | 2.515625 | 3 |
Everyone is aware of the word “hacker,” even if they are not related to the computer world. The term “Hacker” is broadly used to portray an individual with advanced computer technology skills who can bypass security and penetrate networks with proper authorization or trick organizations. Some hackers use their skills to commit theft, fraud, or wrongful acts, while others consider breaking strict security a challenge and simply enjoy doing it. Whatever the motivation, cybercrime costs a lot to the world, with one business falling victim to ransomware every 11 seconds.
Organizations need to spend time and money to recover from a cyber attack. Moreover, there’s no guarantee that the hacked organization will recover completely. Cybercrime, mainly data breaches, can damage the company’s name and reputation among clients and customers and even lead to legal action. That’s why it becomes necessary for every organization and business to hire a professional with CEH certification. These certified professionals have ethical hacking skills important to any business with a significant digital footprint. Ethical hackers are different from malicious hackers, but they think like malicious hackers that any company is trying to stop. They spot the weak points in any organization’s networks and procedures through stress-testing and correct them to avoid any cybercrime to happen before it’s too late.
Let’s discuss more about ethical hackers and the skills & certifications they possess.
What Is An Ethical Hacker?
Ethical Hackers are security experts who perform security assessments in an organization. They perform proactive tasks to improve an organization’s security posture. A company needs ethical hackers to keep a check on the security of a system or network’s infrastructure. They find vulnerabilities in improper system configuration and hardware or software flaws. They perform all this ethically with proper permission from the company owner to probe their network and discover security risks. Apart from private businesses, government organizations also hire ethical hackers.
Who Is A Certified Ethical hacker?
A certified ethical hacker or CEH is an individual who possesses a core certification in ethical hacking. This specialized program involves teaching ethical hacking fundamentals. Learning the basics of computer systems and operating systems to get hands-on practice with some of the most common hacking tools is a part of the curriculum. Advanced techniques around penetration testing, cybercrime countermeasures, blocking cyberattacks, firewall testing, and others are all part of the course. The CEH course also introduces new techniques that seasoned cyber security professionals do not even know. Those who find it interesting and wish to make their career as certified ethical hackers can consider joining this course.
What Does A Certified Ethical Hacker Do?
A certified ethical hacker can help both private and government organizations in several ways. These include:
An ethical hacker helps organizations determine which IT security measures of their company are effective and need to be updated as they contain vulnerabilities that can be exploited. After finishing the complete evaluation of an organization’s systems, certified ethical hackers report to the authorized people about the vulnerable areas. These may include insecure applications, insufficient password encryption, or exposed systems running unpatched software. The data can be used to make informed decisions about improving the company’s security posture to prevent cyber attacks.
Demonstrating Techniques Used By Cybercriminals
Several demonstrations exhibit different hacking techniques that malicious hackers could use to attack their security systems and wreak havoc in the organizations. After getting complete knowledge of the methods that the attackers use to break into the systems, businesses can better prevent those incursions.
Prepare In Advance For A Cyber Attack
Cyberattacks can cripple or completely destroy a business, significantly smaller businesses. Several companies are still not prepared for cyber attacks. Ethical hackers know how threat hackers operate and how they use new information and techniques to attack systems. Security professionals can work with ethical hackers and prepare their companies for future attacks. Certified ethical hackers help any organization better adapt to the constantly changing nature of online threats.
Organizations need to respond quickly and adequately to all the weak spots or problems that ethical hackers find out; otherwise, the most sophisticated ethical hacking skills get easily wasted.
How To Become A Certified Ethical Hacker?
The International Council of Electronic Commerce Consultants or EC-Council is an American organization that offers cybersecurity certification. It is a global leader in InfoSec Cyber Security Certification Programs like Certified Ethical Hacker, Certified Network Defender, Computer Hacking Forensic Investigator, etc.
If you wish to become a certified ethical hacker, you must clear the EC-Council Certified Ethical Hacker exam.
At School.Infosec4tc, we provide ethical hacking certification training to help you successfully earn ethical hacking certification and become subject matter experts (SME) in a particular area within the ethical hacking domain.
The goal of our ethical hacking courses is to help you master an ethical hacking methodology to use in an ethical hacking situation or penetration testing. By the end of the course, you will learn ethical hacking skills that are highly in demand and globally recognized Certified Ethical Hacker Certification.
School.Infosec4tc also offers ethical hacking and other cyber security courses online for free. Visit this link to enjoy free courses and learn the necessary hacking skills or tools for free. | <urn:uuid:5fb70cd5-9e7e-49da-a265-2a0e4e6317cf> | CC-MAIN-2022-40 | https://www.infosec4tc.com/2022/03/15/what-does-a-certified-ethical-hacker-do/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337244.17/warc/CC-MAIN-20221002021540-20221002051540-00492.warc.gz | en | 0.922598 | 1,066 | 3.03125 | 3 |
The Ultimate Guide to Web Development
Web Development is responsible for creating the millions of websites and webpages comprising the internet. These pages can be as simple as basic text, or as complex as web applications which are complete programs in their own right. Web developer skills include programming, as well as elements of design and content production. Web Development often involves integration with multiple data sources, other internal and external software systems, and knowledge of the workings of the internet. | <urn:uuid:7660f5d5-a31e-4cd6-8166-bfa1322fea4b> | CC-MAIN-2022-40 | https://securitybrief.asia/tag/web-development | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337415.12/warc/CC-MAIN-20221003101805-20221003131805-00492.warc.gz | en | 0.952375 | 91 | 3.046875 | 3 |
The elements defined in the MIB can be extremely broad (for example, all objects created by private businesses) or they can be extremely specific (like a particular Trap message generated by a specific alarm point on an RTU.)
Each element in the MIB is given an object identifier, or OID. An OID is a number that uniquely identifies an element in the SNMP universe. Each OID is associated with a human-readable text label.
One of the best tactics for addressing MIB problems is to simply read through the file. As a MIB (SNMP) file is just ASCII text, you can view it in any word processor or text editor (even Notepad). Some manufacturers provide grouped MIBs in binary format, but those aren't readable. You want the raw ASCII version of the MIB (SNMP) file.
Your SNMP manager needs the MIB in order to process messages from your devices. The MIB is also your best guide to the real capabilities of an SNMP device. You need to be able to read the MIB so that you can have a good idea of what assets you do have.
The OIDs identify the data objects that are the subjects of an SNMP message. When your SNMP device sends a Trap or a GetResponse, it transmits a series of OIDs, paired with their current values.
The location of the OID within the overall SNMP packet is shown above.
Here's an example: 126.96.36.199.4.1.26188.8.131.52
You need to see DPS gear in action. Get a live demo with our engineers.
Download our free SNMP White Paper. Featuring SNMP Expert Marshall DenHartog.
This guidebook has been created to give you the information you need to successfully implement SNMP-based alarm monitoring in your network.
Have a specific question? Ask our team of expert engineers and get a specific answer!
Sign up for the next DPS Factory Training!
Whether you're new to our equipment or you've used it for years, DPS factory training is the best way to get more from your monitoring.Reserve Your Seat Today | <urn:uuid:dfe06c58-b414-4c74-b9c4-05efb2522b08> | CC-MAIN-2022-40 | https://www.dpstele.com/snmp/mib/what-terms-oid-function.php | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337415.12/warc/CC-MAIN-20221003101805-20221003131805-00492.warc.gz | en | 0.884906 | 459 | 2.546875 | 3 |
Intel Creating Cryptographic Codes That Quantum Computers Can’t Crack
(IEEESpectrum) An Intel team has created an improved version of such a quantum-resistant cryptographic algorithm that could work more efficiently on the smart home and industrial devices making up the Internet of Things.
The Bit-flipping Key Encapsulation (BIKE) provides a way to create a shared secret that encrypts sensitive information exchanged between two devices. The encryption process requires computationally complex operations involving mathematical problems that could strain the hardware of many Internet of Things (IoT) devices. But Intel researchers figured out how to create a hardware accelerator that enables the BIKE software to run efficiently on less powerful hardware.
BIKE securely establishes a shared secret between two devices through a three-step process, says Santosh Ghosh, a research scientist at Intel and coauthor on the paper. First, the host device creates a public-private key pair and sends the public key to the client. Second, the client sends an encrypted message using the public key to the host. And third, the host decodes the encrypted message through a BIKE decode procedure using the private key. “Of these three steps, BIKE decode is the most compute intensive operation,” Ghosh explains. | <urn:uuid:906fb078-4942-4e35-a648-bebaeaae5ae0> | CC-MAIN-2022-40 | https://www.insidequantumtechnology.com/news-archive/intel-creating-cryptographic-codes-that-quantum-computers-cant-crack/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337415.12/warc/CC-MAIN-20221003101805-20221003131805-00492.warc.gz | en | 0.878157 | 259 | 3.09375 | 3 |
Kathy Berkidge, Agile BA Consultant and Coach, Mind at Work Consulting
Effective stakeholder engagement can mean the difference between successful project delivery and project failure. We must work closely with our stakeholders to understand their needs and concerns to ensure they support the project and are willing to contribute. If we don’t engage stakeholders successfully, information may be missed or misinterpreted leading to products and services that fail to deliver the outcomes expected.
There are many barriers to stakeholder engagement, including:
- Lack of vision or not understanding the project context
- Resistance to share information
- Failure to understand what’s in it for them
- Misinterpretation of their needs
- Lack of available time
- Poor communication
- Lack of trust
- Previous history or negative perceptions from past experiences and projects
- Fear of change
We must plan our approach carefully to avoid these issues and create effective working relationships. Conducting stakeholder analysis helps us understand various stakeholder types and attitudes to plan how to engage them and keep them engaged.
While there are many tools and techniques to perform stakeholder analysis, we need to analyse the mindset of our stakeholders – a deeper level of analysis – to understand how they will react in various situations, and how we may respond to them.
We need to examine their perceptions, beliefs, and attitudes in detail to identify how we can best work with them to maintain healthy, productive relationships. This means we need to be prepared to adapt to situations that arise with them, while avoiding causing misunderstanding, confusion, or conflict. Importantly, we must look within ourselves to understand how our behaviour, words and actions may be perceived. This is where mindfulness is needed.
Through greater awareness, we can remain conscious of our own attitudes when engaging with our stakeholders, as well as their position. Mindfulness helps us be more attuned to our stakeholders’ needs while remaining calm and considered with even with the most challenging stakeholders. It helps us focus, remain present and truly listen to our stakeholders. As this great quote from Maya Angelou says, “People won’t remember what you said or did, but they will remember how you made them feel”. We can help our stakeholders feel heard, understood and that we care about their needs and concerns. The ‘Stakeholder Engagement Canvas’ is a new tool that helps us perform that deeper, more thoughtful level of stakeholder analysis. It looks at various aspects of the stakeholder’s mindset in context to the project as well as examine how we can remain mindful throughout our engagement with them.
The canvas contains nine key elements.
- Them – Who are they and what’s their world like?
- Needs – What are their requirements?
- Goals & Objectives – What does success look like for them?
- Context – What is the nature of the work we will do together?
- Activities – What interactions will we have?
- Outcome – What is their expected outcome of this engagement?
- Me – What perceptions do I have of this stakeholder?
- Risks – What potential problems could occur during this engagement?
- Practices – What practices will I employ to mitigate engagement risks?
Each element is considered carefully to enable us to conduct a detailed analysis of our stakeholder. With all elements explored, we can put together a plan that will ensure that we effectively work with our stakeholder and ensure successful stakeholder engagement. | <urn:uuid:9b8263d3-ecd3-4b02-a063-4d3fdeb0c0e7> | CC-MAIN-2022-40 | https://www.irmconnects.com/successful-stakeholder-engagement/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337415.12/warc/CC-MAIN-20221003101805-20221003131805-00492.warc.gz | en | 0.950945 | 708 | 2.59375 | 3 |
Few things make for a more devastating reading than teen suicide statistics. Unfortunately, parents often aren’t aware of their child’s tough times, as teens are sensitive and have a hard time expressing their feelings.
Teenagers often share their thoughts on social media, yet, parents tend to overlook this form of expression. Sadly, sometimes it can be too late to prevent a child’s death, despite the multitude of potential warning signs.
Teenage Suicide Statistics (Editor’s Choice)
- Adolescent boys are five times more prone to committing suicide than girls their age
- In 2019, almost 19% of teenagers considered attempting suicide
- In 2020, suicide was the third leading cause of death among people aged 15–19
- Four in ten LGBT youth are seriously contemplating self-harm
- Since 2007, the teenage suicide rate for those aged 10–24 has been increasing
- More than 55% of students know someone who self-harms or has committed suicide
- 80% of teens who committed suicide showed warning signs
- Six in ten parents report their children are cyberbullied
General Teenage Death Statistics
Many teenagers commit suicide every year worldwide, but awareness of the early signs can prevent some of those deaths. Disturbing as it is, it is essential to talk about the issue and show support and understanding to the kids suffering such thoughts.
Let’s take a look at some heart-breaking teenage suicides statistics.
1. Intentional self-harm is the second most common cause of death in 10-to-14-year-olds.
When it comes to teen mortality, unintentional injuries are the leading cause of death, followed by suicide, and thirdly, cancer, stats on teenage suicide show.
2. Teenage boys aged 15 to 19 commit suicide five times more often than girls their age.
In comparison, boys aged 10 to 14 are two times more likely to take their own lives than girls the same age, adolescent suicide facts reveal.
3. People between 15 and 29 years old commit one-third of all suicides.
(BMC Public Health)
Suicide statistics show that suicide is the second leading cause of death for people aged from 15 to 29 years. Reasons associated with suicide are being bullied, suffering an injury, loneliness, and alcohol abuse.
Teen Mental Health Statistics
While it’s normal for teenagers to be emotional and experience moodiness, recognizing the signs of a mental illness or struggle is of utmost importance. Teens suffering from poor mental health are more likely to commit suicide, and being aware of their needs and challenges can help save their lives.
4. 50% of lifetime cases of mental illness will develop before a person becomes 17 years old.
(Adolescent Wellness Academy, Time)
Half the cases of long-term mental illness begin at 14 years and will be developed by 17.
Moreover, suicidal teenagers statistics show that in August 2020, there was a 334% spike in intentional self-harm in those aged 13–18 in the Northeast, in comparison to the same month in 2019.
5. Major depression cases in teens around the age of 17 rose by 69%.
(Adolescent Wellness Academy)
Approximately one in five teens suffers from at least one diagnosable mental illness, according to teen depression and suicide statistics.
6. Depression affects one-fifth of adolescents.
Depression is a common occurrence during teenage years that lasts a year or longer in more than 8% of the cases.
7. In April 2020, requests for professional assistance with general anxiety disorder increased by 93.6%.
The social isolation and uncertainty accompanying the pandemic had a profound impact on the teens’ mental health. During March and April 2020, the number of teens aged between 13 and 18 years in anxiety therapy almost doubled compared to 2019.
US Youth Suicide Stats
There are many reasons teens decide to self-harm — mental illnesses, stress, school, bullying, or overwhelming events at home. Raising awareness of the most common triggers is essential in preventing teen suicides.
8. Suicide ranked third among the death causes for people aged 15–19 in 2020.
(CDC, US News)
With 2,216 suicides, 2020 marks the highest teen suicide rate since 2000.
Furthermore, teenage suicide stats note that since May 2020, there was an increase in the number of suicides among girls between the ages of 12 and 17, which reached its peak in February and March 2021.
9. 18.8% of teens seriously considered suicide in 2019.
Research in 2019 revealed that 18.8% of US students had been seriously contemplating suicide, and 15.7% decided how to do it. At the same time, about 8% have attempted to take their own lives at least once.
Girls, in general, are more inclined towards suicide ideation (24%), planning (20%), and attempting (11%) than boys, according to the US teen suicide rate.
10. Teen suicide rates for 10-to-24-year-olds have been increasing since 2007.
(UC Davis Health)
The rate of suicides committed by 10-to-24-year-olds has been increasing every year. It is likely due to poor mental health and lack of professional help.
Some of the signs parents should look out for are depression, withdrawal from family and friends, increasing substance use, losing interest in usual activities, or acting out.
11. 81% of deaths in the 10 to 24 age group were male.
While girls are more likely to report suicide attempts, the percentage of teen suicide acts divides into 19% female and 81% male cases.
12. Rhode Island has the lowest suicide rate among teens in the US — five out of 100,000 adolescents.
(America’s Health Ranking)
The latest available data show that states in the Northeast region have the lowest suicide rates. Besides Rhode Island, New Jersey has 5.2 cases per 100,000, New York with 6, Massachusetts with 6.1, and Connecticut with 6.8, according to the teenage suicide rate by state.
Meanwhile, the states with the highest rate of teen suicide per 100,000 people are Alaska (34 cases), South Dakota (29.2), Montana (26.7), Wyoming (25.6), and Idaho (22.2).
LGBT Youth Suicide Statistics
All teenagers have a hard time fitting in, but LGBT teens often suffer worse. Regardless of whether they experience a lack of support, discrimination, or violence, they need an affirming space where they will feel respected and loved.
13. 40% of LGBT youth have seriously considered resorting to suicide in the last year.
(The Trevor Project)
The teenage suicide percentage signifies a health crisis, disproportionately affecting the LGBTQ youth. The lack of recognition and understanding has forced many LGBT teens to consider committing suicide to resolve their problems.
14. 21% of LGBT teens aged 13 to 17 years have attempted suicide in the last year.
(The Trevor Project)
According to the transgender youth suicide statistics, 48% of LGBT survey responders aged 13 to 17 have considered suicide. 48% also admitted having practiced self-harm over the last 12 months.
15. 20% of LGBT youth without an affirming space attempted suicide.
(The Trevor Project)
Affirming gender identity among transgender and nonbinary youth is associated with a lower risk of suicide attempts. Those who reported having their pronouns respected by all or most people around them attempted suicide half the rate compared to those who didn’t have their pronouns respected, statistics about teen suicide show.
It’s good to note that 78% of LGBT youth reported having access to at least one affirmative space, and 86% report having high support levels from at least one person.
16. 26% of LGBT youth who had no access to gender-affirming clothing have attempted suicide.
(The Trevor Project)
Gender-affirming clothing such as binders, shapewear, or bodysuits, helps those who experience gender dysphoria align the looks of their bodies with their gender identities. Sadly, 14% of those who attempted suicide had access to most of these types of clothing.
High School Suicide Statistics
Pressure from the parents, too much homework, and chasing good grades take a toll on a teen’s mental health. High school also comes with bullying, and according to statistics, almost every teen has experienced some form of abuse. All of this can often result in attempted suicide.
17. 8.9% of high school students attempted suicide in 2019.
(America’s Health Rankings)
According to US teen suicide statistics, almost 9% of all high school students attempted suicide in 2019, with most cases being female.
18. 56% of students personally know someone who self-harms or has considered suicide.
Teens keep reporting high levels of anxiety and depression. So much that more than half of them know someone who seriously thought about self-harm or suicide. However, only 32% believed their school could tackle this issue, and 42% ensured that their school is doing its best to create a safe environment.
Suicide Rate Due to Social Media
Cyberbullying is a common occurrence on social media that can trigger suicidal thoughts in many cases.
19. Teens who spend over three hours on social media daily are at high risk of mental issues.
Spending more time on social media is associated with increased odds of internalizing problems, meaning a higher risk of developing mental health problems.
20. TikTok has been partly responsible for at least 41 teen deaths so far.
According to statistics on teen suicide, the social network TikTok is responsible for dozens of teen deaths globally and at least nine suicides so far.
Teens have developed a strong relationship with this platform, and unfortunate incidents go as far as committing suicides while live-streaming.
21. 60% of parents report their children being cyberbullied
According to youth suicide statistics, 60% of parents with children aged 14 to 18 years reported their kids bullied online and offline. In most cases, the bullying incidents occur at school, on the bus, and lastly, on social media.
22. 19.2% of bullying happens on social media.
Social media platforms are the most common place where cyberbullying occurs. Other digital means through which children often suffer bullying are text messages and online video games.
23. Suicide attempts are twice more likely with teens suffering cyberbullying.
According to teen suicide stats, cyberbullying strongly increases suicide and self-harm rates among the youth.
It is essential to work on cyberbullying prevention to preserve the mental health of children intact.
Facts About Teen Suicide During the 2020 Pandemic
Teens also suffered a lot through the pandemic. The latest data shows that self-harming behavior, overdosing, and suicide attempts significantly increased at the beginning of the pandemic.
24. Self-harming behavior among US teens increased by 90.7% during the pandemic.
Teenage death statistics 2020 showed the pandemic had a severe impact on the youth’s mental health. In fact, data on self-harm among people aged 13 to 18 showed an increase of 90.7% during March 2020 worldwide.
The problem became even more prominent in April 2020, with a rise of 99.8% compared to the same time in 2019.
25. Overdosing among teens aged 13 to 18 years increased by 119%.
Teen suicide statistics reveal insurance claims for overdosing increased by 94.9% in March and by 119.3% in April 2020 for 13-to-18-year-olds.
The substance use disorder also increased by 64.6% in March, and 62.7% in April 2020, confirming that the young have a hard time dealing with the pandemic.
26. Four out of five teens who attempted suicide showed warning signs.
Teenage suicide statistics show that the warning signs include: suicide threats, anger and irritability, appetite changes, preoccupation with death and suicide, previous suicide attempts, or final arrangements, like giving away possessions.
On their own, these behaviors don’t necessarily mean a teen will commit suicide, but they do indicate that a person is struggling with some issues. If left untreated and ignored, these symptoms might become more severe and lead to suicidal attempts.
Teen suicide statistics show that cyberbullying, lack of support and mental health issues are among the reasons for the high suicide rates. Teenage death statistics 2019 show that close to one in five teens considered suicide that year.
Unfortunately, with the arrival of the pandemic, these numbers have increased. According to statistics, in March and April 2020, the number of teens reported for self-harm, overdose, and a general anxiety disorder doubled.
As one in five suicidal teens shows clear signs, people must pay closer attention to the symptoms and offer support and understanding to help them with their issues.
People Also Ask
How many teens commit suicide a year?
Suicide is prevalent among youth. Suicide cases among 15-to-29-year-olds account for one-third of suicides worldwide. Teen suicide statistics show that the number of cases committed by people aged 15 to 24 places self-harm as the second leading cause of death for that age group.
How to help a teenager with suicidal thoughts?
The first step is recognizing the warning signs. Parents must create a safe environment where children can share how they feel and what is going on in their lives. The issues the teen expresses must be taken seriously, and parents should seek immediate professional help.
Schools should also work on preventing violence and bullying. The educational institutions have to develop a crisis and response plan. It should include procedures for assisting students suffering from self-harming tendencies or thoughts.
Teachers can learn to recognize behavioral patterns, actively intervene, and ensure that young people get the care they need.
What are the factors leading to an increase in suicidal tendencies in the youth?
Teens often experience immense pressure to succeed in school, suffer mental illnesses, face bullying, move to a new environment, or face losses, all of which may lead to suicidal thoughts.
Age, gender, and cultural and social influences also play a part in teen suicide tendencies. Unfortunately, they often perceive death as the only way out of a difficult period.
Why do many teenagers feel depressed?
Today’s unprecedentedly high rates of depression among teenagers are due to a plethora of reasons. Among the causes are financial situation worries, more exposure to alcohol and drugs, and social media.
Social media is linked to sleep deprivation, unrealistic expectations, and cyberbullying.
Is school linked to depression?
Schools are often linked to teen depression — the need to be successful and score perfect grades, fit in, socialize, and, of course, peer pressure and bullying are among the many factors contributing to teens’ fragile mental health, show teen suicide statistics. | <urn:uuid:5916c0f6-d9c4-40a9-945b-3715b47f830b> | CC-MAIN-2022-40 | https://safeatlast.co/blog/teen-suicide-statistics/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337723.23/warc/CC-MAIN-20221006025949-20221006055949-00492.warc.gz | en | 0.95736 | 3,170 | 2.8125 | 3 |
Published On December 04, 2018Financial firms that act quickly to apply artificial intelligence tools will gain a significant competitive edge.
Nearly 80% of the information captured during the processing of a mortgage loan is in forms that – until recently – have been all but inaccessible to computers. This so-called unstructured data such as emails, phone calls, text documents and images could have great value to financial institutions in helping them better understand customers, assess risk and create new products, but there simply aren’t enough people to read and analyze so much information.
That’s changing, however, thanks to two artificial intelligence-related disciplines called natural language processing (NLP) and machine learning (ML). These companion technologies enable computers to sort through vast amounts of unstructured data to identify patterns that can yield valuable insights that would elude even human analysts, who are limited in the amount of data they can absorb. NLP and ML are already speeding up loan and mortgage application processing, but the more important dividend will be the customer insights they deliver that can help bankers make better decisions.
On the efficiency front, AI can scan documents to look for missing or mis-categorized information and immediately notify applicants of the need for corrections, reducing delays and rework. Machines can also create descriptions of information inside documents, emails and recordings – a type of label called “metadata” – that makes the content easily searchable. That means employees can more quickly find the information they need.
The more intriguing potential of ML and NLP, though, is to better understand customers. For example, emails and phone calls can be analyzed for language that can indicates extreme satisfaction or dissatisfaction. Happy customers can thus be presented with rewards or new product options while angry customers are immediately connected with a sympathetic employee who helps relieve their frustration. Machine learning programs can also comb through large amounts of customer interaction data to identify areas of common concern, helping the bank to prioritize its investments.
The same technology can be used to learn more about what customers want. For example, a bank could mine social media conversations to identify customers who are struggling with eldercare costs or about to send a child to college and reach out with customized loan offers. Or it could scour a customer’s account records to see where that person frequently does business and reach out with an offer.
Conversely, machine learning algorithms can identify patterns of behavior that point to potentially fraudulent activity or raise warning flags about a person’s credit-worthiness. That lowers risk, enabling the institution to invest in better products and services.
The goal is ultimately to get to “markets of one,” in which each customer’s experience is unique and tailored to his or her interests.
Much of what we wish we knew about customers is currently locked up in forms that are impossible or impractical for humans to process. Those barriers are quickly falling, though. Financial firms that act quickly to apply these new tools will gain a significant competitive edge.
To learn more, read the white paper on Data Search and Discovery in Banking | <urn:uuid:12eb3701-d040-4086-b973-07950ef4bc4e> | CC-MAIN-2022-40 | https://www.ironmountain.com/blogs/2018/competitive-advantages-of-implementing-artificial-intelligence-in-banking | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337723.23/warc/CC-MAIN-20221006025949-20221006055949-00492.warc.gz | en | 0.946901 | 623 | 2.515625 | 3 |
Boil down information technology to its basic elements, and what do you get? Bits on a wire (or whizzing through the air, as the case may be).
While early military applications of networking date from the late 1950s, it was the semi-automatic business research environment (SABRE) connecting two mainframes back in 1960 that marks the first commercial computer network. To this day, SABRE remains at the heart of the global airline reservation system.
Networking technologies have undergone many generations of innovation since the early days of SABRE, of course. Today, all of IT depends on it – including the entire scope of cloud computing.
In fact, if the physical layout of cables, switches, and routers still determined the network topology (as it certainly did in the 1960s), then cloud – along with the rest of modern IT – would be a non-starter.
Cloud computing as we know it today wouldn’t exist if it weren’t for a fundamental networking innovation borrowed from the telephony world: the separation of concerns.
Dial 0 for Operator
As any old movie buff knows, human operators used to handle phone call routing manually. Automating the switching system required a separation of concerns between call routing control and the technology that connected the call and kept it live until someone hung up.
These two areas of concern are the control plane and data plane, respectively.
Modern computer networking takes a page out of the telephony playbook, separating the control of the network from the data passing through it.
Network operator configuration of the network takes place in the control plane, including the layout of the network topology itself. Devices on the data plane then move packets according to the policies and rules set out in the control plane.
The power these planes bring to the organization is profound. We can configure the entire network via an abstract representation of it – a model, if you will – and then expect the physical network to follow the instructions in the model automatically.
Control planes are at the heart of cloud computing. Any cloud user can log into their cloud account and configure everything about their instance of the cloud, from the networking on up. How the bits get from place to place is entirely hidden from view, as the cloud provider’s data layer takes care of all the details.
This separation of concerns is as essential to cloud-native computing as it is to the cloud itself. After all, cloud-native means extending the best practices of the cloud to all IT – including the power of the control and data planes.
But the cloud-native story doesn’t end with networking. In fact, the concept of a control plane (along with its corresponding data plane) represents a core cloud-native abstraction that unifies many of the architectural principles behind this new computing paradigm.
Beyond the Service Mesh
The most familiar cloud-native control planes appear in the architecture of service meshes. Service meshes separate the control of east-west (container to container) interactions from the networking paths that define those interactions.
The result is better management of the security considerations for such interactions, along with the configurability of the interactions themselves – even though individual microservices may be ephemeral.
Modern service mesh technologies are still immature, but the cloud-native control plane pattern is in place. The path forward is becoming clearer.
Today, service meshes generally don’t handle north-south traffic between microservices and other systems (aka ingress traffic), leaving API management technologies to handle the challenges of such traffic.
However, the writing is on the wall: there is no fundamental difference between north-south and east-west traffic – that is, no difference at the data plane. Instead, the control plane should be entirely responsible for supporting the configurations of traffic to and from all points of this virtual compass.
Tackling Integration at the Control Plane
Once we abstract all endpoint interactions with a combination service mesh/API gateway that operates at the control plane, what we have done is essentially reinvented enterprise integration.
Ever since enterprise application integration came on the scene in the 1990s, it has largely been a static, design time exercise. Define your endpoints and then figure out the protocols and transformations necessary to get data from here to there.
Dynamic integration – where the middleware responsible for the integration made the choice of a particular endpoint at runtime – was a popular idea in the early 2000s, but turned out to be nothing but a SOA pipedream. The underlying problem: unclear separation between the control and data planes.
In essence, the only way to perform integration back in the day was to include some of the controls in the data plane – a shortcoming that cemented rigidity throughout the entire architecture.
Today we’ve largely learned this lesson. By cleanly separating the control and data planes, cloud-native integration promises a new era in dynamic integration.
This separation abstracts the endpoints, allowing the data plane to resolve them – and thus, the message paths – dynamically at runtime.
At the control plane, either human operators or automated bots work entirely with such abstracted endpoints, defining and managing the policies and rules that govern interactions among them.
Dynamic integration was a nice-to-have in the SOA days, and for the most part we didn’t have it. In contrast, it’s a must-have in today’s cloud-native world, because integration endpoints are inherently ephemeral. We simply don’t have the luxury of improperly separating our control and data planes anymore.
Bringing the Control Plane to Cloud-Native Security
The challenges of securing interactions between abstracted endpoints is at the heart of cloud-native security.
Zero-trust computing, a Forrester term from 2009, considers all endpoints to be untrusted until they are intentionally granted particular privileges. Before cloud-native came computing along, however, security technologies assigned identities to endpoints as though they corresponded to human identities.
In the cloud-native world, endpoints are more likely to be both ephemeral while not associated with a particular human. After all, every smartphone, IoT sensor, microservice, or serverless function can now be an endpoint.
Cloud-native zero trust, or what I like to call trustlessness, must take a different approach. It must cleanly separate the control plane that concerns itself with defining policies for abstracted endpoints from the data plane where enforcement takes place.
Hybrid IT and Multi-Cloud also Depend upon Control Planes
While cloud computing itself depends upon the clean separation between control and data planes, cloud-native computing must extend these abstractions to multi-cloud and hybrid IT (the combination of one or more clouds plus on-premises).
This requirement is a tall order to be sure: imagine a single pane of glass for defining all policies related to the behavior of the entire enterprise hybrid IT landscape. Create or modify a policy, push a button, and an enterprisewide data plane automatically puts the policy into production.
This comprehensive control plane vision for hybrid IT is both extraordinarily ambitious and also essential to the proper working of the hybrid infrastructure. Many vendors are already implementing parts of this vision, and early adopter enterprises are on the road to deploying such cloud-native hybrid IT today.
The Intellyx Take
The cloud-native separation between control and data planes pulls together several important cloud-native principles.
The principle of immutable infrastructure, for example, calls for the configuration of any and all infrastructure in order to define its implementation. To make a change to infrastructure, change the configuration and redeploy.
This configuration takes place at the control plane. The data plane, in turn, must automatically conform to that configuration.
Intent-based computing also depends upon the separation of data and control planes. With intent-based computing, one level of policy configuration represents the business intent for the technology in question. The infrastructure then automatically translates that business intent into configurations that the data plane can automatically implement.
The final piece of intent-based computing: the infrastructure must also continually check to ensure the deployed technology hasn’t drifted from the business intent. If it has, it should bring it back into conformance automatically. Such automation depends upon the clear separation of the planes.
A final word of advice: don’t fall for vendor confusion on this topic. Some vendors are trying to define a third, separate ‘management plane’ – because they have management gear to sell. In reality, management functions are all part of the control plane. There’s no need to overcomplicate matters by adding a third plane.
Similarly, don’t fall for fake planes where control isn’t fully abstracted away from the data plane. Traditional integration software, for example, is notorious for falling into this trap. Some vendors of so-called ‘zero-trust’ software also succumb to this mistake.
Cloud-native is more than a label – yet as with all jargon, people can apply it improperly. Be sure you look more closely at the software wares on offer to be sure they cleanly separate the control and data planes. If they don’t, you can be sure they’re not following cloud-native best practices.
© Intellyx LLC. Intellyx publishes the Intellyx Cloud-Native Computing Poster and advises business leaders and technology vendors on their digital transformation strategies. Intellyx retains editorial control over the content of this document. Image credit: Robin Hutton. | <urn:uuid:546b59d5-1356-492e-abe0-d3cd222ef3f0> | CC-MAIN-2022-40 | https://jasonbloomberg.com/cloud-native/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334579.46/warc/CC-MAIN-20220925132046-20220925162046-00692.warc.gz | en | 0.909488 | 1,967 | 2.640625 | 3 |
Anyone who worked in machining within the past 70 years knew that the future of machining was automation. In the 1940s, computer pioneer John T. Parsons and engineer Frank L. Stulen invented, via a contract with the U.S. Air Force, numerical control milling machines. The machines were designed to help the Air Force produce geometrically complex parts more economically and accurately.
Those machines were the precursor to today’s computer numerical control (CNC) machining tools that use programming to automatically process some material, such as metal or wood, to specification. By the late 1980s, such machines were standard within tool and die, machine shops and manufacturing floors. Today, such digitization of manufacturing is accelerating.
Workers, for the most part, embrace the change and would do more to meet the technical skills demands needed for their jobs – if only their employers supported them. That’s the primary finding from a survey, Digital Illiteracy in the Factory: Providing Workers with Low-Code Tools, commissioned by low-code platform provider Mendix.
Development Tools in Manufacturing
Interestingly, today, low-code development tools are finding a home in manufacturing. According to the survey, nearly 9 out of 10 said they want to learn low-code software development, while only 3% currently use low-code in their jobs. More than two-thirds (67%) say they want to create a software app that would solve work issues. According to this survey and a parallel survey conducted in Europe, U.S. workers are more willing to welcome and contribute to workplace digitalization than their counterparts in Germany (78% versus 61%) — a surprising result, given that German manufacturing workers enjoy a reputation as some of the most skilled in the world.
“Incorporating digital applications is vital to the ongoing competitiveness of manufacturing in the U.S., and it is heartening to see such a large percentage of the manufacturing workforce ready to contribute to this effort,” Gardner Carrick, vice president of strategic initiatives at The Manufacturing Institute said in a statement. “Low-code applications provide our workforce with the tools to contribute to this digital transformation, and we should prioritize training in these skills.”
Workers are Eager to Upskill
The survey found that 87% of U.S. workers want to learn low-code development, and within those respondents, 88% of women who responded indicated being very interested in learning low-code, while 51% of men responded in kind. Perhaps not surprisingly, younger workers are more interested in learning low-code than those nearing retirement.
That may be so, but U.S. manufacturing employers aren’t getting the memo. The survey found that U.S. manufacturers are giving their workers little help in developing their digital skills, and only 18% of workers are happy with their digital skills. “That only 18% are happy with their digital skills is just one sign that more training is required. At a time when the country’s manufacturing industry remains mired in a decades-long slump and when managers clamor for more useful software applications, news reports indicate that the nation’s workforce is ill-equipped to deal with writing code,” Mendix said in a statement.
“The United States needs to move now to get their workers into the digital game,” said Derek Roos, Mendix CEO. “If the country wants ‘Made in the USA’ to mean something again, and if it wants to reduce unemployment and keep its economy vital and thriving, manufacturing and industry should provide low-code and other tools that help bridge the digital divide.”
The survey was conducted from Jan. 29 through Feb. 11, among 250 full-time manufacturing workers in the U.S. and 250 within Germany. The survey’s objectives included learning workers’ attitudes and motivations, as well as obstacles that prevent them from acquiring low-code and other digital skills. | <urn:uuid:71614d02-fd2d-4767-bd08-974d8f76a1ca> | CC-MAIN-2022-40 | https://digitalcxo.com/article/manufacturing-workers-eager-to-digitally-upskill/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335355.2/warc/CC-MAIN-20220929131813-20220929161813-00692.warc.gz | en | 0.962936 | 828 | 2.5625 | 3 |
In this blog post
Social Determinants of Health or SDoH refers to the conditions that people are surrounded and affected by during the different phases of their lives, at personal as well as professional levels. This includes conditions at home and work. These situations at local, national, and global levels are influenced by the distribution of resources, money, and power. SDoH are also often referred to as medical social determinants of health owing to how important they are in determining the quality and accessibility of medical care. The pace at which we are moving forward, we might be looking at biological and genetic determinants not too long from now. However, whether it will be as easy to modify them as it is to modify social determinants is a question that remains unanswered.
Clinicians and others in the healthcare industry believe that the healthcare outcome of patients has more to do with their conditions and surroundings than anything else. It would be wrong to take SDoH as a panacea but learning and understanding the social conditions can play a very important role in building a strategy that goes beyond social boundaries and ethnic norms to ensure affordable and improved care as well as healthy people and communities.
It is important for the healthcare stakeholders, especially healthcare entities to understand the relationship that exists between medical care and social care. And there are three categories that we can divide health and care into – social health (everything to do with food, housing, and other social needs), medical care, and behavioral health. These three categories put together provide a more comprehensive view of health and well-being of people, negating the idea that eliminates the inclusion of behavioral and social services as integral parts of healthcare.
Social determinants of health have a direct impact on health at a personal level, community level, as well as national level. And they are found to have a greater impact on health and well-being of people than medical care, biology, and behaviour amongst other important factors. They also affect the kind of access that people have to things that can help them achieve optimal health. The only factor that is a bigger contributor to illness and death at a global level, than factors like smoking, diabetes, alcohol, physical inactivity, and obesity amongst others, is the socioeconomic status of people.
There is a long-standing relationship between people’s health and their socioeconomic status that can be seen in easy and quality access to health that people with higher socioeconomic status get. The same can’t be said about those who are present lower down the socioeconomic tree. Medical care is often thought of as being one of the biggest contributors to optimal health; however, in actuality, it doesn’t seem to have too much of an impact. It only has a very small share in the set of modifiable contributors that impact the health of a community or a population. The key factors that account for a larger share of the contributor pie include economic, social, and environmental factors.
The neighborhoods and environment of people also affect their health outcomes significantly. There are many people across the country, especially those belonging to minority groups, who are exposed to risks associated with polluted air, polluted water, violence, and other such things. In addition, some people are also exposed to health and safety risks at work. The authorities at different levels need to come up with measures to minimize these risks. Some of the things that they can do is to provide these people with amenities that can help them improve the quality of their lives.
Access to quality healthcare has always been an issue. People who need healthcare services the most don’t get timely or any access to them. There are many people out there on the streets who don’t have any health insurance. It is very difficult for these people to get the care and treatment they need. The governments at local, state, and federal levels need to work on increasing insurance coverage rates to improve healthcare accessibility for the less privileged.
Educated people have been found to be more conscious of their health and safety. This is the reason they live a healthier and longer life than those that aren’t as educated. The lack of education amongst people is most likely caused by low income, social discrimination, and disabilities. People without higher education are also less likely to land high-paying jobs that can help them afford quality healthcare services. Authorities need to work on making higher education more accessible and affordable and build better-performing schools in neighborhoods that aren’t as developed as others.
Now, where do data and insights come into the picture? By using advanced tools, stakeholders in the healthcare industry will have access to data that clearly showcase various factors that impact an individual’s life and health. It involves an individual’s occupation, socioeconomic status, neighborhood factors, and more. This data can provide insights to not only analyze but also improve population health. These data-backed insights are much more effective in studying the factors and triggering a response to health issues than information that comes from traditional sources.
The primary objective of healthcare systems is to ensure timely and effective delivery of care to individuals as well as communities that they are supposed to cater to. Healthcare systems often run on very small margins but still manage to provide care to the economically underprivileged people in the society, most of whom are either are not covered at all under any insurance or have very limited access to quality care as a result of underinsurance.
One way of overcoming this gap is by bringing more easily accessible and at the same time affordable healthcare technology into the mix. The journey towards automation and digitalization has already begun in the healthcare industry. And with the advancement in technology, healthcare establishments are in a better position to promise improved care, irrespective of social determinants of health. Technology that aggravates inequities is only going to pose a concern for population health. On the other hand, technology that is developed keeping into consideration SDoH, as well as socioeconomic and sociodemographic factors, can be more efficient in providing access to quality care to all people, regardless of their location, age, income, ethnicity, and education amongst other things.
Having said that, it is highly likely that older people, people living in rural and remote areas as well as those from the minority groups or who are lower down the socioeconomic status, don’t have the knowledge and literacy required to access digital health solutions available to them. To not let social divisions deny deserving people access to quality healthcare solutions, it is important that there is an incessant push to upgrade healthcare technology. Technological innovations such as artificial intelligence, telehealth, big data analytics, and others can have a huge impact in improving different aspects of healthcare for public – medical diagnosis, care management at home, treatment decisions supported by data, and personalized care to list a few.
The biggest need for healthcare organizations right now is to address SDoH when building their quality care strategy. This is one of the many issues that are stopping people from gaining access to optimal health solutions. Organizations need to work with their tech partners to develop and prioritize healthcare solutions that are in line with SDoH. The objective of building any such solution should always be to deliver affordable healthcare. | <urn:uuid:7612d1f7-60f3-418d-84d0-a57b9a044ca4> | CC-MAIN-2022-40 | https://www.gavstech.com/impact-of-social-determinants-of-health-in-the-healthcare-industry/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335355.2/warc/CC-MAIN-20220929131813-20220929161813-00692.warc.gz | en | 0.97551 | 1,453 | 3.828125 | 4 |
The rapid adoption of remote work alongside the migration of data to the cloud has made cyber security an increasingly important priority for organizations and consumers alike. 2021 broke the record for most data breaches in a year, and there is good reason to believe 2022 will continue that trend.
When most people think of breaches they imagine hackers brute-forcing their way through a system, but that is not representative of a typical data breach. Most security incidents are a result of hackers exploiting human weaknesses to gain access to systems containing sensitive data.
One of the most common ways hackers infiltrate secure systems is through social engineering. According to Verizon's Data Breach Investigations Report, 93% of successful breaches are conducted using social engineering attacks! This post will explain what social engineering is, give a few examples of common social engineering techniques, and provide 5 ways to prevent social engineering attacks.
Social engineering is when hackers use psychological manipulation to trick people into giving up confidential information or login credentials. The most common way they will do this is known as phishing. Phishing attacks are when a hacker will pretend to be someone else (often someone the victim trusts) and persuade the victim to give up private data. People typically disclose this information either by willingly sending it directly to the hacker or by clicking on a link that steals their information.
Depending on the situation, many people will be completely unaware that they are allowing unauthorized access to private resources until it’s too late. Effective phishing attempts make it difficult for a victim to realize they are being phished, especially when the sender appears to be someone they trust. This type of social engineering attack is particularly effective because it is much easier to trick someone into letting you into a system than brute-forcing your way in. Cracking someone’s password can take lots of time and resources, but social engineering attacks can be conducted quickly and cheaply. With the rise in phishing and social engineering attacks, it’s important to be able to recognize some common forms of social engineering attacks in 2022.
An extremely dangerous form of social engineering is when hackers will pretend to be a bank, credit card company, or other financial institution. They will often strike around times where money is a common issue such as holidays or tax season and convince the victim they need their login information to resolve a financial issue. These attacks are some of the most devastating since a hacker can drain someone’s bank account or max out their cards if the victim discloses too much personal or financial information.
Pretending to be someone the victim trusts is one of the most common forms of social engineering attacks. Hackers will typically scour the internet for someone’s personal information in order to convincingly come across as someone the victim knows. They often pretend to be in distress to create a sense of urgency. If the hacker has already breached someone’s email address, phone, number, or social networking sites the message might actually come from their real accounts. A hacker may also pretend to be someone’s co-worker and request logins or sensitive data for work-related purposes. This is one of the most effective ways to socially engineer someone since people are naturally more trusting with people they know (or think they know).
Everyone’s heard jokes about the Nigerian Prince scam, but fake offers and giveaways are no laughing matter. These types of attacks happen when social engineers convince people to click on links or send them personal information by signing up for a fake offer. These offers usually seem too good to be true, and in this case they are! An increasingly popular social engineering attack in 2022 is scamming for cryptocurrency. Hackers will offer to make people money by giving away cryptocurrency if the victim discloses their wallet information. They may even send links to fake cryptocurrency wallets that contain malicious software. Spoiler alert: the victims never get their money back.
Fake website pages often go hand-in-hand with fraudulent offers, but they can be a part of any kind of social engineering attempt. Social engineers will create fake website landing pages that resemble a companies real website. These malicious websites are often e-commerce or banking websites where people enter financial credentials. People that click on links to fake websites will willingly fill out forms with their usernames and passwords, only for the hackers to instantly gain access to their real logins.
Now that you know some of the most common forms of social engineering attacks, here are some best practices to protect yourself and your organization.
If you get a message from a friend, family member, or co-worker that seems like it may be a social engineering attempt, contact them via some other medium or in person if possible to double check that they were the ones who sent the message. Do not respond to the suspicious messages directly in case the hackers have gained control of their accounts.
If you get a message from your bank or some other corporation, contact them separately to make sure they actually sent you a message. No financial institutions will ask you to submit sensitive information via email or social media. In general, reject requests to send information over non-secure channels. This also applies to if you get a text message from your phone company containing a link. If you ever have reason to believe any messages you receive may contain a phishing attempt, make sure to do your own research to verify they are from legitimate companies.
Although social media phishing is on the rise, email still remains by far the largest medium for hackers to trick people into disclosing valuable information. A spam filter is a great way to stop risky emails from reaching your inbox in the first place.
Spam filters will automatically detect common signs of fraudulent emails such as misspelled words, extravagant offers, or sketchy links. Another indicator that an email is a phishing attack is if it asks you to install software. Almost all companies will use anti-virus software firewalls on top of email filters on work devices, but you should use one for personal devices as well. Most spam filters also have settings that allow you to adjust how strict they are in filtering emails. For maximum security, set your spam filter to a ‘high’ setting to keep your inbox secure.
As an increasing amount of our lives are shared online via social media, it’s becoming more and more important to know what information is publicly available to malicious parties looking to optimize their phishing scams. Although many phishing attacks are conducted on a massive scale, spear-phishing refers to the practice of using personal information to target specific individuals. Spear phishers can be very convincing when pretending to be someone and are experts at taking advantage of people that don't take steps to protect themselves.
One of the best ways to prevent spear-phishing is to monitor your digital footprint. Oversharing private information online allows social engineers to learn more about you in order to impersonate your friends, family, or co-workers. You should also make sure your social circles are not posting your personal information online without asking you first. Monitoring your digital footprint isn’t just important for stopping breaches, it’s crucial if you want to maintain your privacy and protect against identity theft.
Utilizing multi-factor authentication for logins is a great way to ensure your accounts are not accessed even if you share your credentials or click on a phishing link. Multi-factor authentication offers a backup layer of protection to make sure that security breaches don’t cause significant damages.
Consider using an authenticator app such as Duo, Google Authenticator, or Authy instead of a mobile number. SIM hijacking, a process where hackers gain control of a mobile number to pass text-based 2FA, is becoming increasingly common. Physical authentication keys are an even better way to protect your accounts from unauthorized logins.
The final tip for preventing social engineering attacks is to always remain diligent. Treat every email you get with suspicion and report any sketchy emails to your IT team if relevant. If someone sends you an offer that is too good to be true, they are trying to take advantage of you. If you receive an email that may contain a fake link to a companies website, manually type out the domain to verify it is a legitimate site. Don't believe that any unsolicited messages from tech support are real. Even a seemingly innocuous phone call might actually be voice phishing. While there are tons of little steps you can take to stop social engineering, staying alert and emphasizing security at all times is the best way to stay safe.
Social engineering schemes are becoming more and more sophisticated. The days of typo-riddled emails offering millions of dollars are behind us. Hackers are utilizing more personal data to personalize phishing communications with an unprecedented level of detail. With this rise in social engineering, it is more important than ever to remain cautious and use best online security protocols at all times.
All companies should conduct frequent security awareness training that includes anti phishing techniques. Hackers will target the weakest link in a security chain, so it's essential that all employees get properly trained. If your organization is looking for security awareness training that's relevant to modern work, check out Haekka! Haekka was built from the ground up with a focus on protecting companies from rapidly evolving cyber threats. Haekka is integrated into Slack to make securing your workforce as simple as possible. If you want to check out Haekka for yourself, schedule a demo with one of our founders today! | <urn:uuid:70b98ad6-d70d-4b11-8f91-65545fa00623> | CC-MAIN-2022-40 | https://www.haekka.com/blog/5-ways-to-prevent-social-engineering-attacks | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335504.37/warc/CC-MAIN-20220930212504-20221001002504-00692.warc.gz | en | 0.944182 | 1,916 | 3.078125 | 3 |
Today In History May 9
1868 The city of Reno, Nevada, is founded
Reno city in the northwest segment of the U.S. province of Nevada, around 22 miles (35 km) from Lake Tahoe, known as “The Biggest Little City in the World”. Known for its gambling club and the travel industry, Reno is the district seat and biggest city of Washoe County and sits in a high desert waterway valley at the foot of the Sierra Nevada and its midtown zone (alongside Sparks) possesses a valley casually known as the Truckee Meadows, which because of enormous scope speculations from Seattle and Bay Area organizations, for example, Amazon, Tesla, Panasonic, Microsoft, Apple and Google has become another significant innovation center point in the United States. The city is named after Union Major General Jesse L. Reno, who was murdered in real life during the American Civil War at the Battle of South Mountain on Fox’s Gap.
Reno is a piece of the Reno–Sparks metropolitan zone, second generally crowded in Nevada after Las Vegas-Henderson, the two of which are a piece of the Las Vegas Valley. More noteworthy Reno, which comprises of Washoe, Story, and Lyon regions in addition to Carson City (the capital of Nevada), is the second biggest metropolitan zone in Nevada.
1944 1st eye bank opens in New York
The primary eye bank was established in 1944. As of now, in the United States, eye banks give tissue to more than 80,000 corneal transplants every year to treat conditions, for example, keratoconus and corneal scarring. Sometimes, the white of the eye (sclera) is utilized to carefully fix beneficiary eyes. In contrast to different organs and tissues, there is a sufficient gracefully of corneas for transplants in the United States, and abundance tissue is traded globally, where there are deficiencies in numerous nations, because of more prominent interest and a less-created eye banking framework.
The foundation of the world’s first eye bank was just the start of the incredible advances taken to improve corneal transplantation and to expand eye banking’s impact in the transplantation network. In 1955, 27 ophthalmologists (speaking to 12 eye banks), met with four significant clinical gatherings under the support of the American Academy of Ophthalmology and Otolaryngology (AAO&O). During that gathering, a Committee on Eye-Banks was framed and Paton was named Chairman.
1960 US becomes the first country to legalize the birth control pill
The Food and Drug Administration (FDA) supports the world’s first monetarily delivered conception prevention charge Enovid-10, made by the G.D. Searle Company of Chicago, Illinois.
Improvement of “the pill,” as it turned out to be prevalently known, was at first charged by conception prevention pioneer Margaret Sanger and supported by beneficiary Katherine McCormick. Sanger, who opened the principal anti-conception medication center in the United States in 1916, would have liked to support the advancement of an increasingly functional and powerful option in contrast to contraceptives that were being used at that point.
In the mid-1950s, Gregory Pincus, an organic chemist at the Worcester Foundation for Experimental Biology, and John Rock, a gynaecologist at Harvard Medical School, started take a shot at a conception prevention pill. Clinical trial of the pill, which utilized engineered progesterone and estrogen to quell ovulation in ladies, were started in 1954. On May 9, 1960, the FDA endorsed the pill, conceding more noteworthy regenerative opportunity to American ladies.
1966 1st black member of Federal Reserve Board (A F Brimmer)
Andrew Felton Brimmer was a prominent United States market analyst, scholastic, and business pioneer who was the main African American to have filled in as a legislative leader of the Federal Reserve System.
While he was still at Harvard, Brimmer worked at the Federal Reserve Bank of New York as a financial expert, and built up the national bank of the Sudan. After graduation, Brimmer became associate secretary of monetary issues in the U.S. Branch of Commerce. In 1966, under arrangement from U.S. President Lyndon B. Johnson, Brimmer started an eight-year term on the leading group of governors of the Federal Reserve, turning into the principal African American in that position. In 1974, Brimmer left the Federal Reserve and instructed at Harvard University for a long time. From that point, he shaped his own counseling organization, Brimmer and Company. He was a trustee of the Economists for Peace and Security. | <urn:uuid:b92f25bd-d43d-4281-ae69-da97fa238182> | CC-MAIN-2022-40 | https://areflect.com/2020/05/09/today-in-history-may-9/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337421.33/warc/CC-MAIN-20221003133425-20221003163425-00692.warc.gz | en | 0.956252 | 968 | 3.046875 | 3 |
The name seems tailor-made for a Seinfeld bit: “What is the deal with floppy disks? They’re not floppy, and they’re not disks!”
Certainly the 3.5-inch colorful plastic squares that today represent the majority of floppies in circulation don’t appear to fit the description implied by their moniker, but the name refers to what came before, and what still lies beneath.
The first floppies were flexible 8-inch plastic disks coated with iron oxide and housed in a protective jacket lined with a fabric that would clean the surface of the disk as it rotated. Now obsolete, they were produced by IBM in 1971 in response to a problem with its System 370 computer. (The 370’s operating instructions were stored in semiconductor memory; turning off the machine erased the instructions.) The disk could store about 80,000 bytes, and it launched the era of the personal computer.
As computers got smaller, so did the disks. The 5.25-inch version came along in 1976—its size allegedly inspired by a cocktail napkin its developers came across while talking shop in a Boston bar. It has since been usurped by the 3.5-inch diskette, which Sony launched in 1981. Though even these are losing ground to newer tools for transferring files between machines, most computers still include slots to hold these little items, a comfort to the wise (or paranoid) among us who still rely on them to back up our hard drives. | <urn:uuid:183e28d5-4af9-40c2-946e-f7bb400b026e> | CC-MAIN-2022-40 | https://www.cio.com/article/264568/infrastructure-the-history-of-floppy-disks.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337421.33/warc/CC-MAIN-20221003133425-20221003163425-00692.warc.gz | en | 0.968312 | 308 | 2.78125 | 3 |
The initial stage of a penetration test requires the testers to gather information about the target organization and its IT infrastructure. This is also similar to what the attackers do in planning their attacks: attempt to gather as much as information possible to plan a successful attack. There are many mechanisms to gather intelligence secretly, but it begins with finding data from publicly available sources. Given that we have a plethora of online platforms available, plenty of open-source intelligence is readily available. In our engagements over the years, we have understood that sufficient information about organizations is available in the public domain that we can use to craft targeted attacks. This article explores what open-source intelligence is, what it entails, various types, approaches, and how it helps us in security testing exercises.
LIFARS Cyber Resiliency Program is a subscription-based program that provides the manpower and expertise to immediately respond and remediate cyber incidents and breaches, in addition to providing full array of services to increase your company’s cyber resiliency.
What is OSINT?
Open-source intelligence refers to gathering data from freely accessible and available sources for a variety of purposes. In other words, open-source intelligence includes any data that you collect lawfully from publicly available sources about organizations or individuals. Law enforcement agencies, cybersecurity professionals, as well as attackers, utilize OSINT techniques to siphon through the massive pile of data and find relevant information.
It is possible that a penetration testing team can find information about an organization that they do not comprehend is publicly available. To improve an organization’s security posture, security teams are increasingly relying on OSINT techniques to expand the scope of their defensive measures. OSINT can include information in any form available on the internet. This can include videos, images, webinars, online courses, articles, books, etc. It can range from a web search for IP addresses to legislative records or court records maintained by the governments and courts respectively.
Tools utilized for OSINT exercises may not be necessarily open-source. There are commercial tools available in the market that help you in finding the relevant information. Moreover, in some cases, you may not even need dedicated tools to gather and extract data. OSINT is limited to information gathering and does not involve unauthorized access to employees’ social media accounts and similar activities. OSINT information must not be drawn from the restrictive or limited access.
Why is OSINT important?
Security teams are adopting OSINT techniques to adopt attacker-like approaches to implement defensive security measures for their organizations. Some of the common reasons that we come across as to why organizations pursue OSINT are:
- Identifying unintentional leakage of sensitive data through social media networks and other publicly available platforms
- Finding insecure devices connect to the organizational network with open network ports
- Obsolete or updated software and application packages
- Leakage of highly confidential information such as trade secrets and source code
As threats continue to grow in sophistication, it becomes harder for security teams to keep a continuous check on the entire IT infrastructure. While there are automated tools and technologies minimizing the burden, OSINT can contribute to security operations by providing information about attack tactics and techniques. While the information gathered from OSINT sources is often unstructured, security teams are expected to establish a relationship between various data points.
OSINT Methodologies and Approaches
Individuals and organizations have been using OSINT for the longest possible time, without even knowing it. For example, marketing companies collect data about their potential customers to boost their conversion rate. In the cybersecurity industry, adopting a practical approach for OSINT turns out to be a boon. For any security program, recognizing and mitigating existing risks are a primary concern. An organization needs to utilize all possible resources to put their best foot forward. In such a situation, there does not appear to be any reasonable cause as to why organizations avoid deriving the benefits of OSINT.
While there are no strictly defined methodologies for organizations to choose from, the onus lies on organizations to determine their objectives from OSINT exercises. For example, an organization may seek to find:
- Scope of personal and professional information available in the public domain
- Relevant search queries for organization, its technical infrastructure, and components thereof
- Employee activity on online discussion forums and the nature of information shared therein
- Contact information (corporate as well as employees’) available on the internet
- Using relevant keywords to check the availability of confidentiality information
- Using open-source for satellite images for obtaining topographical pictures of a location
Your organization’s OSINT methodology will evolve with time, and the return-on-investment (ROI) will start getting visible. While there is no hard and fast methodology, your organization can either adopt either an offensive or defensive approach.
1. Offensive/active approach
When your team establishes contact with individuals for gathering information, it is referred to as the offensive approach. However, targeted individuals may be able to identify the team members involved in the exercise. In some cases, we have observed that as soon as an individual becomes aware of the team’s actual motives, they are likely to avoid any communication with the victim. Scanning a website using a vulnerability scanning tool will be an example of offensive open-source intelligence exercise. Although, some targeted individuals may identify the team members and make an attempt to trace them.
2. Passive/inactive approach
In terms of visibility, this approach is comparatively better than the previous one. It relies on gathering information hosted by third-party sources and archival platforms. When instantaneous information is not readily available, third-party sources prove to be crucial. Given that the information may be outdated or incorrect, a team must not rely on every piece of information they encounter with absolute surety. For better handling and analysis of collected information, analysts commonly use machine learning tools in large scale OSINT operations.
Threats associated with OSINT exercises
- Identification threat: This threat is evident in exercises involving dynamic OSINT approach. As you are establlishing direct communication with the targeted individual, it may reveal one or more team members’ identity.
- Data loss threat: If a targeted individual becomes aware that someone is tracking their digital footprints, they may undertake the required efforts to eradicate their imprints. In some cases, it is even possible that the relevant pieces of content are taken down altogether.
- Victim threat: It is possible that your organization ends up on the other side of an OSINT assessment. OSINT teams shall not disclose Organization-specific information at any point in time, and they must prefer trusted utilities such as VPN, proxies, and APIs, among others.
While discussing open-source intelligence techniques and how they can be useful, one cannot deny the fact that it is a double-edged sword. Just like your team will collect information about your organization, the attackers can do the same. Modern-day attackers invest sufficient time and resources to plan their attacks carefully. Gathering information about their potential target is a vital component in their preparation process. Attackers may use social engineering techniques like phishing and vishing to trick individuals into sharing sensitive information. This information will likely form the base of their attack. On the other hand, organizations can utilize OSINT techniques to minimize the information available in the public domain about their business operations. For any organization that seeks to utilize OSINT exercises, they must do so within the boundaries of the relevant laws. | <urn:uuid:10fe2d31-01ea-4098-b5dc-36b3bbb65e7d> | CC-MAIN-2022-40 | https://www.lifars.com/2021/01/what-is-open-source-intelligence-and-why-is-it-important/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337731.82/warc/CC-MAIN-20221006061224-20221006091224-00692.warc.gz | en | 0.92867 | 1,517 | 2.6875 | 3 |
23 Apr What Is Google Fiber and How Could It Change the Internet?
Google Fiber is Google’s Internet service that builds fiber optic networks and provides Internet access for people in their homes in major cities. However, Google is exploring expanding their Fiber division by developing new technologies to offer high speed wireless broadband service to give people another alternative to having Internet access in their homes.
Currently, people can choose between a wired service, like Fiber, or a wireless data plan through their smartphone provider. The problem with current wireless cellular plans is the cost for high speed data. Google is looking to change that by offering people a more affordable option for wireless access on a flat rate plan, similar to the plans offered for wired high speed Internet access, and at much higher speeds than available through cellular providers.
The only thing holding Google back at this point is developing the technology and determining how to effectively “beam” broadband Internet access into people’s homes. Google has been connecting their existing fiber optic network lines to wireless transmission towers and conducting experiments with proprietary wireless technologies. If Google is able to successfully create a faster wireless broadband solution, it would not only open up more competition in the wireless Internet market, but provide people another option for obtaining Internet access in their homes.
It will be interesting to see whether Google is able to develop an effective solution and overcome current speed limitation barriers in wireless technologies. In the meantime, for cost-effective Cloud computing, and virtual desktop services and solutions for your small business, call the experts at CyberlinkASP now at (972) 262-5200. | <urn:uuid:75343bcc-841c-4200-b5fe-33be19f44cae> | CC-MAIN-2022-40 | https://www.cyberlinkasp.com/insights/google-fiber-change-internet/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334596.27/warc/CC-MAIN-20220925193816-20220925223816-00092.warc.gz | en | 0.935627 | 322 | 2.84375 | 3 |
WASHINGTON—The U.S. Census Bureau announced today that the 2020 Census shows the resident population of the United States on April 1, 2020, was 331,449,281.
The U.S. resident population represents the total number of people living in the 50 states and the District of Columbia. The resident population increased by 22,703,743 or 7.4% from 308,745,538 in 2010.
“The American public deserves a big thank you for its overwhelming response to the 2020 Census,” Secretary of Commerce Gina Raimondo said. “Despite many challenges, our nation completed a census for the 24th time. This act is fundamental to our democracy and a declaration of our growth and resilience. I also want to thank the team at the U.S. Census Bureau, who overcame unprecedented challenges to collect and produce high-quality data that will inform decision-making for years to come.”
“We are proud to release these first results from the 2020 Census today. These results reflect the tireless commitment from the entire Census Bureau team to produce the highest-quality statistics that will continue to shape the future of our country,” acting Census Bureau Director Ron Jarmin said. “And in a first for the Census Bureau, we are releasing data quality metrics on the same day we’re making the resident population counts available to the public. We are confident that today’s 2020 Census results meet our high data quality standards.”
The new resident population statistics for the United States, each of the 50 states, the District of Columbia and Puerto Rico are available on census.gov.
- The most populous state was California (39,538,223); the least populous was Wyoming (576,851).
- The state that gained the most numerically since the 2010 Census was Texas (up 3,999,944 to 29,145,505).
- The fastest-growing state since the 2010 Census was Utah (up 18.4% to 3,271,616).
- Puerto Rico’s resident population was 3,285,874, down 11.8% from 3,725,789 in the 2010 Census.
In addition to these newly released statistics, today Secretary Raimondo delivered to President Biden the population counts to be used for apportioning the seats in the U.S. House of Representatives. In accordance with Title 2 of the U.S. Code, a congressionally defined formula is applied to the apportionment population to distribute the 435 seats in the U.S. House of Representatives among the states.
The apportionment population consists of the resident population of the 50 states, plus the overseas military and federal civilian employees and their dependents living with them overseas who could be allocated to a home state. The populations of the District of Columbia and Puerto Rico are excluded from the apportionment population because they do not have voting seats in Congress. The counts of overseas federal employees (and their dependents) are used for apportionment purposes only.
- After the 1790 Census, each member of the House represented about 34,000 residents. Since then, the House has more than quadrupled in size (from 105 to 435 seats), and each member will represent an average of 761,169 people based on the 2020 Census.
- Texas will gain two seats in the House of Representatives, five states will gain one seat each (Colorado, Florida, Montana, North Carolina, and Oregon), seven states will lose one seat each (California, Illinois, Michigan, New York, Ohio, Pennsylvania, and West Virginia), and the remaining states’ number of seats will not change based on the 2020 Census.
Upon receipt of the apportionment counts, the president will transmit them to the 117th Congress. The reapportioned Congress will be the 118th, which convenes in January 2023.
“Our work doesn’t stop here,” added acting Director Jarmin. “Now that the apportionment counts are delivered, we will begin the additional activities needed to create and deliver the redistricting data that were previously delayed due to COVID-19.”
Redistricting data include the local area counts states need to redraw or “redistrict” legislative boundaries. Due to modifications to processing activities, COVID-19 data collections delays, and the Census Bureau’s obligation to provide high-quality data, states are expected to receive redistricting data by August 16, and the full redistricting data with toolkits for ease of use will be delivered by September 30. The Census Bureau will notify the public prior to releasing the data.
— Source: U.S. Census
A D V E R T I S E M E N T | <urn:uuid:cea2bb5d-dde4-490f-af97-8b590a9a2795> | CC-MAIN-2022-40 | https://bdpatoday.com/2021/04/26/2020-census-data-is-out-apportionment-results-delivered-to-the-president/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334987.39/warc/CC-MAIN-20220927033539-20220927063539-00092.warc.gz | en | 0.944536 | 986 | 2.609375 | 3 |
Tragedy can bring out the best and worst in people. On the one hand, the United States saw an outpouring of support from other nations following the bombing that occurred at the Boston Marathon. In addition, numerous corporations and countless citizens have donated money to charities in support of the attack’s victims. On the other hand, several cybercriminals have crawled out of the woodwork looking to capitalize on that spirit of generosity and profit from those attempting to help others in need.
The FBI issued a notification warning the public that cybercriminals were using the tragic event to try and scam internet users with malware. Some instances simply involved fraudulent links to purported news sites covering the bombing. When users clicked on them, however, they were taken to compromised sites rife with malware. In more insidious cases, cybercriminals posed as official organizations representing the Boston Marathon and attempted to solicit donations from members of social media sites.
The FBI recommended that internet users take several precautions to avoid scams such as these:
- Never agree to download software in order to view site content
- Practice due diligence before making any donations to a purported charity
- Only make donations through secured financial transactions such as SSL certificate-secured credit card payments
- Never click on links received in an unsolicited email, even if they appear legitimate
Precautionary internet surfing practices are a good safeguard against cybercriminals, but users should take steps beyond that as well. People cannot be expected to be alert to the threat of hackers and data thieves at all times. That is why quality application control or whitelisting tools can be major resources for any internet user. If a malware program attempts to launch on a computer, whitelisting software will recognize it as an unauthorized application and block it. With these tools, people can be assured that unidentified malware cannot run on their systems. | <urn:uuid:b6a7dc23-edfe-4a40-b445-bd25ebe74ad3> | CC-MAIN-2022-40 | https://www.faronics.com/news/blog/cybercriminals-try-to-capitalize-on-tragedy | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334987.39/warc/CC-MAIN-20220927033539-20220927063539-00092.warc.gz | en | 0.953456 | 374 | 2.546875 | 3 |
A recent 60 Minutes television program exposed vulnerabilities in the world’s mobile carrier networks. This particular show talked about a flaw in SS7, a key protocol used by wireless networks, that lets hackers listen in on your phone calls and read your texts.
This information will come as no surprise to some. Like the Internet itself, mobile wireless networks were never designed for enterprise-grade security and protection against determined and sophisticated hackers. For example, IMSI-catchers represent another threat to privacy when using mobile networks.
But there are simple actions you can take to protect the privacy of your sensitive data – phone calls, text messages, e-mails, etc. – that you transmit over mobile networks using mobile devices. The simple rule of thumb: always encrypt your data before it hits the wireless network.
Phone Calls: use encrypted voice-over-IP – examples include BBM Enterprise(formerly known as BBM Protected) and SecuSUITE as well as a wide range of apps in major app stores (as long as you’re comfortable with the security of the app and its developers).
Text Messages: encrypted services include BBM Enterprise as well as a wide range of apps in major app stores.
E-mail: be sure to use services that employ end-to-end encryption. Many common consumer e-mail services offer encryption between device and cloud e-mail server but fall down when messages are then forwarded to users not on the same e-mail service.
Business Users: BlackBerry provides end-to-end encryption of the communications channel as well as S/MIME and PGP message encryption, an extra level of protection that ensures only your intended recipients can access the mail, regardless of their choice of e-mail service.
File Sharing: BlackBerry’s Workspaces is one way to ensure file data is protected regardless of which networks are used to transmit the data.
For all the aforementioned technologies – encrypted voice, text, e-mail – BlackBerry’s apps are cross-platform, supporting any operating system (iOS, Android, Windows, BlackBerry) that you (or your friends, family, and co-workers) may prefer.
Another important data privacy tool for mobile networks is the VPN, or virtual private network. If all your information flows over a VPN, it will be protected between the device and the VPN server on the other end. All mobile devices managed by BlackBerry UEM software – including those running BlackBerry, Android, iOS, and Windows Phone operating systems – include a built-in end-to-end protected connection between the device and the enterprise network. Users can use any physical network – including open Wi-Fi networks and the carrier mobile networks – and still rest assured that business information is protected.
Hiding In Plain Sight
There remains one other privacy concern brought up by the 60 Minutes piece: location. Unfortunately, mobile networks were designed to uniquely identify mobile devices (and by association, their users). For example, the IMEI number from the modem chipset and the IMSI number from the SIM card are incorporated into mobile network communications emanating from your mobile device and cannot be inhibited by the mobile OS or any apps. Rules set up by the mobile network policy organization – GSMA – require these identifiers be present.
When your device connects to mobile networks, this identifying information is recorded and could be disclosed via lawful government access requests to mobile network providers or by hackers that gain unauthorized access to the mobile network infrastructure. If you are worried about your location being tracked, the safest thing to do is avoid mobile networks entirely: use Wi-Fi data networks (with trusted access points and the aforementioned data encryption enabled) for all communications and disable mobile networks in your device settings.
Modern VoIP and text message services provide excellent quality, often better than the built-in mobile network calling and messaging services. If you must use the mobile network, as many of us do, then maximize your use of encrypted communications as described above. For example, if all of your phone calls are VoIP-based, then identifying information associated with the caller and receiver on a mobile network will simply not exist to be hacked.
Cloudy With A Chance of Hacking
Protecting your privacy on mobile goes beyond just the recently reported cell network risks, of course. When you use cloud apps such as Facebook, Yahoo, and Dropbox, your personal information is being stored in servers managed by these service providers and therefore could be exposed by hackers who can gain access to those servers or by lawful access requests made to those service providers. Services like WatchDox and BBM Enterprise that enable the data owner to control encryption – instead of the service provider – assure your privacy regardless of network or cloud service.
Your location information may also be tracked by these third-party services. And again, those services could be hacked (or subject to lawful access requests) that would expose your location. Most mobile operating systems provide options to disable location services, a draconian approach that limits device functionality but an option when privacy is at a premium. BlackBerry’s PRIV smartphone has a unique feature called DTEK which lets you track, receive notifications about, and disable location-gathering attempts made by your apps.
These simple steps will go a long way towards protecting you against the most common online attacks. At BlackBerry, we’re committed to providing all of the necessary tools to help you encrypt your data and protect your privacy.
Security standards around connected medical devices are woefully lacking, but that’s about to change. Don’t miss the unveiling of DTSec, the first consensus cybersecurity standard for medical devices with security and assurance requirements, by BlackBerry Chief Security Officer David Kleidermacher. It’ll happen May 23-24 at MEDSec 2016, the first international conference covering security and privacy for the Internet of Medical Things. Learn more and register today at MEDSecMeeting.org. | <urn:uuid:e67e2398-71ca-4149-819c-518370612404> | CC-MAIN-2022-40 | https://blogs.blackberry.com/en/2016/04/how-to-protect-yourself-from-ss7-and-other-cellular-network-vulnerabilities | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337322.29/warc/CC-MAIN-20221002115028-20221002145028-00092.warc.gz | en | 0.926594 | 1,206 | 2.859375 | 3 |
IoT – what is it? The term “IoT” was first used in ’99 and has only entered our lexicon in the last 10 years. But what does it really mean? IoT as an acronym stands for the “Internet of Things,” which could be summarized as anything that collects data and shares it over a wireless connection onto a network for processing.
What is IoT?
IoT is more than just a device connected to the internet; it includes the whole data supply chain, from the device collecting data, to the wireless network used to transfer it, to the back-end infrastructure that will analyze/process it, to the data itself and the app that you use to control or view the data from the device.
Although we may not even realize it, IoT has become more and more prevalent in our daily lives for many years now.
In recent history, the Smartwatch has become commonplace and, from a consumer’s point of view, it’s an extension of their phones. Peeling back the technology, the smartwatch is an IoT device; it collects data about the wearer such as Heart Rate, GPS location, Number of Steps, etc. and all that data gets sent to the phone and eventually the cloud, where the user can access data about their health over the course of a day. Involved in that simple watch are a local wireless connection from the watch to the phone, then a larger wireless connection from the phone to the internet, and a cloud provider where the data ultimately resides and is viewed.
Another example is what is commonly referred to as “smart home,” which includes home automation devices, from door sensors and carbon-dioxide detectors that can send us notifications when triggered, to light bulbs, TVs, or AC systems we can control from an app on our phone or hands-free using a virtual assistant.
IoT devices don’t just make our lives easier as consumers, the industry has taken up IoT to alleviate the resource workload. Many utility companies have deployed smart meters that report monthly utility consumption, which cuts costs on having to deploy a team to read the meter at every house.
Those examples are only the tip of the iceberg as there are thousands of use cases that we improved with IoT. IoT can be used to automate repetitive tasks (e.g., opening and closing a window in a greenhouse), provide insight on an environment (e.g., temperature and humidity of a building) and the status of remote location (e.g., number of parking spots available in a parking lot), and can also serve as a trigger button (e.g., panic button for your lone worker), access control (security NFC reader), and many more.
At the heart of the IoT ecosystem is the connectivity of devices – those devices being able to connect to the internet and have a path to deliver its payload. There are three general network architectures used to connect IoT devices: point-to-point, mesh network, or star network, depending on the technology used to connect them. There are dozens of protocols available to connect IoT devices. However, only a handful of them have reached critical mass that allows them more widespread acceptance (e.g., Bluetooth, NFC/RFID, etc.).
In our forthcoming “Demystifying Technology” blog posts, we will discuss different IoT use cases and scenarios to help us better understand how we can deploy a robust and secure IoT ecosystem ensuring access to data when needed.
(C) Rémi Frédéric Keusseyan, Global Head of Training, ISEC7 Group | <urn:uuid:895997dd-3de7-42e2-b578-dd9751041c9e> | CC-MAIN-2022-40 | https://www.isec7.com/2021/01/26/demystifying-technology-internet-of-things-iot/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337432.78/warc/CC-MAIN-20221003200326-20221003230326-00092.warc.gz | en | 0.947149 | 752 | 3.21875 | 3 |
We need to be able to verify both the integrity and origin of data. That is, for an e-mail message or a data archive or program stored on a server, we need to prove to a high level of confidence that the data is precisely in its original form and that it came from the claimed source. In other words, that precisely that message came from that sender or source. This combination provides non-repudiation, in which the sender cannot convincingly deny having sent that message. The technique is called a digital signature as it provides a much more trustworthy signature method than examining ink scrawls, and it is performed by a digital computer program combining two cryptographic components.
If you send a message and its hash, all you prove is that you know how to calculate a hash!
Instead, send the message and the result of
encrypting its hash with your private key.
That result is called
a digital signature.
The digital signature is:
E( H(message), Kprivate )
Anyone can calculate the hash of the received message. They can also use your public key to decrypt the digital signature. Then they compare those two results. If the two are the same, you are the sender, and the message wasn't altered.
For an application of this technology, see my page explaining how to use this to verify downloaded software and operating systems.
For a less technical explanation of how this can be used
with e-mail messages, see my page
referenced in my
explaining it to people puzzled by their clueless mail tool's
report of "unknown attachment".
Verifying Digital Signatures With PGP Tools
I use the
GNU Privacy Guard, or
It can be used from the command line, where
does pretty much everything.
(Note that the command itself is
many Linux distributions.)
And, the GnuPG package functions as a plug-in for mail tools.
One of four things must happen when you attempt to verify a digitally signed message or data file:
1: The message was signed with a trusted key, and the signature could be verified. This is what you want to see!
2: The signature could be verified, but the key is not trusted. You have the signer's public key on your key ring, but you have not yet decided to trust the validity of that key, in the sense of being able to say, "I am quite certain that this key belongs to that person or group."
You need to fix this. The problem is, how do you decide if that key really belongs to that person or organization? If you got the key from shrink-wrapped media that you purchased (for example, a set of Red Hat Enterprise Linux install media), it would make sense to believe that the keys on the media really belong to he people who sent you the media.
However, these days companies want to save money by
having you download data instead of shipping media.
So, you could check the fingerprint (a hash)
of the key you have:
$ gpg --fingerprint keyID
$ gpg --fingerprint SignersNameHere
Then call them on the telephone and read that fingerprint to them so they can verify that what you have is really their key.
Or, you could ask some trusted third party. If you're looking at Linux software, you could ask me as I might happen to have a copy of that key. Then it's up to you to decide if you should believe me.
It is hard to decide on the validity of a key — this is the hardest problem to solve in the area of digital signatures and public-key cryptography.
3: The signature could not be verified because you do not have the needed public key. You have no idea whether the data might be valid or not, because you do not have the public key needed to answer that question. It's time to go look at public key servers, and if that doesn't work, maybe ask Google.
Once you import that public key in your keyring, your problem becomes case #3 immediately above this one...
4: The signature is invalid! This is the worst possible result!
Something bad has happened! Maybe it was malicious. Maybe it was innocent (for example, you are looking at an e-mail message that passed through a Lotus Notes gateway that decided to add extra blank spaces to the ends of lines and otherwise reformat the message). Whatever the cause, something has modified the data after the digital signature was created.
Hashed Message Authentication Code (HMAC)
The sender and receiver share a secret key.
HMAC is formed as
The sender transmits message and HMAC.
The receiver performs a corresponding
to verify the message integrity and sender authentication.
Note that a digital signature requires a hash of the entire message followed by an encryption of the hash. If you are only sending one large message, the work required for the hash overwhelms that for the encryption, and there is no real computational advantage to an HMAC.
An HMAC requires only a hash. If you are sending a large number of small messages, an HMAC has a computational advantage.
So, digitally sign electronic mail messages and archives to be downloaded from web servers, as you only send or create them every once in a while. But use HMAC within IPsec to verify all IP datagrams in a rapid stream of network traffic.
X.509v3 Digital Certificates
None of this makes much sense if you cannot be absolutely certain who you are talking to!
A digital certificate is a message that says, "The public key of so-and-so is such-and-such", with that message digitally signed by an entity called a Certificate Authority (CA). Attributes of a CA:
You are absolutely certain the CA will tell the truth.
You are absolutely certain you know the CA's public key.
For instance, Verisign, Comodo, and others are CAs whose public keys have been coded into your web browsers by the software developers (and so you must absolutely trust your software providers, too).
When you download a secure page via HTTPS:
1: The server sends your browser a digital certificate signed by some CA you trust.
2: Your browser verifies the digital certificate and thus is certain that it really has the public key for some claimed identity.
3: However, anyone could send you the well-known digital certificate, we must verify that the server really is who it claims....
4: Your browser generates a random number and encrypts it with that public key. It then sends the result to the server, asking the server to decrypt it with the corresponding private key and send it back. [OK, I have simplified this somewhat for the sake of this explanation, go read the SSL/TLS specifications for the whole story if you care.]
X.509v3 is simply the format of digital certificate that everyone uses.
Also be aware that a digital certificate is proof of
identity, but not proof of intent or
Anyone could get a digital certificate for
set up a web server, and get you to tell it your
credit card numbers over an SSL/TLS connection.
I have some pages with applications of digital certificates:
Next in this series ❯ Cultural Cryptography | <urn:uuid:e8ff5599-2339-4e3d-9650-acf92df3956f> | CC-MAIN-2022-40 | https://cromwell-intl.com/cybersecurity/crypto/digital-signatures-and-certificates.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334620.49/warc/CC-MAIN-20220925225000-20220926015000-00293.warc.gz | en | 0.932156 | 1,578 | 2.765625 | 3 |
The most promising development in clean energy innovation lies in energy storage, as battery, hydro, and thermal systems are able to fill in production gaps in solar and wind energy. According to a report from Clean Technia, we don’t need to wait on some huge, groundbreaking technological fix to speed up our movement away from fossil fuels, as energy storage is quickly becoming more efficient and cost-effective.
Batteries are the newest and fastest-growing system in this energy storage development. The falling costs of solar energy and battery storage make the two energy sources so productive that they are competing heavily with natural gas peaker plants and are projected to shut down up to 10 GW of peaker plants by 2027, though some experts say this could happen as early as 2020.
Some studies claim that a plant combining solar energy with battery storage is already cheaper than a gas peaker plant. Tesla’s battery storage operation in Australia was constructed more quickly than originally thought possible and is having a major influence on battery storage pricing. Right now, it’s the biggest battery storage plant in the world, but is projected to soon be the average size as battery systems become more popular and cost-effective.
Information is the lifeblood of any company; how reliable is your storage and backup system? Keep your data safe and accessible. Download your copy of “Tips for Buying Storage and Backup Technology” today.The Technology Manager’s Guide: Tips for Buying Storage and Backup Technology
Battery storage is energy’s newest, most exciting innovation, but other storage systems can be just as effective. Hydro storage, for instance, provides Australia with more than enough energy storage, with 22,000 sites throughout the country. The battery systems are still developing and thus have productivity gaps, so hydro storage is another stable way to keep energy production clean and renewable
But the storage of energy does not function alone, as its main purpose is to fill the gaps in productivity for clean energy sources. Battery and hydro storage paired with solar and wind farms are quickly becoming Australia’s go-to energy solution on the path to eliminating fossil fuels and one day relying 100% on renewable energy.
The forecast for these new energy solutions is not quite as optimistic in the US as in Australia, but the States could soon see significant transitions from natural gas to clean storage, with one study supporting a goal of 80% renewable energy. There are about 50 upcoming projects that could altogether provide 40 GW of new, clean storage. | <urn:uuid:ecc3d5ee-bf60-4553-916c-d7bfe6bc5c16> | CC-MAIN-2022-40 | https://mytechdecisions.com/it-infrastructure/clean-energy-lies-storage-options/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334992.20/warc/CC-MAIN-20220927064738-20220927094738-00293.warc.gz | en | 0.95452 | 508 | 3.03125 | 3 |
Winning the fight against cancer may end up being more of a nano-war than a surgical strike. A team led by the National Institute of Standards and Technology (NIST) has just successfully combined an antibody with single-walled nanotubes to create a precision search-and-destroy weapon that targets aggressive forms of breast cancer.
These tiny dual-mode weapons strike at the molecular level, delivering the kill in two ways: The antibody attacks the HER2 protein (an overabundance of which is associated with fast and deadly tumors); and the nanotubes detect and blow up invading tumor cells.
In brief, here is how the nanotube weapons work: Nanotubes attached to the antibody also link to the tumor, thereby effectively detecting and targeting HER2 breast cancer cells. Near-infrared laser light at a wavelength of 785 nanometers reflects intensely off the nanotubes, and this strong signal is easily detected using a technique called “Raman spectroscopy.” Increase the laser light’s wavelength to 808 nanometers, and it will be absorbed by the nanotubes, incinerating them and anything to which they’re attached — in this case, HER2 tumor cells.
Boom, baby! Take that you serial-killing cancer!
Building a Nanotube Aresenal
This science may sound immensely cool, but achieving the desired outcome is no small feat. For one thing, the antibody must be loaded with nanotubes that are about 90 nanometers long, or 5,000 times shorter than an amoeba. So how are we doing in producing short nanotubes in sufficient numbers to defend all the cancer-riddled human bodies?
“The fact is that carbon nanotubes can be produced in bulk today, but their relevance to real-world applications is often hindered because the delivered product is in the form of a powder, like shaved pencil lead,” Peter Antoinette, president and chief executive officer at Nanocomp Technologies, told TechNewsWorld.
Turns out pencil shavings of short nanotubes are perfect for tiny cancer explosives. The longer nanotubes are better for manufacturing bigger stuff such as CNT conductive wire and cable shielding, electromagnetic interference (EMI) shielding panels, thermal spreaders, high-strength composite structures and ballistic protection material.
“The real trick is to use longer nanotubes to manufacture deliverable ‘macrostructures’ that not only carry forward the attractive properties of each individual nanotube, but can also be inserted easily into existing applications and manufacturing processes,” explained Antoinette.
“We can currently produce kilometers of yarns and hundreds of square feet of finished CNT material per week, and are developing our path to scaled-up production,” he said.
Nanocomp focuses on nanotube materials in the form of a yarn or sheet.
However, NIST doesn’t want to wrap up the whole human body in nanotubes. Most likely, the loaded short nanotubes will be administered by injection, although delivery methods have not yet been disclosed. Still, the thought of taking a shot to cure breast cancer in the near future is a wonderful dream.
Which Comes First – the Chicken or the Nanotube?
HER2 is one of a whole family of genes that handle traffic control in the growth and proliferation of human cells. Normal cells carry around two copies of this gene. About a fourth of breast cancer cells carry around multiple copies of the gene, which leads to way too much of a HER2-encoded protein.
The antibody against this stuff is cooked up in chickens. It’s called “chicken immunoglobulin Y” (IgY). Kind of looks like it should be called ‘Iggy’ with that tag, but no, it’s chicken IgY. Anyway, chickens resemble humans — well, not at all — which is why chicken antibodies are so perfect: They react very strongly against the HER2 proteins on tumor cells and completely ignore other human proteins in normal cells.
The broad genetic difference between the species allows the antibody to be more precise in identifying a very specific foe as it is less confused over similar human proteins.
The chicken antibody is then attached to the nanotube and sent on its search-and-destroy mission.
Next in the Nano Wars
NIST scientists conducted the experiment in laboratory cell cultures and reported their findings in a paper published in the BMC Cancer journal. Using the HER2 IgY-nanotube complex to selectively identify and target HER2 tumors, they achieved a nearly 100 percent eradication of cancer cells, while nearby normal cells remained unharmed. In comparison, there was only a slight reduction in cancer cells for cultures treated with the anti-HER2 antibody alone.
The research is being funded under an interagency agreement between NIST and the National Cancer Institute (NCI), and in part by a grant from the National Science Foundation. In addition to the NIST researchers, the team included scientists from Rutgers University, Cornell University, the New Jersey Institute of Technology, NCI and Translabion, a private company located in Clarksburg, Md.
The next step is for the team to conduct the same experiment in mice to see if they get the same results in animals as in lab cultures. If every stage is successful, they will continue to move through the standard testing procedures until they eventually reach the human testing phase.
Meanwhile, in a separate but related project, the team is testing a similar nanotube-antibody complex targeting MUC4 to treat pancreatic cancer. | <urn:uuid:eb46b1b1-d942-413d-b952-c6b7df55713e> | CC-MAIN-2022-40 | https://www.ecommercetimes.com/story/can-nanotech-cure-breast-cancer-68790.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335257.60/warc/CC-MAIN-20220928145118-20220928175118-00293.warc.gz | en | 0.932444 | 1,176 | 2.8125 | 3 |
Diversity, equity, and inclusion endeavors in the workplace, also known as DEI, are of key importance, especially today. With the pandemic, work from home or hybrid working, and global social justice campaigns; business and strategy leaders are focusing on implementing changes to support and drive the movement.
But what is DEI?
Diversity refers to the representation of a range of traits and experiences within a company’s workforce. The characteristics included are, gender, race, physical ability, religion, age, and socioeconomic status, among others.
Even if a company’s workforce is diverse, employees may still feel undervalued.
Inclusion refers to how people feel as part of a company’s workforce; if they feel safe, respected, accepted, encouraged, and valued.
Combined, can these two components drive better business outcomes?
According to this Harvard Business Review study conducted in 2018, companies with higher-than-average diversity had 19% higher innovation revenues. The findings showed that the most diverse companies are also the most innovative, enabling them to market a greater range of products to customers. HBR calculated each company’s diversity across six major factors: migration, industry, career path, gender, education, and age.
On May 10, Chicago-based Professional Diversity Network held the Minneapolis Diversity Virtual Career Fair, enabling job candidates from anywhere around the world to easily connect with employers that are committed to diversity and are seeking to hire diverse candidates through this virtual environment for jobs that can be done remotely or on-site.
The event was designed to provide the opportunity for job candidates to view and apply for an array of positions from top leading companies. Participants learned and interacted with businesses about jobs that are available, their company culture, and why diversity in the workplace is so valuable to them. Then, potential candidates spoke live with employers or recruiters and were interviewed through chat and audio conferencing.
“Initiatives like the Virtual Career Fair are exactly the kind of creative ideas we need to further the success of organizations by assembling talented teams with different backgrounds and experiences, a practice proven to be effective in building high-performance cultures,” said Bita Milanian, SVP Global Marketing at Ribbon Communications, who has been active in advocating for workplace diversity for decades. “Navigating through the pandemic, we have learned valuable lessons, and have become more enlightened in the tech industry, and it is heartwarming to see those lessons evolve into actions like these.”
This event gave job candidates the chance to showcase their skills and talents based on their knowledge and experience, simply by registering for the event and uploading their resumes. Employers that are interested are then able to reach out and send interview invites before the event while candidates are free to source hundreds of jobs available and apply in real-time.
Some of the companies participating in the event this year included Verizon, Wells Fargo, Xcel Energy, ACR Homes, Allianz, American Academy of Neurology, Cambrex, Microsoft, and Tennant Company, to name a few.
“Highly successful companies, like the ones who participated in the event this week, have proven that when we cultivate talent and embrace diversity at the same time, teams become more inspired, loyal, and productive,” Milanian said. “We’ve seen this work at Ribbon Communications, through our robust DEI and sustainability programs. Organizations across all private and public sectors stand to benefit in attracting, retaining, and motivating team members when they step up and commit to diversity, equity, and inclusion with tangible, actionable programs that can be measured and matured into a future where there will be no success without a commitment to providing opportunities to all those who wish to contribute, excel and make their own difference in the world.”
DEI, hiring, and other related topics are top of mind for business leaders today as they navigate new workplace and workforce environments, and are part of the many business strategies in focus at ITEXPO 2022 and Future of Work Expo 2022, both taking place June 21-24, 2022 in Ft. Lauderdale, Florida. ITEXPO and Future of Work Expo are part of the #TECHSUPERSHOW experience, bringing buyers, sellers, and partners together to discuss the latest in the business tech space.
Future of Work Contributor
Pillr is a cybersecurity operations solution designed and engineered to address the evolving demands of IT teams and service providers.
Rossum revealed new email automation capabilities to help customers manage and proactively respond to document communication tasks.
Owl Labs and BlueJeans by Verizon will provide access to Owl Labs' meeting technology combined with BlueJeans Meetings video conferencing software lic…
Organizations are leaving behind the "cube farm" idea and instead moving to activity-based-working (ABW).
The Milwaukee Bucks partnered with AI provider GameOn to bring a conversational AI chat experience to Bucks fans. | <urn:uuid:36dac83a-6958-4d13-aa43-3c0898b6b75a> | CC-MAIN-2022-40 | https://www.futureofworknews.com/topics/futureofwork/articles/452326-will-work-from-home-enhance-dei-initiatives.htm | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335396.92/warc/CC-MAIN-20220929225326-20220930015326-00293.warc.gz | en | 0.94921 | 1,015 | 2.578125 | 3 |
Computers, just like any other electronic device, need regular maintenance. The same way that you attend your annual medical check-up, your computer also needs scheduled maintenance of its computer hardware to ensure that you are doing everything you can to extend its lifespan. This is what many people call preventative maintenance.
However, before dwelling on the purpose of preventative maintenance, first consider the basic questions of: what is computer hardware, and what is in the preventative maintenance program?
What is Computer Hardware?
All the physical components of the computer are hardware. This includes your keyboard, hard drives, internal CD or DVD drive, fans, and etc. All of these components are included in computer hardware maintenance.
What is Included in Preventative Maintenance?
Most people believe that preventative maintenance programs are only useful for visible components of your computer. However, computer hardware maintenance also includes the not-so-visible components of the computer, as well.
Typically, preventative maintenance is performed at both the system and physical levels.
Physical Level Maintenance:
This is where the physical components of the computer are cleaned. Clean the keyboard to ensure that you remove the dust sitting between the keys. It’s important to remove and clean the fans that help maintain CPU temperature. Also, wipe off the monitor and blow out the dust sitting inside the CPU.
Make sure you complete this entire cleaning process carefully. Using any type of liquid or solvent can cause damage to the physical parts – instead, use a soft cloth and the right type of solvent. During the maintenance process, do not expose physical components to extreme temperature changes.
System Level Maintenance:
System-level maintenance ensures that your operating system runs in an optimized manner. Check your hardware drivers, and download and install their latest versions. If you’re using any software, it is best to have the upgraded and latest versions. There are also a lot of programs on your system that you most likely do not use – remove these programs and clean up your disk space so that you can install more useful programs.
Most computers today have anti-virus and anti-malware protection installed. However, these are often outdated, and do not have newer security patches, which can pose a substantial threat to your operating system.
A lot of people make the mistake of not fragmenting their hard drives. This can cause a major data loss in adverse situations, and even cause a system slowdown. Defragment your hard disk and create multiple drives.
Why Hardware Maintenance is Important
It is just as important to invest in a periodic maintenance program as any other part of computer maintenance.
- Address Issues Before They Become Problems
Maintenance activity helps you detect latent issues with your computer that can grow into major problems if not addressed in a timely manner. It can also impact the performance of your computer, and give you sub-optimal output. Periodic maintenance helps you detect these problems at a system and physical level and address them immediately.
- Prevent Security Threats
Anti-virus and anti-malware software undergo an update process that ensures your computer has the highest level of protection against security threats. You don’t want to fall prey to a security vulnerability that can lead to business loss.
- Improve Speed
Defragmenting the hard drive, removing unnecessary programs, and updating system drivers improve the operating speed of your computer.
- Optimize Efficiency
With time, computers tend to slow down and become sluggish. It’s inevitable that software slows down and starts performing with sub-optimal output. Therefore, periodic maintenance can help address this and optimize the efficiency of the computer.
Designing a hardware maintenance calendar can depend on multiple things, such as the extent of the use of your computer, the type of processes performed on the computer, and etc. It can be complicated and overwhelming.
A professional can assist you with your preventative maintenance. Contact Dynamix Solutions to see how we can help. | <urn:uuid:beaee03c-7fde-4a9d-8c07-49191a8f9587> | CC-MAIN-2022-40 | https://dynamixsolutions.com/take-care-computer-computer-hardware-maintenance/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337338.11/warc/CC-MAIN-20221002150039-20221002180039-00293.warc.gz | en | 0.92784 | 808 | 3.09375 | 3 |
Securing an IIoT network certainly seems inherently challenging, yet doing absolutely nothing and allowing cyber miscreants to pilfer data is not the answer. IIoT, short for Industrial Internet of Things, connects industrial devices to one another, allowing data to be gathered and analyzed to provide important insights and make business processes that much more efficient. If IIoT security is not properly implemented, there will be a negative impact on the organization’s cybersecurity. This is also true of every other IoT network. The last thing you want is to provide hackers with many more opportunities to hack into your system and cause a litany of problems that extend well beyond the theft of data.
The Vulnerabilities of IIoT
IIoT includes diminutive sensors as well as sizable industrial equipment. When properly applied, IIoT can help drive operational efficiency in power stations, utility companies, and manufacturing plants. In fact, predictive maintenance has the potential to help considerably by keeping costs to a minimum and ensuring essential infrastructure runs as smoothly as possible.
IIoT networks are somewhat unique in that it is possible for an existing network that was kept separate for monitored and controlled devices like valves and pumps to be combined with the rest of the IT network. Though this approach has its merits, there is a risk that once connected to a more expansive network, the essential software that ran within the previously secure environment can become an open target for hackers.
The Threat is Real
An IIoT cyberattack is not just theoretical. Hackers have already transmitted malware directly to industrial networks through exploitation of web-connected sensors that provide a clear path to network access. As an example, hackers have succeeded in turning off power for entire regions of countries. The moral of this story is the more connectivity, the greater the risk. Though businesses didn’t consider the extent of their vulnerability in the 90s, the rise in attacks in recent decades is heightening awareness all the more.
If poorly installed on the network, IIoT devices will generate that many more vulnerabilities, creating additional attack vectors that might end up compromised. Furthermore, the majority of IoT products are sold with bare-bones security, meaning the onus is on the buyer to pinpoint vulnerabilities. However, even if patches are provided to account for security vulnerabilities, manual updates are necessary, meaning the devices will not be fortified exactly as they should. The end result is a vulnerability that presents the attacker with an opportunity to wreak havoc.
Updating IIoT products or equipment is essential. Furthermore, it is prudent to change the default login credentials and passwords as they provide easy network entry. Even if the system is quite complex, default passwords are an open backdoor that will cause problems. It is also important to understand which sensors are installed on the network so there is an awareness of potential vulnerabilities.
Read more: Ways IoT Is Changing Digital Marketing
Software can also be used to analyze the network and obtain real-time information as to which devices and sensors are active within the IIoT security environment. Though manual patching takes effort, it is worth it because sensors and devices are essential components in industrial environments. In some instances, it is necessary to shut down the industrial site so the appropriate security patches can be added.
Hyper-Awareness is Essential for IIoT Security
IIoT security threats continue to evolve. Additional devices connected to the network, such as a computer that accesses the web, can lead to malware by clicking an attachment. If IIoT is used at your business, you should consider micro-segmentation with separate networks for additional protection. Keep an open mind, pivot accordingly and you will have done your part to keep your systems safe. | <urn:uuid:d7116c26-641c-41b2-b84f-4673eeba34e3> | CC-MAIN-2022-40 | https://iotmktg.com/why-need-strengthen-iiot-security/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337446.8/warc/CC-MAIN-20221003231906-20221004021906-00293.warc.gz | en | 0.94803 | 757 | 3.03125 | 3 |
The requirement to capture data from documents and online forms will undoubtedly always exist. The traditional data entry model, on the other hand, does not have to and can already cease existing. While the conventional models are fading and dying, the world is also facing the emergence of automation technologies that will usher faster, more precise, and less expensive to complete the same data entry activities.
- 1 Introducing: Machine Learning-Enhanced OCR!
- 2 Introducing: Robotic Process Automation!
- 3 Conclusion
Introducing: Machine Learning-Enhanced OCR!
Data entry services have been increasingly important in recent years as their value as a cornerstone of data and analytics infrastructure has become increasingly recognized. Although some may be concerned that automation and smart technology will rob workers with routine skillsets of their jobs, artificial intelligence is poised to transform the data entry business.
Machine Learning, in its most basic form, is a method of data analysis that automates the generation of analytical models. Machine learning helps computers access latent insights by employing algorithms that study and learn from data regularly, without the need for programming programs that expressly seek them out.
Unlike traditional OCR, which frequently requires human intervention to obtain complete data gathering and error-free final findings, Machine Learning, a type of Artificial Intelligence, eliminates these time-consuming activities.
Furthermore, Machine Learning Enhanced OCR enhances traditional OCR by integrating context and flexibility. For the following reasons, this technology has become a vital component of a variety of growing and established industries:
ML Constantly Improves its Comprehension and Treatment of Data
Machine Learning can effectively handle and assimilate an infinite amount of data with speedy analysis and assessment.
This method facilitates the evaluation and tweaking of communications based on previous consumer interactions and behavior. Thus, it is possible to discover relevant variables after a model has gotten developed utilizing many data sources.
Continuous ML Implementation results in More Complicated Decision-Making with Fewer Mistakes
OCR technology powered by machine learning can help preserve a smooth workflow by providing outstanding text recognition accuracy. Organizations can automate data entry, eliminate manual processing, and deal with various data sets in real-time, resulting in reduced workloads, faster processing, and more accurate data outputs.
ML does not Rely on Manual Processes
Algorithms for machine learning have a penchant for operating quickly. Because of the speed with which it consumes data, machine learning can tap into emerging trends and deliver real-time statistics and forecasts without the need for manual data entry processes.
ML can Manage both Structured and Unstructured Data
OCR Machine Learning can assist with a variety of data formats and languages as well. When it comes to languages, most traditional OCR solutions require dedicated translators for each processed language. On the other hand, ML’s translation capabilities are all-in-one, allowing businesses to translate languages in real-time seamlessly.
Unlike traditional OCR, Machine Learning OCR can “learn.” If machine learning is unable to interpret some data sets, a human can intervene to verify them. This option also offers the added benefit of “teaching” ML how to handle this process in the future if it encounters a similar issue, following the instructions and automatically executing the interpretation process.
Introducing: Robotic Process Automation!
Aside from Machine Learning-Enhanced OCR, Robotic Process Automation also holds great potential in Data Entry.
Businesses may program RPA to do repetitive human tasks intelligently. RPA operates on computer programs for businesses and connects with the business model the same way humans do.
Moreover, RPA is well-suited to enhancing the quality of high-volume, rules-driven, and programmable tasks. They cannot, however, assist with activities that need high-level decision-making and human intellect.
This technology has become a significant component of a range of rising and existing industries for the following reasons:
RPA can Execute Basic and Sophisticated Lookups as well as Transport Data between Legacy Systems
RPA can help firms extract data from common forms in the correct format with the right intelligent data capture technology. The processing server gathers photos, processes raw data, and sends the results to an RPA bot with programmed intelligence.
Finally, even if it requires keystrokes to function, the bot subsequently transfers this well-formatted data to the company’s legacy system.
RPA can Learn through Observation and Mimic Human Decision-Making
A robot controller assigns these jobs to a company’s bots and supervises their operations after getting instructions from the developers. Moreover, they make sure that businesses get a clear picture of their bot’s performance.
RPA Automates Repetitive Operations 24 hours a day, 7 days a week
IRPA implementation provides tremendous benefits to a variety of businesses. Because it improves processes and infrastructure, it broadens the capacities of employees. RPA software breaks down complex manual data into smaller, more actionable parts and can do so 24/7. This deconstruction ultimately aids firms in making more informed judgments about critical business metrics.
Data Entry is the cornerstone of a company’s data and analytics infrastructure relevant for critical business decisions. Although data entry remains and will remain essential in the years to come, that does not mean that traditional data entry methods should remain.
With newer technologies, such as Machine Learning-Enhanced OCR and Robotic Process Automation, companies can do away with monotonous, rote tasks to focus on core business functions, and ultimately, steer the company towards exponential growth.
Reach out to our team today! | <urn:uuid:705384bb-6fec-4f8f-b58f-78e5536432ab> | CC-MAIN-2022-40 | https://itechdata.ai/rip-the-death-of-data-entry/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337446.8/warc/CC-MAIN-20221003231906-20221004021906-00293.warc.gz | en | 0.920491 | 1,213 | 2.71875 | 3 |
Halloween is right around the corner and, although millennials may not be doing much trick-or-treating these days, there are still several scares that they’ll need to keep an eye on. High on the list of frightening facts is how this generation deals with cybersecurity. It turns out that a freakishly high percentage of millennials tend to kick caution to the curb when it comes to protecting personal information. Understanding why this is such a widespread issue and what can be done to improve it will go a long way towards mitigating the terror that technology can sometimes present.
Millennials are touted for being a truly tech-savvy bunch, yet, a significant portion can’t decipher a fraudulent message from an authentic one. A recent UK campaign backed by the government showed that only 10% of internet users can spot a fake message, despite nine out of ten claiming they would be able to tell the difference. To be fair, this is partially due to a hectic way of life where some tech users do their grocery shopping on the phone or buy the latest popular items online as they wait for coffee, leaving barely more than a fraction of a second to decide where to click. Still, these numbers are enough to spook anyone at first glance, but understanding how this group approaches technology can shed some light.
At a high-level, it boils down to perspective. Millennials have a vastly different approach to technology than their predecessors, in that they tend to be more focused on advantages rather than risks. This is akin to being excited about an overflowing bucket of Halloween candy, disregarding that it was shipped by Freddy Krueger. Such a laser focus can give users an edge when it comes to maximizing the benefits of the digital landscape, but it also means that they can also be too lax about protecting their personal data. This is not a rare problem; even Mark Zuckerberg isn’t immune.
Zuckerberg was hacked in 2016 after reusing the same password for several social media accounts. The password wasn’t the strongest either, (it was “dadada”) which is as simple as many of the passwords out there. First Data also supports that Zuckerberg is in good company; an alarming 82% of his generational counterparts are guilty of having the same passwords across various websites and apps.
During Halloween, millennials might search endlessly for a novel costume to wear to each party but are clearly not taking the same care when crafting passwords for their accounts. The same report also notes that users commit another password-safety sin: sharing passwords with non-family members (23%) and never changing passwords (20%). Additionally, because they grew up with technology, they tend to feel more comfortable with it, which often creates a false sense of security.
To top it off, there seems to be a severe lack of interest in cybersecurity as a career among this generation. This means that there are – and will continue to be – gaps to fill in the cybersecurity field – on the flip side, cybercriminals don’t have this job-shortage problem. Millennials currently represent the largest segment of the U.S. population, with 84.3 million people, and by 2040, they are expected to account for an even greater number.
Considering this, it would benefit millennials to educate themselves on this topic. Still, while they do care about security, their strategies and priorities are different than those of other population segments.
Fending off the fright
For example, millennials tend to do more research to find automated solutions that can protect data and online activities so that they don’t have to do it themselves. Two of the most popular measures are using the cloud and creating additional passwords. Rather than ramping up personal cybersecurity knowledge, they are moving data to cloud services, as big security brands can provide better protection.
As for passwords, millennials are the group with the highest number of distinct passwords; between three to five. On top of that, at least one-third of millennials are also more likely to use password managers to organize credentials and to implement two-factor authentication, than other age groups – which are huge security boosters. While the millennial perspective on technology takes a research-driven approach, there’s certainly room for improvement. This involves taking additional measures to protect against incoming threats, ensuring that passwords are not only changed frequently, but are also complex and exclusive to one account, proceeding with caution when signing up for new services, and generally playing a more active role in securing personal data. | <urn:uuid:b3ff871e-cb0e-4671-ab24-f434f6ba77b7> | CC-MAIN-2022-40 | https://www.cpomagazine.com/cyber-security/why-millennials-are-likely-to-get-tricked-this-halloween/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337595.1/warc/CC-MAIN-20221005073953-20221005103953-00293.warc.gz | en | 0.962673 | 924 | 2.515625 | 3 |
Is the Mandela Effect Leaving Your AWS Data Vulnerable?
Humanity’s collective memory can be a little fuzzy. Case-in-point: The Mandela Effect, or the phenomenon of large groups of people misremembering the same fact or event in a similar way. The effect is named after Nelson Mandela, who many people believe died in prison in the 1980s despite the fact that he actually went on to become President of South Africa in the 1990s.
A Few Examples of the Mandela Effect:
- Although most people remember childhood character Curious George as being drawn with a tail, he does not have one.
- Many people believe there is a brand of peanut butter called “Jiffy” but there is not. There’s Jif, and there’s Skippy, but no Jiffy.
- In the iconic scene in “Silence of the Lambs,” Hannibal Lecter does not say “Hello Clarice” as many people remember. He says “good morning.”
The Mandela Effect in the Cloud
The Mandela effect is not just a pop culture phenomenon, it exists in the world of enterprise technology, too. It’s a common misconception that data stored in the cloud is automatically backed up. The truth is that cloud providers operate on a shared responsibility model, in which the cloud provider is responsible for the security of the software, hardware and infrastructure, but not customers’ data.
Shared Responsibility Model from AWS
For example, when AWS refers to their highly durable Amazon S3 service, it’s true that the platform is incredibly durable, which means you’re unlikely to experience any trouble with their systems or infrastructure. However, this has nothing to do with protecting your data from accidental deletions, ransomware attacks or insider threats. According to the shared responsibility model, that’s your job.
Taking Steps to Protect Your Cloud Data
Understanding your part of the shared responsibility model is your first step. Figuring out how to protect your data is the next. Here are a few resources for your consideration:
- Step through an explanation of the different uses of replication, snapshots and backup in data protection in Clumio Systems Engineer Nic Hernandez’s blog.
- Many Clumio customers have first tried to build their own data protection solutions, before deciding that it wasn’t the best use of their resources given that a simple, cost-effective solution is available to them. Charles Goforth considers the DIY angle in his blog.
- Another misconception is that a third party data protection solution adds cost to the business. But that doesn’t take into consideration savings realized with management cost reduction. ESG conducted an Economic Value Validation Report to help organizations evaluate the potential savings with Clumio. Spoiler: enterprises have saved millions with Clumio over other backup solutions.
Customers love working with Clumio because they get air-gapped, immutable data protection with incredibly fast time to value while SaaS simplicity ensures fast and simple ingest, cataloging, search and recovery at any scale. Clumio allows customers to automate protection of their AWS and Microsoft 365 data, saving time and resources that allow them to focus on strategic initiatives and reducing TCO compared with other options.
Don’t let data protection misconceptions leave you vulnerable. Say goodbye to The Mandela Effect and get The Clumio Effect. | <urn:uuid:6a462d6e-8bcb-4427-9d4c-21dd599e6293> | CC-MAIN-2022-40 | https://clumio.com/blog/is-the-mandela-effect-leaving-your-aws-data-vulnerable/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335424.32/warc/CC-MAIN-20220930020521-20220930050521-00493.warc.gz | en | 0.935957 | 704 | 2.734375 | 3 |
This article is the first in our series on the common Unix commands every Mac admin must know.
In a world of endless possibilities where everyone seeks to work smarter not harder, IT and system admins cannot afford to be left behind. A great way for Apple Admins to become smart is to master as many commands as possible.
Terminal.app is a utility that gives the admin direct access to the Unix underpinnings of the macOS operating system. It lets the admin perform tasks quickly and efficiently on the local computer (directly or remotely). All you need to do is to send a few text commands, and you can make your way through both simple or complex tasks easily. It is the magic that saves you time and makes you more efficient. Therefore, we have decided to explore some of the most important macOS commands in this series.
In this article, you will learn how to enable SSH for accessing a remote Mac’s shell securely.
What Is SSH?
SSH — also known as Secure Socket Shell or Secure Shell — is a secure network protocol that allows users, especially system admins, to securely access remote devices. It encompasses a cryptographic network protocol and the suite of utilities that implement the protocol. SSH encrypts the communication with a remote system by utilizing a pair of SSH keys which are cryptographic in nature and made up of a public and private key pair. The keys work collaboratively to provide authentication between the client and the remote system.
SSH keys can and should be used in any situation where there is an unsecured network. Aside from providing strong encryption and secure remote connections, SSH encrypts the data during file transfers or while securely managing network infrastructure components. In addition, it can be configured to allow port forwarding by mapping the default SSH port to an available port number on the destination.
How SSH Works in Mac
Secure Shell leverages a client-server model to connect an SSH client application (where the session is displayed) with an SSH server (where the session runs). SSH has three layers:
- The transport layer, which establishes secure communication between the client and the SSH server.
- The authentication layer, which sends the supported authentication methods to the client.
- The connection layer, which manages the connection between the client and the server after a successful authentication.
To establish a connection with an SSH server, the client needs to initiate a request with an SSH server. Once the server receives the connection request, encryption negotiation begins. The server sends a public cryptography key to the client and the key is used to verify the identity of the SSH server. Afterwards, the server negotiates parameters and creates a secure channel for the client. Finally, the client logs into the server.
Enabling SSH to Securely Access a Remote Mac’s Shell
SSH remote login to an Apple computer is disabled by default. In this section, we will take you through the process of enabling SSH.
- Open the Terminal app on your MacBook.
You can do this by searching “terminal” using the Spotlight search option of your computer or navigating through Applications > Utilities > Terminal.
- Enter and run the command.
To enable SSH, enter and execute the
-setremotelogin command as follows:
sudo systemsetup -setremotelogin on
It is necessary to add sudo because the command requires administrator privileges. You will be required to input your user password when you run the command. Provide the password and press enter (as shown in Figure 1 below).
Note: In Mac, SSH is also known as Remote Login.
- Check if SSH is enabled.
Once you complete step 2, you will not get any message to confirm that SSH has been enabled. However, you can use a command to know if SSH has been successfully enabled. Simply run and execute the following:
sudo systemsetup -getremotelogin
If SSH is on, you will get a message that reads “Remote Login: On” (refer to Figure 2).
Want to Disable SSH?
While you have now learned how to enable SSH, it’s equally important to know how to turn it off in case you wish to disable any remote login in future. The process of disabling SSH is similar to the process you followed to enable it.
Simply open the terminal app and run the following command:
sudo systemsetup -setremotelogin off
After successfully executing the command, you will get a question: “Do you really want to turn remote login off? If you do, you will lose this connection and can only turn it back on locally at the server (yes/no)?” Refer to Figure 3.
Type “yes” to confirm. This will disable SSH and disconnect any active SSH connections on your MacBook. Meanwhile, if you want to bypass being asked a question of yes/no anytime you try to disable SSH, you can use the
-f flag to force the command to execute immediately and without the prompt.
sudo systemsetup -f -setremotelogin off
To confirm if SSH is off, run the command:
sudo systemsetup -getremotelogin
You should get a message that reads “Remote Login: Off” (as shown in Figure 4).
As stated earlier, SSH is a cryptographic network protocol used to establish a secure, encrypted connection between two computers. In this article, you learned how to enable or disable SSH by running a command in the terminal app. Enabling SSH will allow you to remotely connect your macOS device, transfer files, and perform admin tasks securely.
There are two other ways you can enable SSH for macOS devices:
- Turn on SSH in the GUI by going to System Preferences > Sharing > Remote Login.
- Leverage the Commands tab in the JumpCloud Directory Platform to enable SSH across your fleet.
Overall, SSH keys provide a more secure and convenient way to authenticate remote systems than the conventional username/password approach. To ensure the authorization each SSH key has is accurate, it’s important to deploy the right management tool and put sound policies in place. Simplified SSH key management is one of the many ways IT admins can make their lives easier with our cloud directory platform. Sign up for JumpCloud Free today to test out the possibilities in your own environment, no credit card required. | <urn:uuid:39428915-77b9-4a6d-aa71-a771425514f6> | CC-MAIN-2022-40 | https://jumpcloud.com/blog/how-to-enable-ssh-mac | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335609.53/warc/CC-MAIN-20221001101652-20221001131652-00493.warc.gz | en | 0.900014 | 1,310 | 2.796875 | 3 |
Driverless cars and the entire infrastructure this technology will bring with it pose a serious security threat, a new report (opens in new tab)claims (PDF).
The report from the Institution of Engineering and Technology (IET) and the Knowledge Transfer Network (KTN) calls for greater industry collaboration and transparency in developing connected and driverless cars, saying it will help ensure that future autonomous vehicles are safe from cyber threats.
The research on automotive cyber security is based on research and consultation with the industry.
It reviewed the progress made in developing technology like driverless cars and identified possible threats, including; personal data theft, fraud and deception (altering or deleting schedule logs and records), freight and goods theft, automotive ‘hacktivism’ – cyber infiltration of a vehicle’s systems that is politically or ideologically motivated - immobilisation and inflicting disruption, damage and even injury out of spite.
Dr Mike Short CBE from IET said: “Connected vehicles will significantly transform our driving experiences by making travel safer, more comfortable and efficient. Cars are becoming connected to the web via wireless for emergency rescue, and navigation services. However, this raises new challenges for connected cars - and those in and around them – that may become exposed to potential risks from online threats.”
“It is vital that the digital benefits and security are designed into the vehicle in ways that are both trusted and understood by users.”
However, the report also outlines the potential benefits including safer, more efficient transport and a potential boom for car sales due to new selling points.
The review is based, in part, on inputs from the Automotive Cyber Security Thought Leadership event, attended by more than 50 experts from a range of engineering and technical disciplines. | <urn:uuid:df7da77b-e1aa-4c33-87ca-924315bceb70> | CC-MAIN-2022-40 | https://www.itproportal.com/2015/03/09/industry-collaboration-essential-security-driverless-cars/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337339.70/warc/CC-MAIN-20221002181356-20221002211356-00493.warc.gz | en | 0.954092 | 358 | 2.671875 | 3 |
National Hurricane Preparedness Week, sponsored by the National Oceanic and Atmospheric Administration National Weather Service, is the time to prepare for a potential land-falling tropical storm or hurricane. This year, National Hurricane Preparedness Week is taking place May 9-15, 2021.
Hurricane Preparedness Week provides an opportunity for people across the country to prepare now for hurricane season. It educates the public about tropical cyclones, which are among nature’s most powerful and destructive phenomena.
If you live in an area prone to tropical cyclones, prepare. Even areas well away from the coastline can be threatened by dangerous flooding, destructive winds and tornadoes from these storms.
Today you can determine your personal hurricane risk, find out if you live in a hurricane evacuation zone, and review/update insurance policies. You can also make a list of items to replenish hurricane emergency supplies and start thinking about how you will prepare your home for the coming hurricane season.
The materials are available here.
The NOAA’s website also features links to helpful resources including a Social Media Plan, Hurricane Prep During COVID-19 from the CDC, information to public shelters during COVID-19, current forecast information, hurricane safety, how to deliver the right message, and Hurricane Safe videos. View all. | <urn:uuid:e5eece3d-e45f-4890-a53e-a1f2a1b42144> | CC-MAIN-2022-40 | https://continuityinsights.com/2021-hurricane-preparedness-week-scheduled/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337473.26/warc/CC-MAIN-20221004023206-20221004053206-00493.warc.gz | en | 0.911716 | 266 | 2.828125 | 3 |
Security Engineer vs Architect: What’s the Difference?
Building software is similar to constructing a building. Individuals with related, but different skills are needed to conceptualize and then deliver the application. The system security architect works at a high level and creates a blueprint for how all corporate applications will function.
The system security engineer takes those plans and applies them to individual applications by using development tools to create the final product. In order for an application to run successfully, each individual needs to understand their own role as well as what their co-workers provide.
Let's take a look at what each of these roles entails, the crossover and differences, and why they are valuable to organizations.
Cybersecurity Attack Surface Expands
In the past, security centered on the network perimeter. Enterprises placed various products, such as firewalls and anti-virus solutions at the edge of the network — and tried to ward off intruders before they gained access corporate resources.
Times change, and hackers found ways to circumvent network checkpoints. Nowadays, attacks come at every level of the traditional seven layer application model from the bottom network portion to the top, the application layer.
And there are plenty of holes. Cybercrime cost businesses over $2 trillion in 2019, according to Juniper Research. Consequently, enterprises must put checks in multiple places to secure corporate information — and that requires cybersecurity pros.
Security Architecture Touchpoints
Because of the growing number of entry points, designing enterprise security solutions has become exceedingly complicated. In fact, large enterprises typically have one to two dozen different security products. The outsiders look for the weakest point in the security chain, so these products must work cohesively and comprehensively.
So, thought and foresight are needed to close up all of the possible holes. Organizations create security frameworks, blueprints that outline what potential security breach might occur in each possible entry point. Next, they put tools in place to protect that potential gateway.
For example, previously corporations left information that was sitting in data center storage systems unencrypted. But if a hacker bypassed security checks and wormed their way into the system, they gained access to the corporate jewels, all of its confidential information. As a result nowadays, organizations have tools that encrypt information even when it is at rest.
To secure information and data, organizations need technicians with a wide range of skills. Two jobs, security architect and security engineer, are in high demand. Indeed.com lists about 21,000 open positions in the U.S. for the former and 78,000 for latter.
Here is what each position, which typically requires at least half a dozen and often 10 years or more of experience, entails.
What Does a Security Architect Do?
A security architect works at a high level. They design frameworks that ward the bad guys off at every possible entry point. They examine all of the system elements and make sure that they work together to prevent intrusions. Security architects create policies, standards, procedures, and documentation designed to work across all departments and for all applications. In essence, they design the entire building.
As a result, they need to have working knowledge about many different system components: information security programs, IT operations, and identity and access management. They also are responsible for organizations' security training and awareness, IT general compliance controls and reports, incident response, disaster recovery, data privacy, and and system risk.
The reality is that company information is under constant attack. A hacker probes a system somewhere every 39 seconds, according to a study at theUniversity of Maryland. So, security frameworks need to not only put checks in place to ward off hackers, but also create procedures that determine how well those checks are working.
Security architects develop business processes that constantly investigate potential problems, find the root cause of security events, and mitigate the potential damage if a breach occurs.
Required Skills for Security Architects
These techies identify, proposes and initiate improvements to the organization's security posture. They possess deep understanding of security trends and strategies and identify security solutions that meet business objectives. The position requires special skills, including security certifications like:
Information Systems Audit and Control Association (ISACA) Certified Information Security Manager (CISM)
Global Information Assurance Certification (GIAC) Security GSEC
In order for a security framework to be effective, everyone in the organization needs to understand it and take steps to ensure that they do not make a careless mistake and become the weak link that opens the door to an intruder.
So, security architects must work with coworkers to educate staff and create corporate awareness about cybersecurity dangers and prevention. Their responsibilities include:
Manage cross-functional cybersecurity and compliance projects.
Articulate complex technical security issues into business terms and share that information with stakeholders in different departments.
Work closely with internal auditing, legal and the IT teams to understand regulatory requirements and put systems in place to ensure compliance.
Provide management with up-to-date information on different threats and security vulnerabilities that organizations may face and ensure solutions are in place to mitigate those risks.
What Does a Security Engineer Do?
Security engineers implement the plans. In essence, they are the builders. They work with the applications and development tools, link all of the various components, and get companys' business applications running. Their experience with security products must be deep — and they are paid accordingly.
The bulk of their days are spent working on individual application deployment and troubleshooting issues. Their responsibilities typically includes working with a wide range of solutions and having practical, hands-on experience in many areas:
Operating systems like Linux and Microsoft Windows
Cloud platforms like Amazon Web Services, Microsoft Azure, and Google Cloud Platform
Programming and scripting languages such as Java, Python, Perl
Security tools like Kali, Nessus, Netsparker, openVAS, BurpSuite, and Metaspolit.
Mobile systems like Apple iPhone and Google Android, as well mobile secure design principles such as Open Web Application Security Project (OWASP)
Compliance is a major concern nowadays, especially as governments become more proactive in ensuring that individuals' personal information is not compromised. Security engineers need familiarity with technology risk management related frameworks, such as RMF, NIST 800-53, ISA/IEC 62443, UL CAP, ISO 27001, GDPR, CSL, CSA, SOC 2.
Security engineers cannot ignore the big picture. They must understand the data protection basics, including securing cloud services, especially Amazon Web Services data security, and network and system infrastructure design principles.
They analyze cybersecurity, intelligence and information technology policies and search for gaps. Also, they must know how to conduct penetration testing and reverse engineer software when necessary.
Required Skills for Security Engineers
Some of the most desired skills for security engineers include operational vulnerability analysis, incident response and analysis, pen testing, real-time network analysis, and digital forensics. Security engineers often participate in hackathons, cybersecurity competitions, and security exercises to hone those skills.
And of course, there's certifications that help develop and validate needed skills. Popular certs for security engineers include:
ISC2 Certified Secure Software Lifecycle Professional
Cloud Security Alliance (CSA) Certified Cloud Security Professional (CCSP).
Offensive Security Certified Professional (OSCP)
International Council of E-Commerce Consultants, also known as EC-Council, Certified Ethical Hacker (CEH)
Facing a widening threat footpoint, corporations are investing more than ever in cybersecurity. Security architects provide the big-picture framework needed to ward off intruders. Security engineers work at the various entry points making sure that they only admit authorized individuals.
To qualify for these jobs, IT professionals need a broad understanding about the enterprise security landscape as well as deep knowledge about various security solutions. Together, these two roles create an infrastructure that protects confidential corporate and customer information. | <urn:uuid:fe656d65-d46b-44bb-a1d7-cf78db09a1fc> | CC-MAIN-2022-40 | https://www.cbtnuggets.com/blog/certifications/security/security-engineer-vs-architect-whats-the-difference | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335444.58/warc/CC-MAIN-20220930051717-20220930081717-00693.warc.gz | en | 0.931071 | 1,616 | 2.53125 | 3 |
Everyone with half a mind for security will tell you not to click on links in emails, but few people can explain exactly why you shouldn’t do that (they will usually offer a canned ‘hackers can steal your credentials if you do’ explanation) Cross-Site Request Forgery (CSRF) is that reason. Clicking on that link means that an attacker can fake any user-supplied input on a site and make it indistinguishable from a user doing it themselves.
CSRF arises because of a problem with how browsers treat cross origin requests. Take the following example: a user logs into site1.com and the application sets a cookie called ‘auth_cookie’. A user then visits site2.com. If site2.com makes a request to site1.com, the browser sends the auth_cookie along with it.
Normally this doesn’t matter, if it’s a GET request then the page is served, and the same-origin policy stops any funny business. But what if site2.com makes a POST request instead? That request came from the same computer as the valid session and uses the correct authentication cookie. There’s no way to tell the difference, and any state-changing operation can be performed.
During the course of a recent penetration test I noticed that, on the application I was assessing, admins had the ability to add web pages: a pretty reasonable action for the site in question. Unfortunately, the action of adding a page was vulnerable to CSRF. My pen test attack not only created a new page, but also stole administrative credentials from the site, using some unorthodox HTML.
Now, the start of any CSRF attack is always the payload. The first thing to note here is that when an iframe loads, it sends a GET request to whatever is specified in the ‘src’ parameter. Normally this is a standard page, and the content is displayed. But what if you framed a ‘log-off’ page which invalidated your authentication cookie and then redirected you back to ‘index.html’?
The risk of this type of CSRF attack is that instead of trying to bypass this browser policy, an attacker isn’t breaking it at all! They just need to assign a function to the login button on ‘/admin.aspx’ that grabs the value of the username and password fields and then sends them back to the attacker’s server. In our pen test, it was pretty simply to do, as the vulnerable application used jQuery as well. Firstly, changing our ‘onload’ function so that it assigned the ‘grab_creds’ function to it.
And then secondly, declaring the function that we have assigned to the button in the above code.
This function uses the age old ‘getElementById’ to really simply grab the values from the two boxes on the page. Then jQuery’s ‘$.get()’ provides a way of getting it back to the server. Now the attacker has the injection part of the payload, shown below in full. Onload Function Onload Assign Function to Button Function ‘grab_creds’
The ‘history.replaceState’ line at the top just rewrites the URL displayed in the search bar of the browser to what would be shown when legitimately on ‘admin.aspx’, which would make the whole attack even more seamless.
I then went on to test how an attacker would be able to exploit the CSRF. BurpSuite has a function that generates a CSRF payload, which would allow an attacker to quickly whip one up for this vulnerable site.
From a technical standpoint, all a malicious actor would need to do now is replace the ‘Injection Payload’ text in the above code to their actual injection payload and they would be ready to initiate an attack. This was the point at which I stopped as I had proved to the client that an attack was possible, and, in the eyes of a pen tester, this is all that is required.
If this was being exploited in the wild the final step would entail an attacker building a site that they believe an admin of www.victim.it will want to visit and embedding the malicious form submission in a button that they will want to click. Once the attacker has created this application, they host it and leave it to gain natural traffic – so that its’ ‘trust score’ Injection Payload Malicious Form Submission goes up. Once the application is trusted, the attacker could simply find their target on LinkedIn then send them a message that reads something along the lines of:
“Hey *target*, I’m just starting a career in *field in which they work* and would really appreciate it if you could give me a hand. A lot of my work is shown here: *url to our web application with the malicious payload*. Do you have any advice on how I could flesh out my experience given what I’ve already done? Thank you so much, *pseudonym*.”
If the target is logged into the application, and clicks on the button, the attack will succeed, and the vulnerable page will be added. In my pen test, this tactic did work. Shown below is what the result of the attack would look like in the logs of the application.
Protecting against CSRF
The above shows how this attack was demonstrated during the course of my pen test. So, how do we protect against CSRF attacks? There are two ways we do this: one has to be implemented on the application and the other is training users.
The former is really easy to do. All you need is to include a random string in a hidden field, or header with every sensitive request. This value has to be determined by the server and checked to see if it’s valid whenever a request is submitted. We can also change the form of authentication from cookies to bearer tokens. I would advise using these methods, and not trying to perform a check on the referrer header. These headers can be manipulated, and pen testers and hackers alike take a great deal of pleasure in beating an attempt at a ‘smart’ defense.
Trust me: stick to anti-CSRF tokens. However, these protections have to come as part of a defense-in-depth approach. None of the above methods work if you have any Cross-site Scripting (XSS) vulnerabilities present on the application. Using XSS to bypass CSRF protections is a whole different kettle of fish however, but definitely something to bear in mind.
Now, onto training users. This entire attack hinges around a user being tricked into clicking on a malicious link or browsing to a malicious site. Anti-phishing training should be standard for all corporations, however, if members of the public are using your application then there is no way to train them all. It is for this very reason that implementing the defense we discussed previously is so important. Don’t trust a person if you can trust technology first.
I hope the offensive security perspective on performing these attacks has provided insight into why CSRF really is such an important vulnerability to understand. Going from step 1 – noticing that there was no anti-CSRF token, all the way to step 8 – successfully stealing credentials was only possible because there were no defenses. A simple Anti-CSRF header would have foiled the entire process.
Anti-phishing training would have stopped the attack when the malicious links were emailed to target employees. So, the next time someone asks you why they should never click on links in emails, you can tell them why. | <urn:uuid:a4045cc5-ac14-4472-978a-cbbca54c6eb1> | CC-MAIN-2022-40 | https://www.helpnetsecurity.com/2021/03/23/csrf-on-company-websites/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030336674.94/warc/CC-MAIN-20221001132802-20221001162802-00693.warc.gz | en | 0.947815 | 1,991 | 2.59375 | 3 |
Given the size and frequency of data breaches and news headlines in the last several years, identity is becoming a main topic of discussion. We will discuss ‘identity’ in the context of personal identity online or “who I really am” and the attributes and characteristics that are important to us.
“Who I really am” depends on the website that I am visiting whether it is a place of business, bank or government agency. Digital identity is a collection of data points that can include, username, password, purchase history, date of birth, social security number, online activities, electronic transactions and medical history. We have many identities, according to William Mougayar in his November 2016 Blog (opens in new tab). While using an individual’s personal information to access their accounts is the norm in the physical world using conventional identity systems, this is not true for the digital world.
We as consumers are programmed to believe that 5 layers of questions by the customer service representative on the other end of the phone, are designed to protect your personal information both in the physical and digital world. But recent hack cases, with Cyber-attacks on The Healthcare sector (NHS across England, etc.), Prominent department chains (Home Depot, Target, etc.), Governmental institutions (The IRS, The Federal Reserve, DHS, SEC, etc.) or even some of our favorite online sites (eBay, Yahoo, Google, etc.), your information was compromised no matter what level of authentication the systems had in place. Such data breaches will continue to happen as long as these antiquated verification processes are being used. We are made to feel comfortable that our information is secured behind a myriad of security features. These trusted organisations (banks, government agencies, etc.) have failed as custodians of our Identity!
Before we propose our solution, let us begin by asking ourselves, do we even OWN our personal Identities? We think we own our Social Security Number, Driver’s License, email addresses or even our phone numbers. This may not be news to us all, but we DO NOT actually own our identities. In many cases, we do not even have control over any information linked to our identity. Our personal data and information are stored in places that are owned and operated by other third parties. Does it work? Perhaps not! Despite all these processes in place, and including the fact that banks have added an additional layer of authentication in the form of PINs and Safecodes, you are still vulnerable to your information being hacked.
How can we even propose a solution when we don’t even own our identity?
Building an identity
Our proposal is a new way of thinking about digital identity. To be truly back in control, we must first own our identity and only then can we decide on the type of information to share and which technology to use. Our application is the creation of the digital id and uses blockchain as the underlying technology that enables all future applications and services. Mougayar succinctly explains that “blockchain-based identity holds a promise, which is to allow us to consume a number of services in a trusted manner, without the need to assert our physical presence, …”
Our solution is simple to use – you don’t need to reuse passwords, you don’t have to repeatedly give out your social security number and all your data is stored where no one can read it– including us. Users can upload any information to the blockchain knowing that the information is cryptographically secure, unalterable, and irrefutable. Think of our blockchain as your personal ‘witness’ in the digital world.
Secure Identity Ledger Corporation (SILC SM) introduces our blockchain or distributed ledger technology, offering a transformative consumer-based application for you, the user, to get a unique digital identity to transition into the Blockchain Age SM This ‘One Digital ID SM’ belongs to you and to no one else. You control how your personal information is used and distributed and as a result, you can start to build trust within the global digital community, as now, all transactions are recorded on the blockchain.
SILC operates an independent blockchain platform that allows you to build an identity that is not entirely subject to the challenges we discussed above. For starters, with SILC, you will OWN your Digital Identity! Simply stated, our solution is designed to allow you to gain absolute ownership and control over your personal data, create and maintain anonymity and have the AUTHORITY to then choose how, when, where and with whom you share that information. By creating a SILC ‘One digital ID’, you will be the sole CUSTODIAN of your personal data and can safely interact and conduct data transactions with other users.
By adopting a SILC ‘One Digital ID’, you effectively can select what information you want to release to a third party based on the context of that exchange. For example, when applying for a new job, a prospective applicant can select to release only those credentials (his education level, previous work history, work study projects, etc.) that are relevant to his application process. The employer does not need to know the details of the applicant’s political affiliation or other non-requested personal details that may negatively influence the employer’s hiring decision based on biases. Once again, the applicant can control what elements of his ‘Virtual Imprint’ he releases. With the SILC ‘One digital ID’, you can store all various elements of your identity in blocks and can pick and choose what information gets released. By regaining control over your personal information, the power dynamic radically shifts back into your hands!
Danny H. Lee, Secure Identity Ledger Corporation (opens in new tab)
Image Credit: Zapp2Photo / Shutterstock | <urn:uuid:1fb5d094-133c-4330-851e-c4d88b15518f> | CC-MAIN-2022-40 | https://www.itproportal.com/features/how-blockchain-and-digital-id-puts-user-back-in-control/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030336674.94/warc/CC-MAIN-20221001132802-20221001162802-00693.warc.gz | en | 0.933753 | 1,194 | 2.90625 | 3 |
Central Candy Factory manufactures Chewy Bars, which are individually wrapped candy bars made from bulk chocolate, sweetened shredded coconut and bulk marshmallow. This Input, which also includes wrappers and boxes for the candy, arrives via the Bulk Food Trucking distribution network. Central Candy Factory contains a melter that melts the chocolate, a heating mixer that combines the coconut and marshmallows into filling, an extruder that makes cores from filling, and an enrober that makes warm Chewy Bars by covering the cores with melted chocolate. After spending some time on the cooling rack, the Chewy Bars are placed in wrappers by the wrapping machine, and the Wrapped Chewy Bars are packed in boxes by the packing machine. This last step yields boxes of Chewy Bars.
The manufacturing Input is represented as a composite material consisting of five specific materials: Chocolate, Coconut, Marshmallow, Wrappers, and Boxes. The Input flows from the Bulk Food Trucking distribution network to the Central Candy Factory facility. The manufacturing and packaging system is depicted as a series of equipment instances that access materials. The read access relationship is used to represent the use of inputs by equipment, while the write access relationship is used to represent the production of outputs. The Heating Mixer uses Chocolate and Marshmallow to make Filling, while the Extruder uses Filling to make Cores. The Melter uses Chocolate to make Melted Chocolate, which the Enrober uses to turn Cores into Warm Chewy Bars. The Cooling Rack cools down the Warm Chewy Bars to make finished Chewy Bars. The Wrapping Machine uses Wrappers to make Wrapped Chewy Bars, which the Packing Machine packs into Boxes, yielding Boxes of Chewy Bars. | <urn:uuid:17a0cff4-92e6-4367-8659-545290af4065> | CC-MAIN-2022-40 | https://www.eaprincipals.com/content/exploring-archimate-physical-layer-manufacturing-scenario | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337360.41/warc/CC-MAIN-20221002212623-20221003002623-00693.warc.gz | en | 0.917234 | 371 | 3.03125 | 3 |
One of the biggest fears when it comes to our precious finances is that they might be taken from us when we least expect it. This fear has rapidly moved from having physical cash stolen to having our entire digital financial footprint snatched from us. But, while this is a very real concern, there are ways to mitigate possible cyber crime to protect our financial information when conducting business online.
The bottom line is that we can’t fully rely on others to protect our financial information online. Luckily, there are ways we can take matters into our own hands to ensure we have the adequate levels of protection necessary. One way is through a VPN. A VPN (virtual private network) helps you stay safe when transmitting information across public, unsecured or unencrypted networks.
VPNs allow you to share files safely with a group and ensure everyone within is protected. They also allow items to be sent anonymously online. Plus, VPNs can bypass security filters on websites as they have adequate protection themselves. VPNs can also be a cheaper alternative to some costly ways of ensuring your network is more protected and give better peace of mind.
Many more people have embarked on cryptocurrency journeys and are swapping traditional methods of storing money to digital ones for the level of safety and security that they provide. Indeed, digital wallets that store money in the form of tokens of cryptocurrency are harder to steal from and aim to prevent cybercrime and hackers.
This can be beneficial for anyone embarking on any form of online trading, which is often a target for cybercrime. For instance, CFD trading can be conducted on cryptocurrency, which allows traders to own underlying values rather than the asset itself, which is a more protected way of trading, done on platforms connected to digital wallets. The safety of the cryptocurrency wallet means that even if a cybercriminal bypasses the other methods for keeping financial information safe, there is a back-up that ensures they can’t access your hard-earned funds.
One of the simplest ways of protecting your financial information when doing business online – yet also one of the most overlooked – is proper password encryption. Using one password across the board has been shown to increase the likelihood of cybercrime, while a strong password should also feature encryption as well as a strong combination of characters.
If you are conducting financial business online, make sure that the site you use – whether it be e-commerce or banking – has a strong level of password encryption. Most sites already employ strong password authentication and an encrypted SSL certificate to protect your information, so make sure that wherever you input financial information, this basic level of encryption is involved.
Our financial information – especially in conjunction with our other personal information – is our most valuable asset, so we need to ensure that it is as protected as it possibly can be throughout every step of our online experience. The three key ways in which we can protect our financial information are to ensure the sites we use the information on and for are password encrypted, to transfer our money into cryptocurrency that is safer and more secure, and to use a VPN when committing any financial information to our screens. Using these methods, we should have a better-protected online experience with our financial information. | <urn:uuid:555edb31-1416-42be-9b41-6ddaab27a39e> | CC-MAIN-2022-40 | https://gbhackers.com/financial-information-online/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337480.10/warc/CC-MAIN-20221004054641-20221004084641-00693.warc.gz | en | 0.940009 | 655 | 2.625 | 3 |
As technology gets more sophisticated, so do cybercriminals, and the arms race between IT security and hackers continues to escalate. 2021 was a record-setting year for cyberattacks, with data breaches increasing by 68% over 2020. With a third of 2022 in the books, the rate of attacks doesn’t seem to be showing any signs of slowing down. While cybercriminals keep their foot on the gas pedal, the best defense against a costly intrusion is still a well-informed, well-prepared human being.
That’s why it’s vital to understand the risks and stay informed of the dangers of being defenseless against cyberattacks. Here are the worst cyberattacks of 2022 (so far) and the businesses that are learning that lesson the hard way:
- A cyberattack targeting the Ukrainian government, suspected to be Russian in origin, took down more than 70 government websites. The attack claimed to leak personal information and replaced text on the websites with the threat “be afraid and wait for the worst.”
- Hackers breached the International Committee of the Red Cross. The attackers stole data on 500,000 people and disrupted the organization’s services worldwide.
- An attack on a German firm caused Shell to reroute its oil supplies after its IT and supply chain systems were affected.
- Hackers infiltrated the networks of the UK Foreign Office, causing a “serious cyber incident,” the details of which remain confidential.
- Two groups backed by the North Korean government waged cyberattacks against numerous members of the media, financial, and software sectors. The group used phishing emails, fake job posts, and a security vulnerability in Google Chrome to spread its malware.
- Russia-affiliated hackers accessed multiple US defense contractors between January 2020 and February 2022. The group stole emails and sensitive data related to products and interactions with foreign governments.
- The FBI claims the North Korean government was responsible for stealing $600 million in an attack on a cryptocurrency exchange.
- A report in February discovered that the Chinese government had gained access to the networks of at least 6 U.S. state governments. The attack was made possible through a vulnerability known as “Log4j” as well as vulnerable web applications.
- A DDoS attack targeting a telecom company shut down the internet on the Marshall Islands for over a week.
Ignoring the threat is not an option
While the hacking of governments and multinational corporations make the headlines, small and medium-sized businesses are just as at risk for a cyberattack. In fact, 43% of attacks target small businesses and 60% of those that fall victim go out of business within 6 months.
It’s more important than ever to protect your small business against the ever-looming threat of cybercrime and data breaches. Schedule a free consultation with amshot today to learn how employee training and IT security can help save your business from facing a devastating attack. The best way to beat cybercrime is to make sure you’re never a victim of it in the first place. | <urn:uuid:bbe37971-5b5c-4ce0-a9ef-d5a681de6b00> | CC-MAIN-2022-40 | https://amshot.com/2022/06/21/the-worst-cyberattacks-of-2022-so-far/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337631.84/warc/CC-MAIN-20221005140739-20221005170739-00693.warc.gz | en | 0.956027 | 633 | 2.625 | 3 |
Containers and microservices are two leading edge technologies that enable much greater efficiency in cloud computing. Although they don’t need to be used in in combination, when they are deployed together they provide maximum benefit.
Using containers allows developers to work faster and better by creating virtual “sandboxes” where it’s possible to write, manage and manipulate software code. The advantage is that this can be done without affecting other applications and systems running on a server or virtual machine (VM). Containers produce gains in efficiency, costs and even security. These easily packaged and lightweight components can run with others in the same VM.
The enormous flexibility that containers introduce has fueled rapid adoption, producing a growing dependence on container technology. Containers have emerged as an alternative to virtual machines. Forrester has noted that 58 percent of developers currently use containers or plan to use containers in the next 12 months. Meanwhile, 451 Research reports that the application container market will grow from $762 million in 2016 to $2.7 billion by 2020.
A major part of the appeal—and power—of containers lies in microservices. These software components—which may include code, runtime, system tools, system libraries, and settings as unique and discreet processes—introduce a more modular and advanced development framework.
Microservices, which are typically available via a toolbar or menu, allow organizations to deploy software, and make changes, faster and continuously. This capability is particularly important for organizations using DevOps and other agile approaches.
Microservices have their roots in Web services. By combining code components into functional applications, organizations can use these pre-designed applets to turbocharge software development. This allows businesses to introduce products faster and make changes far more dynamically.
Remember, while microservices don’t require containers, they benefit greatly from them. Containers with microservices allow organizations to create a more consistent and lightweight cloud development framework.
Here are five key truths to know about using containers and microservices together:
Complexity can become a problem
The accumulation of solutions—in this case microsystems across numerous containers—can introduce new and sometimes vexing problems. Although these two tools, particularly when combined, streamline and simplify development, they also introduce new and sometimes nagging challenges.
The sheer number of microservices and all their intersection points with containers translates into a constantly changing environment. This can force an organization to deal with too much granularity and, if used too heavily or mapped out incorrectly, can introduce latency. It can also ratchet up testing requirements.
The fact that some components are open source and others are provided by commercial companies can complicate matters further. Ultimately, a gap in the overall framework can impact scalability, reliability and numerous other factors.
A key to success is forging a strategy and strong framework to support microservices and containers. This requires experienced developers—and new training for key members of a team—so that they can lead an initiative and use tools and systems to maximum advantage.
An organization will require new processes
Continuous delivery (CD) and continuous integration (CI) frameworks are on most company’s radar. These methodologies can unleash remarkable business innovation. They are especially suited to today’s agile and DevOps development frameworks—which promote fast, incremental and continuous rollouts of applications and updates.
Containers and microservices support this approach in a big way. Yet, there’s a catch: without the right processes and workflows, organizations struggle to extract maximum value from CD and CI—and ultimately from containers and microservices.
Unleashing these tools without establishing a foundation and framework may add to complexity and undermine progress. As a result, it’s important for teams from both development and operations to focus on two crucial issues:
- Building a knowledge base: Groups from the business and development sides must thoroughly understand CI/CD concepts and approaches before embracing microservices and containers.
- A foundation for collaboration must be in place: An organization must develop a framework for these groups to work together to incorporate containers and microservices in the most efficient and productive way possible.
Monitoring is crucial
Because development environments that rely on containers and microservices can become fairly complex—and involve an enormous number of tools and components—monitoring is at the center of a successful initiative.
Moreover, code monitoring must take place inside containers. it’s important to focus on a few key issues:
- Understand the scope and nature of monitoring required. Inadequate monitoring can lead to development teams that wind up frustrated and overwhelmed. One problem is that microservices can vary greatly across containers and components. This means that it’s necessary to deploy monitoring that spans the entire collection of containers and microservices.
- Know that conventional monitoring is limited. Conventional methods of monitoring – namely an approach revolving around instrumentation – don’t necessarily work. Containers benefit from being small, isolated processes with as few dependencies as possible.
- Monitoring tools must address the unique challenges of container and microservices. Identifying where bugs, errors and production problems occur and taking steps to remediate these issues involves a more involved and nuanced approach. Monitoring containers and microservices may include a variety of tools, including app performance monitoring, code profiling, direct error tracking, centralized logging, and metrics revolving around apps and components.
- Fix problems quickly and seamlessly. When development teams can pinpoint where a problem exists, it’s possible to roll back or patch the problem quickly. This may involve deleting or changing a microservice that might otherwise be difficult to spot and populating the change across containers.
Orchestration is vital to success
A collection of containers and microservices doesn’t automatically address an organization’s DevOps or agility challenges. There’s a need for these systems and components to be effectively coordinated.
Orchestration, which essentially clusters containers in an intelligent fashion, is a critical piece of the puzzle. It makes scalability manageable. The container orchestration platform, Kubernetes, which is open source, works with most tools and platforms. It addresses the need for automation and scaling.
A number of potential solutions incorporate Kubernetes, including open source Docker, which is more effective for managing single image instances. Solutions may also incorporate commercial services from cloud companies like AWS, Google Cloud and Microsoft Azure, which are equipped to address more complex distributed applications.
These services can accomplish several key tasks. Most important, they can:
- Tie together vast collections of microservices.
- Automate an array of tasks and processes.
- Manage scaling of services.
- Introduce a higher level of flexibility by making it possible to deploy containers and microservices in a broader array of environments, including hybrid cloud deployments.
Security cannot be an afterthought
Containers and microservices introduce some important security advantages – particularly surrounding isolation of code and applications through sandboxing – but they also create new vulnerabilities.
One of the biggest risks revolves around specific libraries and microservices that are prone to specific threats. As a result, organizations using containers and microservices should:
- Adopt specialized tools for managing container security, including solutions that handle image scanning, image management and maintaining a trusted registry. Organizations also benefit from application security software to address dynamic and static scanning of code, and software that handles policy-based enforcement tasks.
- Use an operating system or software that secures containers at the boundaries. This approach is important because it prevents the host kernel from escaping the container as well as securing containers from each other.
- Focus on container orchestration as a key element of security. This includes which containers are deployed to which hosts, the capacity of hosts, how containers are discoverable and connected, how container health is managed, and the degree of developer self-service that is incorporated into the environment.
- Understand how the network is configured for container and microservice security, including whether it’s possible to segment traffic to isolate different users, teams, applications, and environments within a single cluster. This may require more advanced SDN tools that can address the complexity of identifying IP addresses and clusters. Likewise, an organization must address storage issues, including how and where containers reside in a resting state.
Organizations that address these issues and take a systematic approach to containers and microservices are far better positioned to match their development efforts with the opportunities and challenges of today’s digital business framework. | <urn:uuid:cd1bed6e-c0b9-4210-af30-2b6916758f03> | CC-MAIN-2022-40 | https://www.datamation.com/cloud/containers-and-microservices-five-key-truths/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334514.38/warc/CC-MAIN-20220925035541-20220925065541-00093.warc.gz | en | 0.924255 | 1,734 | 2.71875 | 3 |
(1) The protection of natural persons in relation to the processing of personal data is a fundamental right. Article 8(1) of the Charter of Fundamental Rights of the European Union (the ‘Charter’) and Article 16(1) of the Treaty on the Functioning of the European Union (TFEU) provide that everyone has the right to the protection of personal data concerning him or her.
(2) The principles of, and rules on the protection of natural persons with regard to the processing of their personal data should, whatever their nationality or residence, respect their fundamental rights and freedoms, in particular their right to the protection of personal data. This Regulation is intended to contribute to the accomplishment of an area of freedom, security and justice and of an economic union, to economic and social progress, to the strengthening and the convergence of the economies within the internal market, and to the well-being of natural persons.
(3) Directive 95/46/EC of the European Parliament and of the Council seeks to harmonise the protection of fundamental rights and freedoms of natural persons in respect of processing activities and to ensure the free flow of personal data between Member States.
Directive 95/46/EC of the European Parliament and of the Council of 24 October 1995 on the protection of individuals with regard to the processing of personal data and on the free movement of such data (OJ L 281, 23.11.1995, p. 31). https://eur-lex.europa.eu/legal-content/EN/AUTO/?uri=OJ:L:1995:281:TOC
(4) The processing of personal data should be designed to serve mankind. The right to the protection of personal data is not an absolute right; it must be considered in relation to its function in society and be balanced against other fundamental rights, in accordance with the principle of proportionality. This Regulation respects all fundamental rights and observes the freedoms and principles recognised in the Charter as enshrined in the Treaties, in particular the respect for private and family life, home and communications, the protection of personal data, freedom of thought, conscience and religion, freedom of expression and information, freedom to conduct a business, the right to an effective remedy and to a fair trial, and cultural, religious and linguistic diversity.
(5) The economic and social integration resulting from the functioning of the internal market has led to a substantial increase in cross-border flows of personal data. The exchange of personal data between public and private actors, including natural persons, associations and undertakings across the Union has increased. National authorities in the Member States are being called upon by Union law to cooperate and exchange personal data so as to be able to perform their duties or carry out tasks on behalf of an authority in another Member State.
(6) Rapid technological developments and globalisation have brought new challenges for the protection of personal data. The scale of the collection and sharing of personal data has increased significantly. Technology allows both private companies and public authorities to make use of personal data on an unprecedented scale in order to pursue their activities. Natural persons increasingly make personal information available publicly and globally. Technology has transformed both the economy and social life, and should further facilitate the free flow of personal data within the Union and the transfer to third countries and international organisations, while ensuring a high level of the protection of personal data.
(7) Those developments require a strong and more coherent data protection framework in the Union, backed by strong enforcement, given the importance of creating the trust that will allow the digital economy to develop across the internal market. Natural persons should have control of their own personal data. Legal and practical certainty for natural persons, economic operators and public authorities should be enhanced.
(8) Where this Regulation provides for specifications or restrictions of its rules by Member State law, Member States may, as far as necessary for coherence and for making the national provisions comprehensible to the persons to whom they apply, incorporate elements of this Regulation into their national law.
(9) The objectives and principles of Directive 95/46/EC remain sound, but it has not prevented fragmentation in the implementation of data protection across the Union, legal uncertainty or a widespread public perception that there are significant risks to the protection of natural persons, in particular with regard to online activity. Differences in the level of protection of the rights and freedoms of natural persons, in particular the right to the protection of personal data, with regard to the processing of personal data in the Member States may prevent the free flow of personal data throughout the Union. Those differences may therefore constitute an obstacle to the pursuit of economic activities at the level of the Union, distort competition and impede authorities in the discharge of their responsibilities under Union law. Such a difference in levels of protection is due to the existence of differences in the implementation and application of Directive 95/46/EC.
(10) In order to ensure a consistent and high level of protection of natural persons and to remove the obstacles to flows of personal data within the Union, the level of protection of the rights and freedoms of natural persons with regard to the processing of such data should be equivalent in all Member States. Consistent and homogenous application of the rules for the protection of the fundamental rights and freedoms of natural persons with regard to the processing of personal data should be ensured throughout the Union. Regarding the processing of personal data for compliance with a legal obligation, for the performance of a task carried out in the public interest or in the exercise of official authority vested in the controller, Member States should be allowed to maintain or introduce national provisions to further specify the application of the rules of this Regulation. In conjunction with the general and horizontal law on data protection implementing Directive 95/46/EC, Member States have several sector-specific laws in areas that need more specific provisions. This Regulation also provides a margin of manoeuvre for Member States to specify its rules, including for the processing of special categories of personal data (‘sensitive data’). To that extent, this Regulation does not exclude Member State law that sets out the circumstances for specific processing situations, including determining more precisely the conditions under which the processing of personal data is lawful.
(11) Effective protection of personal data throughout the Union requires the strengthening and setting out in detail of the rights of data subjects and the obligations of those who process and determine the processing of personal data, as well as equivalent powers for monitoring and ensuring compliance with the rules for the protection of personal data and equivalent sanctions for infringements in the Member States.
(12) Article 16(2) TFEU mandates the European Parliament and the Council to lay down the rules relating to the protection of natural persons with regard to the processing of personal data and the rules relating to the free movement of personal data.
(13) In order to ensure a consistent level of protection for natural persons throughout the Union and to prevent divergences hampering the free movement of personal data within the internal market, a Regulation is necessary to provide legal certainty and transparency for economic operators, including micro, small and medium-sized enterprises, and to provide natural persons in all Member States with the same level of legally enforceable rights and obligations and responsibilities for controllers and processors, to ensure consistent monitoring of the processing of personal data, and equivalent sanctions in all Member States as well as effective cooperation between the supervisory authorities of different Member States. The proper functioning of the internal market requires that the free movement of personal data within the Union is not restricted or prohibited for reasons connected with the protection of natural persons with regard to the processing of personal data. To take account of the specific situation of micro, small and medium-sized enterprises, this Regulation includes a derogation for organisations with fewer than 250 employees with regard to record-keeping. In addition, the Union institutions and bodies, and Member States and their supervisory authorities, are encouraged to take account of the specific needs of micro, small and medium-sized enterprises in the application of this Regulation. The notion of micro, small and medium-sized enterprises should draw from Article 2 of the Annex to Commission Recommendation 2003/361/EC .
Commission Recommendation of 6 May 2003 concerning the definition of micro, small and medium-sized enterprises (C(2003) 1422) (OJ L 124, 20.5.2003, p. 36). https://eur-lex.europa.eu/legal-content/EN/AUTO/?uri=OJ:L:2003:124:TOC | <urn:uuid:9889623b-49e1-4884-be61-c1c368485f9f> | CC-MAIN-2022-40 | https://gdpr-text.com/read/article-1/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030336921.76/warc/CC-MAIN-20221001195125-20221001225125-00093.warc.gz | en | 0.918652 | 1,732 | 2.625 | 3 |
The Staggering Scope of Big Data
The amount of data we're all generating at any given time is astronomical. Jump on the big-data train or risk getting left behind.
So you've been hearing about storage lately, maybe a lot. Well, that shouldn't come as too much of a surprise. Storage is a hot category for a lot of reasons, but the main driver of demand for storage is something that has now become a familiar buzz phrase: big data.
Simply put (and it's scary to think that anybody in IT wouldn't know this) massive data growth is driving a panic scramble for storage solutions. But how massive is this massive growth in data volume? It's incomprehensible actually, or nearly so.
Let's take a look at some of the numbers that make up big data. A summary from Villanova University offers a few numbers:
- Users create 2.5 quintillion bytes of data every day. Essentially, this means that 90 percent of the data in the world today has been created in the last two years alone.
- Retailer Walmart alone controls more than 1 million customer transactions every hour -- and then transfers them all to a database that stores more than 2.5 petabytes of information
- There are 45 billion photographs (and counting) in Facebook's database. That's more than six photos for every human being on earth.
But wait...there's more. According to Domo, a business-intelligence firm, every 60 seconds, technology users:
- Send 204 million e-mails;
- Upload 3000 videos to YouTube;
- Tweet 100,000 times;
- Download 47,000 applications from an app store.
And those numbers are already old. They were old the first time somebody typed them. So, are we humans just that much smarter than we used to be? Do we just know that much more? Maybe, but one of the drivers of big data is the storage of increasingly massive file types -- think MRIs and sonograms.
Then there are government regulations, social media, the proliferation of online video, streaming-video services, blogs about storage... The stuff we're storing is just bigger, broader and greater in volume than it has ever been before. That's all there is to it.
So, what does this all mean? Well, for one thing, now would be a good time to acquire skills in working with big data. There will soon -- within the next few years -- be significant shortages of IT people who know how to handle big data. And then there's the investment that big data will require. That brings us full circle to our discussion of storage. All of this stuff has to go somewhere and be accessible and recoverable -- and soon.
It's not that IT professionals don't know all this. It's just that the numbers provide a stark reminder of the onslaught of big data that's happening right now and a harbinger of what's to come. Get ready. | <urn:uuid:96c4d21d-64c4-4c03-b0a5-a4d48e598270> | CC-MAIN-2022-40 | https://esj.com/articles/2014/04/29/scope-of-big-data.aspx | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337398.52/warc/CC-MAIN-20221003035124-20221003065124-00093.warc.gz | en | 0.95393 | 605 | 2.921875 | 3 |
Health care industry is transforming thanks to entrepreneurs and engineers who are working on enhancing both doctors’ and patients’ experience.
The Health Insurance Portability and Accountability Act (HIPAA) was enacted to keep up with the rapidly developing health care system and to protect patient’s medical records provided to doctors, hospitals, and other health care entities.
Companies and startups in the medical field have to keep a close eye on regulations like HIPAA. Continue reading to find out more about the Act and to understand how to make your application HIPAA-compliant.
What Information is Protected by HIPAA?
Individually Identifiable Information
• Email and Phone number
• Social Security number
• Driver’s license
• Clinical notes
• Blood tests
• MRI scans
Who Does It Apply To?
1. Health Plans – health insurance companies, company health plans, and government programs that pay for health care.
2. Health Care Providers – doctors, clinics, hospitals, psychologists, chiropractors, nursing homes, pharmacies, and dentists managing business electronically.
3. Health Care Clearinghouses—entities that process atypical health information into a standardized version.
“Business associates” are the contractors or subcontractors of covered entities that need access to PHI to provide services to a covered entity. These include:
1. Billing companies and companies processing health care claims
2. Companies helping to administer health plans
3. Lawyers, accountants, and IT specialists related to a covered entity
4. Companies that store or destroy medical records
If you are collecting or storing PHI, then you are obliged to sign a Business Associate Agreement with a covered entity. It outlines procedures for how you will protect the health records and how you should respond in an event of a breach.
HIPAA Privacy & Security Rule
The privacy and security rules were implemented to make sure a patient’s PHI is secure and not wrongfully disclosed.
The Privacy Rule dictates how, when, and under what circumstances PHI can be disclosed. This law forces covered entities and business associates to implement the Rule to control the flow of information, monitor internal networks, and take measures to prevent unauthorized disclosure of PHI.
The Security Rule was implemented to further protect the confidentiality, integrity, and availability of electronically protected health information (EPHI). The Rule requires administrative, physical, and technical safeguards to secure EPHI that is created, received, used, or maintained by covered entities and business associates.
Cover over half of HIPAA security requirements. They define regulatory policies and procedures that must be in place to prevent, detect, contain and correct EPHI security violations.
1. Security management process
2. Assigned security responsibility
3. Workforce security
4. Information access management
5. Security Awareness and training
6. Security incident procedures
7. Contingency plan
9. Business associate contracts
Focus on protecting electronic systems from potential threats, unauthorized intrusion, and environmental hazards. It is essential for a covered entity to establish secure ways of using workstations and electronic media to ensure the protection of EPHI.
1. Facility access controls
2. Workstation use
3. Workstation security
4. Device and media controls
Cover user access to systems storing EPHI. Covered entities and business associates need to figure out risks to EPHI relative to their size and the cost to cover them.
1. Access control
2. Audit controls
4. Personal or entity authentication
5. Transmission security
The Enforcement Rule
In case of a breach, the Enforcement rule defines and governs the responsibilities and requirements of covered entities and business associates and how it expects them to cooperate during the enforcement process.
The Cost of HIPAA Non-Compliance
Covered entities and business associates can both be fined and receive penalties if they do not comply with the Act. Violation of HIPAA can cost you from $100 to $50,000 per record, with a maximum penalty of $1.5 million per year for each violation.
ITIRRA’s experts are happy to help your software development team with any questions related to building a HIPAA compliant application. To find out more about how to make HIPAA compliant application or to discuss how a straightforward solution can grow your business and improve the safety of your data, contact us today or arrange a meeting with me. | <urn:uuid:1a849030-5bc4-41fc-8d94-4bd3ad031f9b> | CC-MAIN-2022-40 | https://itirra.com/blog/why-should-you-care-about-hipaa/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337398.52/warc/CC-MAIN-20221003035124-20221003065124-00093.warc.gz | en | 0.915875 | 929 | 2.640625 | 3 |
Network Traffic Analysis (NTA) is a category of cybersecurity that involves observing network traffic communications, using analytics to discover patterns and monitor for potential threats. NTA solutions can be powerful tools for any organization, alerting security teams to an infection early enough to avoid costly damage. However, in today’s threat landscape, there are many different types of cybersecurity solutions, so let’s examine what sets NTA apart, and why you should incorporate it.
How Do NTA Solutions Work?
The practice of traffic analysis is actually much older than the Internet. For example, the military began intercepting radio traffic beginning in World War I, and the interception and decoding work done by analysts at Bletchley Park quickly became a critical part of battle strategy during World War II.
Though we’ve advanced considerably from radio technology, the principle of traffic analysis remains the same. Communication traffic patterns are scrutinized for information that will help keep assets secure. By monitoring network traffic, abnormal activity from threat actors can be detected early on, thwarting attackers before they achieve their goal of destruction or theft.
Gartner published its first Market Guide for Network Traffic Analysis in 2019. Since it is a newer category, there is a significant amount of variation between solutions. However, there are a few key similarities:
- Traffic Observation. Instead of monitoring specific assets or the network itself, these security solutions constantly watch network traffic, creating a picture of what normal traffic patterns look like.
- Anomaly Detection. With a baseline developed, NTA tools can then flag traffic abnormalities as possible security threats.
- Threat Investigation. Though there are multiple approaches to this, NTA tools should have some degree of analysis of anomalies to determine whether it’s a harmless abnormality, or a true threat.
Why Do I Need an NTA Solution?
Since organizations have more assets and house more sensitive data than individuals, they will always be at risk of attack. Organizations benefit from the increased protection an NTA solution provides for a number of reasons, including:
They shorten the dwell time of infections. Discovering threats as soon as possible is the best way to minimize damage. The longer an infection lives in a network, the more damage it can do. Swiftly detecting a threat can ensure that there is minimal harm.
They improve efficiency. Most organizations do not have the resources to have personnel devoted to actively monitoring for and investigating risk around the clock. These solutions automate threat detection, allows organizations to do more with less and ensures that security analysts are able to focus more on threat removal.
They provide wide coverage. By monitoring traffic, NTA solutions can monitor different types of devices. For example, many NTA solutions are OS agnostic, monitoring traffic from both Linux servers and Windows workstations.
The Importance of Layered Security
Both IT environments and their attackers have grown far too sophisticated for a single solution to protect them. Focusing solely on prevention is no longer enough. Security strategies must be as multi-faceted as the infrastructures they protect. A zero-day approach to security is becoming increasingly common—meaning organizations operate with the mindset that they will at some point be breached and should layer security accordingly.
NTA solutions are ideal for this approach. While it’s critical to have a prevention layer with antivirus tools which focus on blocking as much malware as possible from entering the system, it’s equally important to have a defensive layer using tools like NTA solutions which detect infections that use techniques like phishing emails to sneak in.
Network Traffic Analysis With Network Insight
Since there is substantial variation between NTA solutions, it’s important to find the solution that best suits your environment. So what makes Core Security’s solution different? In addition to having all the standard NTA features, Network Insight stands apart for several reasons:
Protection for every endpoint. Oftentimes, many high-end IoT and other devices go unwatched, causing hidden gaps between technology. Network Insight is agentless as well as OS and platform agnostic, so no device is left behind and you have ongoing visibility across your entire environment.
Multiple threat detection engines. While it’s standard practice to set a baseline, Network Insight goes further, leveraging multiple detection engines focused on analyzing behavior, content, payload, threat intelligence, and more. This eliminates meaningless alerts, and ensures you have definitive proof of infection.
Comprehensive threat database. Network Insight doesn’t just learn from your environment’s behavior. Core Security’s threat intelligence database includes more than 15 years of evidence collected from observing billions of DNS requests a day, thousands of malware samples, and nearly 100 billion domains, providing unmatched threat intelligence.
While any NTA solution will detect threats, Network Insight will pinpoint infections that other solutions miss.
Find out just how effective NTA solutions are at detecting threats.
Read a real world example about a large telecommunications company that immediately found infections that other solutions missed. | <urn:uuid:3183475b-693b-49c0-88a8-63a26d88dfad> | CC-MAIN-2022-40 | https://www.coresecurity.com/blog/what-network-traffic-analysis | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337504.21/warc/CC-MAIN-20221004121345-20221004151345-00093.warc.gz | en | 0.926671 | 1,023 | 2.8125 | 3 |
Keeping in constant contact with the rest of the world is important, whether you reside in a rural community, live in a busy apartment, or are often on the road. Communication is just as vital to our first responders when they’re on the road and when they’re at work. When people call for help, emergency services need to respond, which is why cellular signal boosters are needed by first responders.
Factors That Hider Signal
Many things could inhibit cellular signals, all of which make it harder for first responders to reach individuals. The weaker a cellular signal becomes, the more difficult it is to get a good signal and make calls. Three common factors that affect this are:
Distance. The further you are from the nearest tower, the weaker signal becomes.
Terrain. Some terrain, such as mountains, block the path between you and the cellular tower.
Weather. Poor or stormy weather may also create challenges when making calls.
Remember that other factors, such as building materials, could also create challenges. Understanding these issues is an important step to improving the cellular signal. Each of the things mentions above are common reasons for a weak signal.
The Importance of These Factors
Distance, terrain, and weather create problems for everyone, but they can make things more difficult in an emergency. Some may call for help during poor weather or when they’re lost on a mountainous road. Likewise, some people live off the map, and everyone needs the ability to reach emergency services. This is why cellular signal boosters are needed by first responders — because they boost communication between the tower and cell phones.
How First Responders Use Signal Boosters
First responders rely on communication and their ability to reach civilians in all situations. They could use cellular signal boosters in their vehicles and buildings to ensure constant contact with their home base and those they’re helping.
Emergency workers could also use commercial cell phone boosters in their offices, ensuring they remain connected despite the distance, weather patterns, and other areas of concern. You can find the ideal commercial cellular booster at SureCall Boosters, which will help you put a stop to dropped calls once and for all and ensure easy contact with the public.
A strong cellular signal helps first responders react faster, gather the information they need to do their job, and keep the public safe. And cellular signal boosters help with this by capturing, enhancing, then rebroadcasting the current signal.
By equipping their cars and offices with cellular boosters, first responders can remain a reliable source of support to the public. Regardless of the situation, a cellular booster helps them keep aware and in contact with the public. Shop for cellular boosters to secure the best reception at SureCall Boosters! The better your connection, the more people you can assist. | <urn:uuid:6c753222-0039-47ae-ad94-ccd745731e85> | CC-MAIN-2022-40 | https://www.surecallboosters.ca/post/why-cellular-signal-boosters-are-needed-by-first-responders-1 | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337668.62/warc/CC-MAIN-20221005203530-20221005233530-00093.warc.gz | en | 0.953534 | 567 | 3.296875 | 3 |
Many users of Android devices sooner or later are tempted to root them. Here we discuss the advantages and disadvantages of having root permissions on Android devices — and if your device should be rooted at all.
Why people root their Android devices
Obtaining superuser access rights, popularly known as rooting, lets owners take full control of their devices. It is possible to do virtually anything with superuser access rights, and quite a few apps (including some in the Google Play store) require root permissions to function properly.
Superuser access privileges are typically sought to expand regular Android capabilities. For example, you can limit network activity for some or all apps, delete annoying preinstalled apps, speed up the CPU, and more.
We used Kaspersky Security Network to compile a list of the most popular reasons that users root their Android devices.
- Install apps that hack games. These apps gain access to the memory where games are stored and modify parameters to allow free gameplay.
- Access the file system. Unrestricted access to the file system may be useful for recovering erased files, moving apps to an SD card, or using root explorers, which are applications with advanced file-system functions.
- Tweak, overclock, or clean the device. Overclocking means increasing the CPU clock frequency of a device so that it works faster.
- Change the Android version. Some users flash third-party firmware ROMs (install different versions of the operating system) created by enthusiasts.
How people gain root privileges
According to our data, people use applications such as Kingroot, 360 Root, Framaroot, Baidu Easy Root, Towelroot, One Click Root, and Mgyun to gain superuser access rights. Unfortunately, many of these applications either show advertisements or install adware on a device. Their behavior is not necessarily malicious, but nothing good comes of it.
We do not recommend using any of those applications for rooting. Well, we don’t recommend rooting at all. Here’s why.
The dangers of rooting
As we said, superuser access rights grant full control over a device. Although that access has some potential advantages (mentioned above), it comes with disadvantages as well.
It is important to understand that having a device with superuser system permissions violates Android’s basic security principles. Rooting is, in effect, do-it-yourself hacking of the operating system of your tablet or smartphone.
Normally, Android apps work in isolated environments (in so-called sandboxes) and cannot gain access to other apps or the system. However, an app with superuser access rights can venture out of its isolated environment and take full control over the device.
With superuser access rights, apps can do whatever they like — for example, view, modify, or delete files, including those that are required for device operation.
Also, note that rooting voids the device’s warranty. Sometimes, the process of rooting can even brick a device, and in that case, you’re simply out of luck; there’s no way you’ll get a refund for it.
Malicious applications and rooted Androids
After gaining superuser access rights, malicious applications enjoy full freedom. In fact, the first thing many Trojans for Android do is attempt to gain root access. Users rooting their own devices offer quite a gift to malware developers.
With superuser access, mobile Trojans can:
- Steal passwords from a browser (as the Tordow banking Trojan did);
- Purchase applications surreptitiously in Google Play (the Guerrilla and Ztorg Trojans did that);
- Substitute URLs in a browser (as the Triada Trojan did);
- Install applications stealthily, including onto system partitions;
- Modify firmware so that Trojans remain on a device even after it is reset to factory settings.
Some ransomware Trojans use superuser access rights to improve their chances of staying in the system.
In most cases, malware is capable of gaining superuser access rights on its own by exploiting vulnerabilities in the system. But some malware applications use existing permissions. Furthermore, according to our data, approximately 5% of malware applications — for example, the Obad mobile Trojan — check devices for root permissions.
The geography of rooting
Our statistics show that rooting is most popular in Venezuela, with 26% of users having rooted smartphones. Algeria takes the lead among African countries, with 19% of smartphones operating with superuser access rights. In Asia, rooting Android is most popular in Bangladesh, with 13% of devices rooted. In Europe, Moldova, at 15%, has the lead.
As for Russia, 6.6% of owners of Android devices use rooted smartphones, which is close to the world average percentage (7.6%). Neither North America nor Western Europe includes any top-rooting countries.
Our statistics show that the top 10 countries where Android devices are rooted most frequently and the top 10 countries where mobile devices are successfully attacked overlaps by 60%. And 9 of the 10 countries with the largest number of rooted devices are in the top 25 countries where devices are attacked most often.
Does antivirus work on a rooted Android device?
Regrettably, although criminals can exploit the advantages of gaining superuser rights and use them to bypass security mechanisms, the good guys still have to play by the rules. In short, antivirus works on rooted devices, but superuser access doesn’t increase its effectiveness.
Of course, how well malware can take advantage of the capabilities of a rooted system varies. But the risk of a security solution letting a threat through on a rooted device is higher than on a device without superuser access rights.
So, should you root your Android device?
Using a system with superuser access rights is similar to driving a heavy truck. If you are really capable of handling that, then why not? But if you aren’t, then get the necessary knowledge and skills first. So if you’re not into IT and don’t consider yourself a pro-user, then we do not recommend rooting Android.
A few more pieces of advice:
- Install applications from official stores only — but even so, don’t trust them blindly. Although the Google Play store is far more trustworthy than random Internet sites, Trojans sometimes get in.
- Limit yourself to known apps from known developers and only those apps that are really needed.
Scan installed apps with a reliable antivirus — for example, our free Kaspersky Internet Security for Android. | <urn:uuid:2bee137f-fab4-4641-a8e4-bfe598bca709> | CC-MAIN-2022-40 | https://www.kaspersky.com/blog/android-root-faq/17135/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337971.74/warc/CC-MAIN-20221007045521-20221007075521-00093.warc.gz | en | 0.920601 | 1,362 | 2.84375 | 3 |
China is set to develop “a prototype of an exascale computer” by the end of this year, the country’s press agency claims.
Meanwhile, Donald Trump’s new administration in the United States is considering slashing existing supercomputing programs at the Department of Energy.
Supercomputers and Superpowers
“A complete computing system of the exascale supercomputer and its applications can only be expected in 2020, and will be 200 times more powerful than the country’s first petaflop computer Tianhe-1, recognized as the world’s fastest in 2010,” Zhang Ting, application engineer with the Tianjin-based National Supercomputer Center, told Xinhua.
The Tianhe-1 uses Knights Corner Xeon Phi chips, and was originally set to be upgraded with Intel’s Knights Landing chips, until the US government banned such exports. Instead, China has focused on creating its own chips to build the Sunway TaihuLight supercomputer, the most powerful computer built to date.
TaihuLight is a 125-petaflop (peak) supercomputer that uses ShenWei 26010 processors.
As China ramps up towards building an exascale computer, the US may be winding its initiatives down. The US Department of Energy previously announced plans to build two GPU-accelerated supercomputers, codenamed ‘Summit’ and ‘Sierra’ for 2018, ahead of reaching exascale by 2023.
Now, these plans are up in the air. The Hill reports that the Trump administration is considering sweeping cuts modeled closely on proposals created by conservative think tank the Heritage Foundation. Among the reduction in federal spending by $10.5 trillion over 10 years, the Department of Energy is set to see some cutbacks.
Funding for nuclear physics and advanced scientific computing research could be brought back to 2008 levels, while the Office of Electricity, the Office of Energy Efficiency and Renewable Energy and the Office of Fossil Energy could all be cut.
Supercomputing resource Top500 notes that funding in 2008 stood at $342 million, approximately half of what it is now. It adds that the Exascale Computing Project simply didn’t exist in 2008.
Should Trump proceed with the plan, it is expected to be made public within the first 45 days of the administration, before going to Congress for a vote. | <urn:uuid:a09f9881-3cb3-44da-89ef-a321c4e97a08> | CC-MAIN-2022-40 | https://www.datacenterdynamics.com/en/news/china-to-launch-exascale-computer-prototype-this-year/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335059.31/warc/CC-MAIN-20220927225413-20220928015413-00293.warc.gz | en | 0.932581 | 498 | 2.78125 | 3 |
The HIPAA Privacy Rule and
Preemption of State Law
The HIPAA Privacy Rule provides a federal floor of privacy protections for individuals’ protected health information (PHI), where that information is held by a covered entity or by a business associate of the covered entity. State laws that are contrary to the HIPAA Privacy Rule are preempted by the federal requirements, unless a specific exception applies. Continue reading for details about the HIPAA privacy act.
The concept of preemption is not specific to HIPAA. The Constitution of the United States contains what is, in effect, a preemption provision. Article 6 of the Constitution contains a clause that is known as the “Supremacy Clause.” The Supremacy Clause states, simply, that the Constitution, and federal laws created under the Constitution, are the “supreme law of the land.” This has been interpreted to mean that a state law that contradicts, or is contrary to, a federal law, is “trumped” by the federal law.
The part of the Constitution that HIPAA was enacted under, is referred to the “interstate commerce clause.” Under this clause, the Congress of the United States has the power to regulate commerce – commercial activity – among the states. Health care transactions contain a commercial component (i.e., people pay for health care; doctors are paid to provide it; payments are made from residents to one state to laboratories headquartered in another state, and so forth) and are therefore regarded as “interstate commerce.”
When is a State Law “Contrary” to the HIPAA Privacy Rule?
A State law is “contrary” to the HIPAA Privacy Rule if:
- It would be impossible for a covered entity to comply with both the State law and the HIPAA Privacy Rule; or
- If the State law is an obstacle to accomplishing the full purposes and objectives of the Administrative Simplification provisions of HIPAA.
For example, a state law that prohibits the disclosure of protected health information (PHI) to an individual who is the subject of the information may be contrary to the HIPAA Privacy Rule, which requires the disclosure of protected health information to an individual in certain circumstances.
The state law is contrary to the HIPAA Privacy Rule because:
- The covered entity cannot, as a simple logistical matter, comply with both the State law and the HIPAA Privacy Rule. If the covered entity discloses the information to the individual under the HIPAA Privacy Rule, the covered entity has failed to comply with the state law. If the covered entity follows the state law and does not disclose the information to the individual, the covered entity has failed to comply with the HIPAA Privacy Rule.
- The state law is an obstacle to accomplishing the purposes and objectives of HIPAA’s administrative simplification provisions. Those provisions were created for the purpose of protecting the privacy of individuals’ PHI, without compromising the ability of individuals to receive and review their own health records.
Are there Exceptions to the HIPAA Privacy Rule’s Preemption of Contrary State Laws?
There are three recognized exceptions to the general rule that the HIPAA Privacy Rule preempts contrary state law. These HIPAA privacy act exceptions include if the state law:
1.Relates to the privacy of PHI and provides greater privacy protections or privacy rights with respect to such information, than the HIPAA Privacy Rule does. As noted above, HIPAA sets a privacy “floor.” States may, if they so choose, to provide greater privacy protections than are provided
- Provides for the reporting of disease or injury, child abuse, birth, or death, or for public health surveillance, investigation, or intervention. Generally, states have the authority to create and enforce laws related to the health and safety of their residents. States also possess what is referred to as the “police power” – the power to define what constitutes a crime, and the power to conduct law enforcement activities, such as criminal investigations.
2.Requires certain health plan reporting, such as for management or financial audits. States possess broad power to regulate insurance companies that do business in the state. This power to regulate includes the power to require health plans to (among other things) conduct and report the findings of financial audits.
Are There Other Exceptions to Privacy Rule “Preemption”?
The Department of Health and Human Services (HHS) may, upon specific request from a state or other entity or person, determine that a provision of state law which is “contrary” to the HIPAA regulations, and which meets certain additional criteria, will not be preempted by the Federal requirements.
Therefore, preemption of a contrary state law will not occur if the HHS Secretary determines, in response to a specific request, that one of the following criteria apply: The state law:
- Is necessary to prevent fraud and abuse related to the provision of or payment for health care,
- Is necessary to ensure appropriate state regulation of insurance and health plans to the extent expressly authorized by statute or regulation,
- Is necessary for state reporting on health care delivery or costs,
- Is necessary for purposes of serving a compelling public health, safety, or welfare need, and, if a HIPAA Privacy Rule provision is at issue, if the Secretary determines that the intrusion into privacy is warranted when balanced against the need to be served, or
- Has as its principal purpose the regulation of the manufacture, registration, distribution, dispensing, or other control of controlled substances.
Compliancy Group Simplifies HIPAA Compliance
Compliancy Group was founded to help simplify the HIPAA compliance challenge. We give health care organizations everything they need to address the full extent of the HIPAA regulations.
Our ongoing support and web-based compliance app, HIPAA The Guard™ Software, gives health care organizations the tools to address the law so they can get back to confidently running their business.
Find out how Compliancy Group has helped thousands of organizations like yours Achieve, Illustrate, and Maintain their HIPAA compliance! | <urn:uuid:89c8dcb2-28cf-4e16-8ccc-c40bc9dc7e80> | CC-MAIN-2022-40 | https://compliancy-group.com/hipaa-privacy-rule/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335326.48/warc/CC-MAIN-20220929065206-20220929095206-00293.warc.gz | en | 0.939125 | 1,257 | 2.609375 | 3 |
How to stop spam emails – Tips and Advice
No one likes receiving spam emails. Not only are they annoying and time-consuming, but they can also be dangerous. It’s estimated that 94% of malware is delivered by spam emails, and other potential dangers include spyware, phishing, and ransomware.
Loosely defined, spam refers to messages you don’t want to receive, which tend to be either commercial or deceptive in nature. Spam is as old as the internet, and despite considerable efforts to overcome it, it remains a problem. In this overview, we explain how to identify spam, how to report spam, how to block unwanted messages, and how to prevent spam.
How to identify spam emails
Sometimes it’s obvious when a message is spam. However, if it isn’t immediately apparent, there are some helpful signs you can look for:
Check the sender’s address
Most spam comes from email addresses that don’t make sense or appear as gibberish – for example, email@example.com or similar. By hovering over the sender's name, which itself may be spelled oddly, you can see the full email address. If you’re not sure if an email address is legitimate or not, you can put it into a search engine it to check.
Consider what kind of information is being requested
Legitimate companies don’t contact you out of the blue via unsolicited emails to ask you for personal information such as banking or credit card details, Social Security number and so on. In general, unsolicited messages telling you to 'verify account details' or 'update your account information' should be treated with caution.
If you need to, navigate to the relevant website yourself – by typing the URL directly into your browser or searching for it via a search engine – and log in to your account without clicking on the link in the email.
Be wary if the message is creating a sense of urgency
Spammers often try to apply pressure by creating a sense of urgency. For example, the subject line may contain words like “urgent” or “immediate action required” – to pressure you into acting.
Check whether the email uses your name
Although some spam messages are sophisticated, a potential red flag is being addressed in vague terms, like “Dear Valued Customer” and so on. Legitimate companies to which you have subscribed will know your name and will address emails accordingly.
Check grammar and spelling
Typos and bad grammar are red flags. So too are odd phrasing or unusual syntax, which might result from the email being translated back and forth through Google Translate several times.
Be wary of attachments
Legitimate organizations avoid sending unsolicited emails with attachments since these are known to carry malware. If you receive an email from an unknown sender urging you to open an attachment, this is a sign of spam. Avoid opening attachments in unknown emails in case they download malware onto your device.
Examples of spam emails and messages
Spam emails fall into different categories. Some of the most common examples include:
- Ads – trying to sell you products or services. Sometimes these may be real but often, they are a scam.
- Spoofed emails – which impersonate legitimate organizations and attempt to fool you into handing over personal or confidential information via phishing.
- Money scams – these range from the notorious ‘Nigerian Prince’ scams to more sophisticated attempts to defraud, such as fake charity appeals.
- Malware warnings – which tell you that you have a malware infection on your devices, such as ransomware or a virus. Often these messages invite you to open an attachment or click on a link – which then downloads malware onto your device.
- Forced or accidental subscriptions - When you buy something online or sign up for a new app, you may inadvertently subscribe to newsletters. Some companies may use opaque tactics, so you subscribe without meaning to.
- Over-the-top promises – such as ‘get rich quick’ schemes, miracle diets, unbelievable discounts and offers, winning a prize draw or lottery, and so on.
- Chain letters – often a far-fetched story you need to pass on, otherwise 'something bad will happen to you.'
How to reduce spam emails
While spam filters – which we explain below – can help reduce spam, there are some email best practices you can follow to prevent spam from reaching you in the first place. These include:
Be selective about what you sign up to:
Only disclose your email address to organizations you trust. When you do share it, only opt-in to newsletters that you want to receive. Remember, reputable companies will make unsubscribing from marketing emails easy and transparent.
Be careful about how you share your email address:
We all share our email address with friends, family, and work contacts. But avoid posting it on public forums or on social media, where bots and spammers may capture it. Try to use your email address for as few services as possible – only the ones you genuinely use.
Use different email addresses for different purposes:
For example, an email address for work, another for close friends and family, and another more disposable one for sign-ups and subscriptions. If the latter is breached, you can abandon it, and it doesn’t pose a risk to your other accounts.
Be careful how you react to spam messages:
If you can avoid doing so, it’s best not to click on or open spam messages. When in doubt, be cautious by deleting messages you are unsure of. Never reply to a spam message – doing so alerts the scammers that yours is a live email address and invites yet more spam. Never click on links or open attachments in spam emails to avoid downloading malware or falling victim to a phishing attack.
Should you unsubscribe from spam emails?
If it’s a marketing newsletter from a company you know, which you no longer want to subscribe to, then you can hit unsubscribe. However, if it’s a spam email, it’s best not to unsubscribe (assuming it even offers this option). This is because any interaction with spammers is bad – unsubscribing lets them know you’re a potential prospect. In some cases, the unsubscribe link itself could be dangerous. It’s best to delete spam emails or using spam filtering or blocking.
How to report spam emails
Simply deleting spam emails won’t stop others from appearing in your inbox. But you can train your email provider to recognize which emails you would like to see and which you don’t. You can do this by using spam reporting features, which vary by the email provider. Some well-known examples include:
How to mark spam messages in Gmail in web browser
- Place a checkmark next to any spam messages in Gmail by selecting the empty box to the left of the email.
- Go to the menu just above your inbox.
- Find the icon that looks like an exclamation symbol ( ! ) in a stop sign.
- Press it to mark the message as spam.
- If you have Gmail keyboard shortcuts enabled, you can also press ! (Shift+1).
- Gmail will confirm that the message and any conversations it's part of have been moved to spam.
How to mark spam messages in Gmail in a mobile web browser
- Place a checkmark in the box to the left of the unwanted message or messages.
- You can also open the message you want to report.
- A new bar will appear, floating in the upper right of the screen. Press the down arrow iconto reveal the rest of the options.
- Select Report spamfrom the new extended menu.
How to mark spam messages in Gmail in the Gmail app
To report a message as spam in the Gmail app for Android and iOS mobile devices:
- In your inbox, tap the initials in front of one or more messages.
- The top menu will shift to show you the options for your selected message (s). Tap the menu icon, designated by three stacked dots, in the upper right corner of the screen.
- Another menu will expand to show an extended set of options. Choose Report spamfrom the list.
How to mark spam messages in Outlook on a web browser
- Log in to your Outlook account.
- Go to your message listand select the junk message. To flag multiple messages as spam at one time, place a checkmark in the circle next to the message.
- Select Junk from the toolbar.
- In the Report as junkdialog box, select Report or Don't report the message to Microsoft.
- The message will move to the Junk Email
- The items in the Junk Emailfolder are deleted after 30 days.
How to mark spam messages in Apple Mail
- On your iPhone, iPad, or iPod touch, open the message, tap the Flag at the bottom, then tap Move to Junk.
- On your Mac, select the message and click the Junk button in the Mail Or you can drag the message to the Junk folder in the sidebar.
- At iCloud.com, select the message, then click the Flag button and choose Move to Junk. Or you can drag the message to the Junk folder in the sidebar.
Over time, your inbox should learn to automatically filter any emails like the ones you have been flagging into your spam folder, which will probably delete anything that’s been in there for longer than 30 days.
It’s also a good idea to look at your spam folder occasionally to make sure that any emails you do want aren't ending up in there by mistake.
How to filter spam emails
As well as marking messages as spam, setting up spam filters is another way to combat spam. Again, they vary by email provider – some examples include:
To configure the Gmail spam filter:
- Log in to your Gmail account.
- Click the Gear icon at the top right and then click Settings.
- Go to Filtersand Blocked Addresses and click Create a new filter.
- In the From section, type the email address of the sender you want to keep out of your spam folder.
- Click Create filter.
To configure the Outlook.com spam filter:
- Go to Settings.
- Select View all Outlook settings.
- Select Mail.
- Select Junk email.
- In the Filters section, select the Block attachments, pictures, and links from anyone not in my Safe senders and domains list check box.
- Select Save.
To configure Apple Mail spam filter:
- To view or edit the junk mail filter, select Preferencesfrom the Mail
- Click the Junk Mail
- Confirm that the box next to Enable junk mail filtering has a checkmark in it. If not, click it.
- Choose from three basic options for how Mail can handle junk:
- Mark as junk mail, but leave in my inbox.
- Move it to the Junk mailbox.
- Perform custom actionsand click Advanced to configure. You can set up additional filters to perform custom actions on junk mail.
- Select any of the exempt messages options to exempt messages from the junk filter. They are:
- Sender of message is in your Address Book or Contacts app.
- Sender of message is in your Previous Recipients.
- Message was addressed using your full name.
To configure the Thunderbird spam filter:
- Go to the Thunderbird hamburger menu and select Options> Account Settings.
- For each account, go to the Junk Settingssection and select Enable adaptive junk mail controls for this account.
How to block unwanted emails
Sometimes, blocking may be a better option when you want to stop receiving messages from unwanted senders. The time to block is when you no longer wish to receive messages from individual senders – these emails won’t look like typical spam, so they may confuse the spam filter more than they help. By contrast, spam emails don’t usually have identifiable email addresses that remain the same, so blocking won't stem the flow of spam.
The process for blocking is different for each email provider and device. For example:
How to block emails in Gmail
- Open the message sent by the unwanted party.
- In the top right corner, click the Moreoption or the three vertical lines.
- Click Block[sender].
- Click on the Report as spamoption if you want to report the message.
How to block emails in Outlook and Hotmail
- Choose a message from the unwanted sender you want to block.
- From the Outlook menu bar, select message > Junk Mail.
- Click on Block Sender. Outlook can then add this sender's email address to the blocked list and filter out any future messages from them.
- If you want to undo this action, go to Tools > Junk Email Preferences. Go to the Blocked Senders tab, click on the specific email address, then choose Remove Selected Sender.
How to block emails on an iPhone or iPad
Blocking unwanted emails using iPhone Contacts:
- Make sure the email address of the person you want to block is in the Contacts app. If it isn’t, you need to make an entry for this sender.
- Open Settings, then Mail > Threading options. Tap Blocked Sender Options.
- Click on Blocked, then choose Add New.
- Tap on the contact you created for the unwanted email sender.
- Enable mail in iCloud so it syncs this preference across all your Apple devices.
Blocking unwanted emails using the Mail app:
- Open an email from the unwanted contact.
- Click on the name of the sender.
- TapFrom in the header and choose Block This Contact.
You can unblock senders by going to Settings > Mail > Blocked. Find the name of the blocked sender and hit Unblock.
Protecting yourself from spam messages
As well as practicing good email security, using spam filters, and blocking unwanted messages, here are three further steps you can take:
Enable multi-factor authentication
Using multi-factor or two-factor authentication means even if a phishing attack compromises your username and password, hackers won't be able to overcome the additional authentication requirements tied to your account.
Consider using a third-party spam filter
Your email service provider may have its own filter, but using it with anti-spam software can provide an additional layer of cybersecurity. This is because emails travel through two spam filters before they reach your inbox. So if junk gets through one spam filter, the other should catch it. Look for an anti-spam filter that works with your email provider.
Use comprehensive antivirus software
Suppose you do fall victim to a spam email by clicking on a malicious link or inadvertently downloading malware. In that case, a good antivirus solution such as Kaspersky Total Security will recognize the malware and prevent it from damaging your device or network. | <urn:uuid:01e6b6b3-376a-4eac-9732-141e16425a59> | CC-MAIN-2022-40 | https://www.kaspersky.com/resource-center/preemptive-safety/protect-yourself-from-spam-mail-using-these-simple-tips | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335326.48/warc/CC-MAIN-20220929065206-20220929095206-00293.warc.gz | en | 0.885078 | 3,135 | 2.796875 | 3 |
IPv6 (Internet Protocol Version 6), the “next generation” Internet
standard, has been under development now since the mid-1990s. Yet despite
current testbed activity worldwide, and an emerging push from Asia and
Europe, IPv4 (Internet Protocol Version 4) continues to hold sway. As new
applications and infrastructure roll out over the next few years, though,
more enterprise network managers in the US will face the challenges of IPv6
So why move to IPv6 anyway, and why the long delay? IPv6 was born out of
concern that the demand for IP addresses would exceed the available supply.
However, in the US, at least, this hasn’t turned into much of a problem at
Most large enterprises have managed to garner large chunks of contiguous IP
addresses by nailing down Class A and Class B networks. Protocols such as
NAT (Network Address Translation), CIDR (Classless Inter-Domain Routing),
and NAPT (Network Address Port Translation) have meanwhile been created,
also helping to curb the need for IPv6’s new addressing scheme.
Some countries in Asia and Europe, however, are already claiming an IP
addressing pinch. “Asia, in particular, is encountering some real problems
with address space depletion. IPv4 address space is largely tilted toward
the US, because we’re the ones who ‘invented’ the Internet,” says Rob
Batchelder, research director, Internet infrastructure, Gartner Group.
In fact, the governments of Japan and Korea have mandated national
migration to IPv6 by 2005. “I would argue that, by requiring use of IPv6 in
these countries, (Japan and Korea) also know they’ll get the industry
behind it. This will help drive consumption in other parts of the world,”
predicts John Longo, VP of data services for Global Crossing.
The biggest benefit of IPv6 is replacement of IPv4’s 32-bit
address scheme with a much longer, 128-bit address scheme. A 32-bit
adress scheme allows for a total of 2^32 addresses, while IPv6 allows
for 2^128 total addresses. “You’ll now have addresses for every penny
and every speck of dust,” quips Frank Arundell, director of business
development at Stealth Communications.
All kidding aside, IPv4 will certainly expand the universe of possible IP
addresses for cell phones, PDAs, and consumer appliances, including
refrigerators and TV sets, for instance. Some players in the airline
industry are even eyeing IPv6 addressing as a possible means of tracking
passengers and monitoring airline instrumentation.
IPv6 offers other technical advantages, too. For example, headers will be
simplified to seven fields, instead of the 13 fields in IPv4, bringing less
overhead than would otherwise be expected from headers for 128-bit
Header fields will include a “traffic class field,” also known as a
“priority field,” capable of distinguishing between real time traffic such
as video and lower priority transmissions that can be slowed down during
peak congestion periods.
There are three types of IPv6 addresses: unicast; anycast; and multicast.
The new anycast addresses enable a packet sent to a group of anycast
addresses to be delivered to one member of the set. IPv6 does away with
IPv4’s broadcast addresses, rolling their functionality into multicast
On the security side, IPv6 adds two new extension headers. The
“authentication header” provides built-in authentication and integrity
(without confidentiality). The “encapsulating security header,” on the
other hand, supplies confidentiality and integrity..
Despite these and other efficiencies, though, migration to IPv6 is bound to
be gradual in the United States. “If migrating to IPv6 was easy to do, it
would have been done a long time ago. It’s almost like saying that,
starting tomorrow, everyone in the United Kingdom will have to start
driving on the right hand side of the road. Overnight, you’d have to change
all the exits and move all the traffic signs to the other side of the
street,” notes John O’Keefe, president and CEO of Fine Point Technologies.
Devices with IPv6 protocol stacks will be able to automatically obtain
routable addresses. If companies change their ISPs and need to renumber,
computers will automatically reconfigure themselves. Still, though, changes
will need to be made to router settings, firewall rules, and hard-coded
IPs. Global updates to DNS entries will continue to take days to weeks.
The infrastructure for IPv6 is now under way. The IETF (Internet
Engineering Task Force), author of IPv6, has finalized the protocol,
although approval is still awaited on a number of proposed specifications,
including RIPng for IPv6; IPv6 over IPv4 clouds; and FDDI transmission.
Router manufacturers such as Cisco and Juniper Networks have already
started to comply with the emerging standard. So, too, have OS like Sun
Solaris, Microsoft Windows XP, and Linux.
IPv6 applications are expected to be the strongest driver, but these have
yet to appear. Even in the US, however, companies have started to test
interoperability and/or applications on testbed IPv6 networks.
For instance, Stealth Communications, a New York City-based ISP, also
operates NY61X, an Internet exchange point with links to more than 50
different IPv4 networks and just as many IPv6 nets. The IPv6 interconnects
include DEFENSENET; IIJ (Internet Initiative Japan); Sprintlink; UUNET; and
Finland’s TELIA, for instance.
6bone, on the other hand, is an experimental worldwide IPv6 network.
Participants include more than 180 organizations from the US alone. AOL,
BellSouth, IBM, Motorola, Microsoft, Xerox PARC, and DREN (Defense Research
and Engineering Network) are just a few. | <urn:uuid:f1f85f1a-cbbc-41cc-9632-f09aafcf646d> | CC-MAIN-2022-40 | https://www.enterprisenetworkingplanet.com/standards-protocols/ipv6-what-you-need-to-know/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337404.30/warc/CC-MAIN-20221003070342-20221003100342-00293.warc.gz | en | 0.910698 | 1,341 | 2.546875 | 3 |
International Business Machines Corp is considering adopting the underlying technology behind bitcoin, known as the “blockchain,” to create a digital cash and payment system for major currencies, Reuters first reported.
The blockchain – a ledger, or list, of all of a digital currency’s transactions – is viewed as bitcoin’s main technological innovation, allowing users to make payments anonymously, instantly, and without government regulation.
Rather than stored on a separate server and controlled by an individual, company, or bank, the ledger is open and accessible to all participants in the bitcoin network.
The proposed digital currency system would work in a similar way.
“When somebody wants to transact in the system, instead of you trying to acquire a bitcoin, you simply say, here are some U.S. dollars,” the source said. “It’s sort of a bitcoin but without the bitcoin.”
While cryptocurrency still struggles with regulatory issues, IBM is one of a number of tech companies looking to expand the use of the blockchain technology beyond bitcoin, the digital currency launched six years .
The project is still in the early stages and constantly evolving, with concerns about money-laundering and criminal activities yet to be resolved.
Unlike bitcoin, where the network is decentralized and there is no overseer, the proposed digital currency system would be controlled by central banks. “These coins will be part of the money supply,” the source said. “It’s the same money, just not a dollar bill with a serial number on it, but a token that sits on this blockchain.”
According to the plans, the digital currency could be linked to a person’s bank account, possibly using a wallet software that would integrate that account with the proposed digital currency ledger.
“We are at a tipping point right now. It’s making a lot more sense for some type of digital cash in the system, that not only saves our government money, but also is a lot more convenient and secure for individuals to use”.
Bank of England had also showed interest in the blockchain open ledger, describing it as a “significant innovation” that could transform the financial system.
(image credit: born1945) | <urn:uuid:83d15404-d661-4870-9709-9fcdae50d967> | CC-MAIN-2022-40 | https://dataconomy.com/2015/03/ibm-to-adopt-underlying-technology-behind-bitcoin-to-create-digital-cash/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337516.13/warc/CC-MAIN-20221004152839-20221004182839-00293.warc.gz | en | 0.937545 | 464 | 2.765625 | 3 |
Without a doubt, Emergency Medical Services (EMS) personnel such as paramedics and emergency medical technicians are everyday heroes. They are the first people the community calls for help during emergencies. These workers sacrifice their well-being to ensure that the sick and injured receive timely and appropriate emergency medical care. The job is so challenging and risky that the rates of violence-related injury, as well as occupational injury and fatality among EMS personnel, are much higher than the average for all occupations.
Despite such risks, however, becoming an EMS worker remains a noble and highly fulfilling career. After all, very few jobs allow one to save lives on an almost daily basis. Perhaps you have a heart for service and dream of entering this worthy profession. Or maybe you want to know more about this job. Whatever the case, this article will help you discover five essential principles all EMS personnel follow.
Provide Accurate and Timely Patient Care Reports (PCRs) to Subsequent Caregivers
EMS documentation may seem irrelevant to the inexperienced, but nothing can be farther from the truth. The patient care reports (PCRs) recorded by the EMS workers are crucial to providing appropriate and continuing care to the patient. Professionals in the emergency department, trauma centers, and similar medical facilities will rely on the information contained in PCRs as they assess the patient’s condition and decide on suitable interventions.
Imagine what would happen if an ER doctor were to accidentally administer a medication that could negatively interact with a drug given by the first responder because the doctor did not read about the medications already given to the patient. Given the importance of patient care reports, EMS providers should always create accurate PCRs and make this document easily accessible to subsequent caregivers. Doing this task manually can be taxing, which is why many EMS organizations make use of a modern EMS ePCR software to create efficiency in data collection and patient information sharing by everyone involved in the care of the patients.
Treat Every Patient with Respect
Although giving due respect to the patient may seem like a given for EMS personnel, this principle is worth highlighting. EMS workers should always keep in mind that patients have a say in the type of medical care they want to receive. Even if they know what the patient needs, EMS professionals can only provide advice initially. They cannot perform any procedure or offer any intervention without the patient’s consent, unless the person lacks the ability to make rational decisions. Recognizing the patient’s bodily autonomy is just one aspect of respect. EMS personnel should also treat the patient as they would want to be treated—with compassion, dignity, and care.
Provide Services without Discrimination
EMS workers should provide medical care to anyone who needs it, regardless of race, politics, gender identity, or socioeconomic status. Their sole focus should always be on addressing the needs of the patients. They must never let their beliefs, biases, and prejudices influence their judgment or behavior toward somebody in need. In other words, they cannot let their feelings get in the way of what they are supposed to do. It is their duty to provide the same level of care to anyone who seek their help.
Practice Empathy at All Times
Patients are not mere jobs or projects to complete. They are people with feelings. As such, one of the rules that EMS workers should never forget is being empathetic or understanding the feelings and needs of the people in their care—both spoken and unspoken. This could mean offering to hold the patient’s hand during medical procedures or staying by their side until they are safely transported to the hospital. Ultimately, empathy involves treating every patient like a beloved family member and going the extra mile to make them feel comfortable and safe.
Keep Behavior and Appearance in Check
They may not realize it, but EMS personnel offer more than emergency care to the person in need. They are also a source of hope and stability in an otherwise chaotic situation. A team of EMS professionals who are calm, polite, and empathetic toward a patient and family members can readily ease tension and bring order to any scene, even tragic ones. On the contrary, emergency crews who are inattentive, insensitive, and disrespectful in carrying out their duties can cause more physical and emotional pain to everyone involved. That is why EMS practitioners should always be wary of their behavior, words, and appearance whenever they are out helping people.
The principles discussed above are just a few of the more essential rules every EMS practitioner should follow. If you ever wish to pursue this career, you have to take all these principles and a few more unwritten rules to heart. Keep in mind that the EMS personnel are admirable not only because they get to impact lives in remarkable ways, but more so because they adhere to high ethical standards no matter how difficult they are. | <urn:uuid:41d74c94-75d2-48e8-9b92-3bb7c1f7ace0> | CC-MAIN-2022-40 | https://coruzant.com/opinion/5-important-principles-all-ems-personnel-follow/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334528.24/warc/CC-MAIN-20220925101046-20220925131046-00493.warc.gz | en | 0.96207 | 973 | 2.703125 | 3 |
A Sandia researcher has developed algorithms that separate robotic Web crawlers from people using browsers, a first step toward identifying spear-phishing sources and targets.
Sandia National Laboratories, like many government agencies, gets thousands of visitors each day searching its websites — some human-generated traffic coming through browsers, and some Web crawlers or bots that could be up to no good.
In order to protect the network, analysts have to sift the bot traffic, which can contain various threats, from legitimate human-directed browser traffic.
But even the best security system can be defeated by a gullible user taken in by a spear phishing attack, one that targets specific e-mail addresses that have something the sender wants.
Sandia computer science researcher Jeremy Wendt wants to reduce the number of visitors that cyberanalysts have to check by identifying the bots. He has developed algorithms that separate robotic Web crawlers from people using browsers, according to the lab. Wendt said he believes his work will improve security because it allows analysts to look at the two groups separately and then identify the possible sources of spear phishing.
According to Sandia cybersecurity's Roger Suppona, the ability to identify the possible intent to send malicious content might enable security experts to alert a potential target. “More importantly, we might be able to provide specifics that would be far more helpful in elevating awareness than would a generic admonition to be suspicious of incoming e-mail or other messages,” he said.
According to its Web logs, the lab said its site traffic is about evenly divided between Web crawlers and browsers. Wendt is looking for a computer that doesn’t identify itself or says it’s one thing but behaves like another, and trolls websites in which the average visitor shows little interest.
Some of the differences between bots and browsers include:
Range: Crawlers tend to go all over; browsers concentrate on one place, such as jobs.
Volume: When bots try to index a site, they pull down HTML files far more often than browsers do.
Identification: Browsers often give their browser name and operating system information. Crawlers identify themselves by program name and version number.
Behavior: Browsers go after only one page but want all images, code and layout files for it instantly, or as Wendt calls the behavior, "bursty." Bot requests, on the other hand, are not bursty, and none of the bots identified had a high burst ratio.
Now Wendt needs to bridge the gap between splitting groups and identifying targets of ill-intentioned e-mails. He has submitted proposals to further his research after the current funding ends this spring.
“The problem is significant,” he said. “Humans are one of the best avenues for entering a secure network.” | <urn:uuid:05563aa3-8291-4b37-8c2f-ab976730104d> | CC-MAIN-2022-40 | https://gcn.com/cybersecurity/2013/02/curb-spear-phishing-separate-bots-from-browsers/317300/?oref=gcn-next-story | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334915.59/warc/CC-MAIN-20220926175816-20220926205816-00493.warc.gz | en | 0.943787 | 582 | 2.71875 | 3 |
Route Filtering and Manipulation
Route filtering is a method for selectively identifying routes that are advertised or received from neighbor routers. Route filtering may be used to manipulate traffic flows, reduce memory utilization, or to improve security. For example, it is common for ISPs to deploy route filters on BGP peerings to customers. Ensuring that only the customer routes are allowed over the peering link prevents the customer from accidentally becoming a transit AS on the Internet.
Filtering of routes within BGP is accomplished with filter-lists, prefix-lists, or route-maps on IOS and NX-OS devices. IOS XR uses route policies for filtering of routes. Route-filtering is explained in more detail in Chapter 4, “Troubleshooting Route Advertisement and BGP Policies.”
Depending on the change to the BGP route manipulation technique, the BGP session may need to be refreshed to take effect. BGP supports two methods of clearing a BGP session: The first method is a hard reset, which tears down the BGP session, removes BGP routes from the peer, and is the most disruptive. The second method is a soft reset, which invalidates the BGP cache and requests a full advertisement from its BGP peer.
IOS and NX-OS devices initiate a hard reset with the command clear ip bgp ip-address [soft], and the command clear bgp ip-address [graceful] is used on IOS XR nodes. Soft reset on IOS and NX-OS devices use the optional soft keyword, whereas IOS XR nodes use the optional graceful keyword. Sessions can be cleared with all BGP neighbors by using an asterisk * in lieu of the peer’s IP address.
When a BGP policy changes, the BGP table must be processed again so that the neighbors can be notified accordingly. Routes received by a BGP peer must be processed again. If the BGP session supports route refresh capability, then the peer readvertises (refreshes) the prefixes to the requesting router, allowing for the inbound policy to process using the new policy changes. The route refresh capability is negotiated for each address-family when the session is established.
Performing a soft reset on sessions that support route refresh capability actually initiates a route refresh. Soft resets can be performed for a specific address-family with the command clear bgp address-family address-family modifier ip-address soft [in | out]. Soft resets reduce the amount of routes that must be exchanged if multiple address families are configured with a single BGP peer. Changes to the outbound routing policies use the optional out keyword, and changes to inbound routing policies use the optional in keyword.
Older IOS versions that do not support route refresh capability require the usage of inbound soft reconfiguration so that updates to inbound route policies can be applied without performing a hard reset. Inbound soft reconfiguration does not purge the Adj-RIB-In table after routes process into the Loc-RIB table. The Adj-RIB-In maintains only the raw unedited routes (NLRIs) that were received from the neighbors and thereby allows the inbound route policies to be processed again.
Enabling this feature can consume a significant amount of memory because the Adj-RIB-In table stays in memory. Inbound soft reconfiguration uses the address-family command neighbor ip-address soft-reconfiguration inbound for IOS nodes. IOS XR and NX-OS devices use the neighbor specific address-family command soft-reconfiguration inbound. | <urn:uuid:7202659c-1581-48f4-838c-56313cfb5794> | CC-MAIN-2022-40 | https://www.ciscopress.com/articles/article.asp?p=2756480&seqNum=6 | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334915.59/warc/CC-MAIN-20220926175816-20220926205816-00493.warc.gz | en | 0.890846 | 749 | 3.234375 | 3 |
The raw power of increasingly advanced quantum computers could necessitate advances in software to make sense of the noise.
Quantum computers certainly seem like strange devices. For humans used to living in a world driven by Newtonian physics, having a device dip into the world of quantum physics—where the rules are different and sometimes even counterintuitive—can seem inexplicable. And when those same devices actually solve complex problems and provide answers, it almost begins to border on magic.
Not too many years ago, there were still scientists who thought that quantum computing was a hoax. Quantum machines are built to run deep inside black boxes and must operate in a totally dark vacuum at temperatures close to absolute zero. So you can’t watch them as they work. They have to be designed that way, because their computing power is tied to putting atoms or electrons into a state called superposition, which is incredibly fragile. Almost anything can strip away that property and imprison atoms back into their normal, single state of being that makes up our Newtonian-physics based reality. Beams of light, heat, soundwaves, slight vibrations, air molecules or even radiation can devastate superposition in a process called decoherence.
These days, very few people doubt the existence of quantum computers. In 2019, Google, in partnership with NASA, achieved quantum supremacy by designing a quantum machine that could solve a problem that would have taken a traditional supercomputer thousands of years. That milestone puts the United States well ahead of other countries in the race to create powerful, and more useful, quantum computers.
In this country, most work on quantum computers is being undertaken by private companies and universities with heavy backing from the government. That is in contrast to most other rival nations like China and Russia, which are investing billions directly into government labs. Our approach seems to be working better. A recent report commissioned by the Department of Defense and conducted by the RAND Corporation shows that the United States leads the world in most key areas of quantum computing.
Most of the developments made so far in the quantum computing world have been because of improvements in hardware. Quantum computers use qubits, which are kind of like binary bits in traditional digital computers. They are powerful because a quantum device is designed to let the qubit—which can be something like a polarized photon or the spin of an electron—exist in multiple states at the same time. Instead of a digital computer’s bit that represents either a one or a zero, qubits can be both at the same time, plus everything in between. And having more qubits has so far equated to more computing power.
The Google quantum computer that achieved supremacy had 53 qubits. IBM recently announced a quantum computer with 127 qubits that is thought to be the largest in the world, although D-Wave is working on a new machine with thousands of qubits. There is some discrepancy about the numbers because of the vastly different ways companies can create qubits, but basically, more qubits means more power.
Software fixing hardware
However, while adding more qubits certainly gives more power, it does not make up for the inherent problems associated with quantum computers, with one of the biggest being that they are very prone to errors. Or, more accurately, they are difficult to understand and program so that errors don’t occur within their output. All quantum computers generate “noise” to some extent. They may return a correct answer to a question, but they will also send back a lot of useless junk, with the actual solution mixed in with it. Then it becomes a matter of trying to separate a needle from a haystack, or even a needle from a stack of other needles. Because of that, adding more qubits may not help the situation.
It's been suggested that artificial intelligence running on traditional computers could be employed to analyze the answers returned by quantum machines. That might make it easier to eliminate the noise more quickly than trying to do it by hand, but does not address the fundamental problem of inaccurate answers coming from quantum machines.
Instead of adding more qubits, the solution to this predicament might actually be software-based, letting programmers ask better questions so that noise is reduced or eliminated from the start. One of the reasons for all the errors is that the qubits can become entangled. This is a state where even if two qubits are physically separated, the actions of one can change the other. Albert Einstein amusingly described that property as “spooky action at a distance.” In practical terms, if you are accepting data generated from one qubit, but don’t know that it’s entangled with another, then there is a good chance that the data is being corrupted, but you may not know it.
Right now, scientists basically need to guess at how qubits are entangled and try to act accordingly. So it’s like trying to write a program to run on a machine where the rules are not completely known, and may change. Hence, a lot of noise gets returned with the results, regardless of the size of the quantum machine. And bigger machines could make the problem worse.
To try and compensate, scientists and researchers at The Massachusetts Institute of Technology recently unveiled a new programming language called Twist at the 2022 Symposium on Principles of Programming conference in Philadelphia. Right now, there is nothing quite like Twist. Most quantum computer programmers use assembly languages, or something like them, where they have to string a bunch of processes together without the benefit of much orchestration. They have to guess at the entanglements based on their observations of the data being generated.
Twist is designed to help scientists discover which qubits in their machines become entangled when working on a problem, and then take specific actions, like only accepting data from an unentangled qubit. The language of Twist mirrors other common programming languages and is designed to be easy for skilled coders to pick up.
“Our language Twist allows a developer to write safer quantum programs by explicitly stating when a qubit must not be entangled with another,” said MIT PhD Student Charles Yuan in MIT News. “Because understanding quantum programs requires understanding entanglement, we hope that Twist paves the way to languages that make the unique challenges of quantum computing more accessible to programmers.”
In the same MIT News article about the new language, Fred Chong, the Seymour Goodman Professor of Computer Science at the University of Chicago, talked about why Twist and other software developments may be just as important in the long run as putting more and more qubits into play.
“Quantum computers are error-prone and difficult to program. By introducing and reasoning about the purity of program code, Twist takes a big step towards making quantum programming easier by guaranteeing that the quantum bits in a pure piece of code cannot be altered by bits not in that code,” Chong explained.
As the hardware side of quantum computers continues to evolve, better software may be needed to help focus all of that raw power and potential. Twist may eventually seem like a small step towards that goal, but it’s undoubtedly a critically important one.
John Breeden II is an award-winning journalist and reviewer with over 20 years of experience covering technology. He is the CEO of the Tech Writers Bureau, a group that creates technological thought leadership content for organizations of all sizes. Twitter: @LabGuys | <urn:uuid:d4ac9b69-d726-47f0-98e4-2f3df7831e47> | CC-MAIN-2022-40 | https://gcn.com/emerging-tech/2022/02/next-big-quantum-leap-may-require-better-software/362448/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335059.43/warc/CC-MAIN-20220928020513-20220928050513-00493.warc.gz | en | 0.950366 | 1,503 | 3.40625 | 3 |
Important specifications for fiber optic power meters include wavelength range, optical power range, power resolution, and power accuracy. Some devices are rack-mounted or handheld. Today we will focus on fiber optic power meters.
What does the Optical Power Meter (OPM)mean?
An optical power meter (OPM) is a device used to measure the power in an optical signal. The term usually refers to a device for testing average power in fiber optic systems. Another general purpose light power measuring devices are usually called radiometers, photometers, laser power meters (can be photodiode sensors or thermopile laser sensors), light meters or lux meters.
An optical power meter (OPM) is a testing instrument used to accurately measure the power of fiber optic equipment or the power of an optical signal passed through the fiber cable. It also helps in determining the power loss incurred to the optical signal while passing through the optical media. An optical power meter is made up of a calibrated sensor that measures the amplifier circuit and a display. The sensor normally consists of a silicon (Si), germanium (Ge) or indium gallium arsenide (InGaAs) based semiconductor. The display unit shows the measured optical power and the corresponding wavelength of the optical signal.
Explains Optical Power Meter (OPM)
OPM calibrates the wavelength and measures the power of an optical signal. Before testing, the required wavelength is set manually or automatically. Accurate calibration of the signal wavelength is necessary for accurate measurement of power level, otherwise, the test may yield false reading.
Different sensor types used in OPMs have different characteristics. For example, Si sensors tend to become saturated at low power levels and can only be used in 850-nanometer bands, while Ge sensors saturate at high power levels, but perform poorly at low power.
To calculate the power loss, OPM is first connected directly to an optical transmission device through a fiber pigtail, and the signal power is measured. Then the measurements are taken through OPM at the remote end of the fiber cable. The difference between the two measurements displays the total optical loss the signal incurred while propagating through the cable. Adding up all the losses calculated at different sections yields the overall loss incurred to the signal.
Three types of equipment can be used to measure optical power loss:
- Component equipment – Optical Power Meters (OPMs) and Stabilized Light Sources (SLSs) are packaged separately, but when used together they can provide a measurement of end-to-end optical attenuation over an optical path. Such component equipment can also be used for other measurements.
- Integrated test set – When an SLS and OPM are packaged in one unit, it is called an integrated test set. Traditionally, an integrated test set is usually called an OLTS. GR-198, Generic Requirements for Hand-Held Stabilized Light Sources, Optical Power Meters, Reflectance Meters, and Optical Loss Test Sets, discusses OLTS equipment in depth.
- An Optical Time Domain Reflectometer (OTDR) can be used to measure optical link loss if its markers are set at the terminus points for which the fiber loss is desired. However, a single-direction measurement may not be accurate if there are multiple fibers in a link since the backscatter coefficient is variable between fibers. The accuracy of such a measurement can be increased if the measurement is made as a bidirectional average of the fiber. GR-196, Generic Requirements for Optical Time Domain Reflectometer (OTDR) Type Equipment, discusses OTDR equipment in depth.
It can experiment at Voice, data, and video signal synchronous measurement and display on BPON/EPON/GPON.
Providing simultaneous measurement for all three wavelengths on the fiber (1490nm, 1550nm,1310nm )
Used in Burst mode measurement of 1310nm upstream.
Use the software connect with PC, setting the threshold, data transfer, and calibration the wavelength.
USB communication port enables data transfer to PC.1000measurement items can be saved in 3213 PON power meter or computer for data review.
With optical power meter module, include 850、1300、1310、1490、1550、1625sixs( 3213AP,3213A without 850nm wavelength);With visual fault locator modual(3213and3213AV)Optical power meter and VFL with one port.(only 3213A)
Optional Chinese/English display.
Offers up to 10 different threshold sets in total, Three status LEDs represent different optical signal conditions of Pass, Warn and Fail respectively.
10 minutes Auto-off function can be activated or deactivated
Good key design, high sensitivity, greatly reducing the volume and weight of the tester.
Different models corresponding to the different function, according to own use to choose.
When you install and terminate fiber optic cables, you always have to test them. A test should be conducted for each fiber optic cable plant for three main areas: continuity, loss, and power. Fiber-Mart offers a full range of optical power meters to support FTTx deployments, fiber network testing, certification reporting capabilities, and basic power measurements. | <urn:uuid:feceedf9-0501-4fe8-9265-aa9c9fc5de33> | CC-MAIN-2022-40 | https://www.fomsn.com/test-and-measurement/fibernetworks/fiber-optic-power-meter/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335059.43/warc/CC-MAIN-20220928020513-20220928050513-00493.warc.gz | en | 0.875576 | 1,096 | 3.875 | 4 |
In the last couple of days you cannot fail to have seen the huge number of media articles about the so-called Heartbleed bug. In this article, we’ll try and answer some of the common questions that users of Apple products have raised about this issue.
What is the Heartbleed bug?
The Heartbleed Bug is a serious vulnerability that could lead to malicious hackers spying on what were thought to be secure Internet communications. A programming bug in the widely-used OpenSSL software library could allow information to be stolen, which—under normal conditions—would be protected by SSL/TLS encryption.
Typical information which could be stolen includes email addresses and passwords, and private communications; data which normally you expect to be transmitted down the equivalent of a “secure line.”
As well as “Heartbleed,” the bug is also known officially by the rather nerdy name of CVE-2014-0160.
How long has this bug existed? It sounds like it’s really bad.
Yes, it is really bad. I hope you’re sitting down. It looks like it’s been around for two years.
Does that mean people have been able to scoop up private information for the last couple of years?
Has that been happening? I mean, have bad guys been stealing information this way?
We simply don’t know. Exploitation of the bug leaves no trace, so it’s hard to know if anyone has been abusing it. However, lots of people have demonstrated in the last couple of days that the bug can be exploited, and they’ve proven that it works.
What versions of OpenSSL are vulnerable?
OpenSSL 1.0.1 through 1.0.1f (inclusive) are vulnerable. OpenSSL 1.0.1g, OpenSSL 1.0.0 branch and OpenSSL 0.9.8 branch are NOT vulnerable.
Am I at risk if I use a Mac? What about an iPhone or iPad?
Unfortunately this bug doesn’t care what kind of device you are using to communicate via the Internet. This means that iPhones, iPads and Macs are just as much at risk as, say, a computer running Windows 8.1.
Is there a fix?
Yes. A new version of OpenSSL, version 1.0.1g, was released this week. Internet companies are scrabbling to update vulnerable servers and services. Some sites weren’t vulnerable in the first place, others have since fixed their systems.
Have any big websites been shown to be vulnerable to the Heartbleed bug?
Is Yahoo big enough for you? Some researchers have uncovered hundreds of Yahoo users’ passwords and email addresses by exploiting the flaw. Other big websites reported to have been affected include Flickr, Imgur, OKCupid, Stackoverflow and Eventbrite.
Can Apple roll out the patch for the bug?
Unfortunately this isn’t a bug in Apple’s software or hardware. The bug exists in open source software that some web servers and networked appliances use to establish secure SSL connections. In other words, there is no patch for your computer or smartphone or tablet computer, as the problem exists on the websites themselves.
There is a version of OpenSSL shipped with OS X Mavericks 10.9, but it is unaffected by the bug.
How can I test whether a website is impacted by the Heartbleed bug or not?
Are Apple’s own website secure, or are they affected by the vulnerability?
Tests indicate that Apple’s own websites are not impacted by the bug.
Where can I find out more about Heartbleed?
Check out this webpage all about the Heartbleed bug by the folks at Codenomicon. | <urn:uuid:59945f72-c43c-4633-921b-2382891f161c> | CC-MAIN-2022-40 | https://www.intego.com/mac-security-blog/heartbleed-openssl-bug-faq-for-mac-iphone-and-ipad-users/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335059.43/warc/CC-MAIN-20220928020513-20220928050513-00493.warc.gz | en | 0.943136 | 797 | 2.75 | 3 |
Sleep deprivation is the condition of not having enough sleep. It’s impact severe on our brain functions, with dangerous consequences arising in daily life.
However, many studies focused on how sleep affects on our body function, and how it impacts the brain.Some studies suggest that sleep may be key in visual learning, memory consolidation, and necessary unlearning.
New research from the David Geffen School of Medicine at the University of Los Angeles, California (UCLA) confirms that sleep-deprived people experience memory lapses and also faces distorted visual perception, the communication between neurons is temporarily impaired.
Senior study author Dr. Itzhak Fried, said, we discovered that starving the body of sleep also robs neurons of the ability to function properly. This paves the way for cognitive lapses in how we perceive and react to the world around us.
Origin of seizures
The researchers tested on 12 epilepsy patients, who had electrodes implant in their brains, for testing origin of seizures.
While in experiment, researchers given a categorization task to participants in which they had to sort different images into categories as quickly as possible.
In task, the researchers focused on the electrical activity in the temporal lobe of the brain, which associate with memory and visual recognition. However, the scientists noticed that the sleepier and tired participants perform slowly.
Co-author Dr. Yuval Nir, said, the act of seeing the pedestrian slows down in the driver’s over-tired brain. It takes longer for his brain to register what he’s perceiving.”
The study also found that brain cells that took longer to respond associate with slower brain waves.
“Slow sleep waves disrupted the patients’ brain activity and performance of tasks,” explains Dr. Fried. This phenomenon suggests that select regions of the patients’ brains dozing, causing mental lapses.
The researchers demand that sleep deprivation should take much more seriously than it currently, given its real dangers.
More information: [nature medicine] | <urn:uuid:372ff376-0263-4532-a683-ae7dc5b0eb8a> | CC-MAIN-2022-40 | https://areflect.com/2017/11/08/sleep-deprivation-damages-your-brain-cell-communication/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335504.22/warc/CC-MAIN-20220930181143-20220930211143-00493.warc.gz | en | 0.933939 | 417 | 3.796875 | 4 |
Before we jump into the third part of this GDPR , let’s take a moment to think about a few questions. Such as, why are compliance mandates necessary? Are they framed to just prevent data breaches? Are compliance mandates established to just detect and report security attacks? I would say no!
The primary objective of compliance mandates is to help enterprises prove that all is well within their network. Yes! You read it right. IT regulatory mandates are the checklists that help organizations show auditors that security measures are intact and that their network is safe and sound.
With that said, let‘s now focus on the GDPR’s requirements, specifically establishing technical and organizational measures to streamline organizations’ auditing processes.
The appropriate technical and organizational measures to tackle Article 32
The GDPR outlines requirements to ensure personal data safety. Whether it‘s getting consent from data subjects, storing personal data, appointing a data protection officer if needed, or notifying concerned officials in the event of a data breach, the GDPR covers most security aspects enterprises have to look into.
But the GDPR isn’t as clear when defining the technical and organizational measures that a company should adopt. Here are two possible reasons the GDPR is less clear in this aspect:
- Reason #1: There are plenty of applications and platforms that help store personal data. Defining how to adopt policies for each platform or application would make the GDPR adoption process overly complicated. Therefore, the GDPR only outlines the general auditing and security policies that enterprises need to adopt.
- Reason #2: Security threats and data breaches are dynamic. There aren’t any hard and fast rules that define attack prevention. With that said, the best thing for enterprises to do is to adopt regular reviewing and auditing practices for monitoring each of their platforms that handle personal data. Restricting the adoption of best practices to specific applications or platforms would leave a big security loophole.
What does “appropriate technical and organizational measures“ actually mean?
You could store personal data in a database, such as MS SQL or Oracle Database, a file server, or even in a cloud environment. No matter where you store the data, make sure that the following measures are taken to ensure data safety.
- Control who gets to access personal data: Devise proper access controls and restrict personal data access. Grant personal data handling access only to privileged users.
- Audit user behavior: Keep track of when users:
- Access your organization’s personal data storage platform (whether that’s a server, database, or cloud application).
- Alter personal data (E.g. modify, delete, or rename files).
- Perform access modifications, permission changes, and privilege escalations with respect to personal data access.
- Get real-time insights: Ensure that you’ve established a system that notifies you in real time about any abnormal or suspicious activities such as personal data deletion.
- Always have a plan B: No matter what, be sure to retain data backups. That way, you can restore personal data in the event of data loss. Note that you need to get proper consent from data subjects before backing up their personal data. You must also ensure that your back ups are protected from tampering.
Stay tuned for the fourth and final installment of this blog series on the GDPR. We will be discussing the most debated requirement, notification of personal data breaches. | <urn:uuid:f422f228-d06f-4b57-9ac5-8d0e5874400f> | CC-MAIN-2022-40 | https://blogs.manageengine.com/it-security/eventloganalyzer/2017/08/28/getting-to-know-the-gdpr-the-technical-and-organizational-measures.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335504.22/warc/CC-MAIN-20220930181143-20220930211143-00493.warc.gz | en | 0.911701 | 708 | 2.671875 | 3 |
Security Implications of Quantum Computing Mean We Need to Vigorously Pursue Post-Quantum Cryptography
(SemiEngineering) The race is on to find and implement a public-key cryptographic algorithm that will stand up to the challenges posed by quantum computers.
For cryptography, we depend on the fact that digital computers will take hundreds or thousands of years to solve the “hard mathematical problems” at the foundation of the cryptographic algorithms which protect secret or personal data. For symmetric key cryptography such as AES, where both endpoints share a key ahead of time, the advent of quantum computing doesn’t change matters.
However, for public key cryptography, such as RSA and ECC (Elliptic-Curve Cryptography), quantum computing represents an existential event. A fully developed quantum computer using Shor’s algorithm, a polynomial-time quantum computer algorithm for integer factorization, will be capable of cracking a 2048-bit RSA implementation in perhaps as little as a few days. Since so many secure applications depend on the scalability of public key cryptography, this is an extremely serious issue.
Work is well on its way to define Post Quantum Cryptography (PQC). The National Institute of Standards and Technology (NIST) is sponsoring a competition to find, evaluate and standardize a public-key cryptographic algorithm (or algorithms) that will stand up to the challenges posed by quantum computers.
Designers will need time to implement the chosen algorithm standard(s) in their products, and that lead time can be as much as a couple of years for new chips and devices, and up to ten years for networking infrastructures and networking protocols. It will also take many years to upgrade and deploy existing computing and network hardware on a broad scale.
Quantum computing is a goal being pursued across government, academia and industry with tremendous energy. To ensure that we can keep data safe, we’ll need to pursue PQC with equal vigor. | <urn:uuid:d8af3008-8ef4-4de6-9fdd-5efb7f4679e1> | CC-MAIN-2022-40 | https://www.insidequantumtechnology.com/news-archive/security-implications-of-quantum-computing-mean-we-need-to-vigorously-pursue-post-quantum-cryptography/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337244.17/warc/CC-MAIN-20221002021540-20221002051540-00493.warc.gz | en | 0.933336 | 407 | 3.078125 | 3 |
While researching FTK 3X and Oracle, you just recently discovered that the best configuration of your Oracle database would be on a solid state drive (SSD). Solid state drives give the maximum level of performance to Oracle databases and in turn speed up your FTK 3X responsiveness.
You are a conscientious analyst and decide to try reinstalling your database on a SSD. You approach your boss, who is not a techno geek, and ask him to purchase a 256GB high performance SSD:
“Five hundred dollars!! For one drive? In this economy? If I buy in bulk I can get ten Terabyte hard drives for that price; get out of my office and close the door behind you!”
So how do I get close to SSD performance on the cheap? Welcome to the world of short stroking.
With short stroking, you don’t use the entire hard drive for storage. Disks have become so large and cheap that you can use the outer tracks of the disk just for storing data. If you create one partition that is twenty percent, of the total size of the drive, the drive head will travel much less distance. This will decrease your latency and improve your input / output performance, access times, and in all probability, drive wear. If correctly implemented, short stroking creates more than double the throughput in less than half the access time.
By using Fdisk, GParted, or software provided by the hard disk drive manufacturer, you can use only the first few blocks of the disk to limit the number of LBAs (Logical Block Addresses) accessible in your hard disk drive. This limits the drive arm to only the last few tracks of each platter and blocks the use of slower areas of the hard drive. Remember, you will lose access to the part that’s blocked; therefore, it cannot be used to store any data.
Reading from the outside sectors of the platters is faster, as more sectors pass under the heads per second (depending on your drive speed) at 10,000 or 15,000 RPM than towards the middle of the drive at the same speed.
If you do a twenty percent short stroke of a Terabyte hard drive, you will only have 200GB of usable space on the hard drive. You will need to short stoke two Terabyte hard drives, at twenty percent, and assemble them in a RAID 0 array to get 400GB of useable space for your Oracle database. Remember, even though RAID 0 is fast, it is not fault tolerant; be sure to periodically backup your database.
Defragment early, defragment often.
After you have installed your Oracle database on your short stroked RAID 0 array, another performance boost is recognized by defragmenting the drive that the Oracle database resides on. Defragmenting allows for rapid sequential file reads and writes. It is always best to store file blocks together contiguously (especially in any type of database).
I have created and processed FTK cases, on a clean Oracle database install and have observed up to seventy five percent fragmentation on the hard drive. Before starting analysis, I defragment the hard drive containing the Oracle database and make sure it is at zero fragmentation.
You need to regularly defragment the hard disk drive to ensure all frequently used data is defragmented otherwise; you will lose some of the performance benefits. | <urn:uuid:4a16483b-5a76-40cb-bce9-70c8e1f9e529> | CC-MAIN-2022-40 | https://www.forensicfocus.com/articles/forensic-toolkit-v3-tips-and-tricks-on-a-budget/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337415.12/warc/CC-MAIN-20221003101805-20221003131805-00493.warc.gz | en | 0.913092 | 699 | 2.53125 | 3 |
To prevent artificial intelligence (AI) being biased in the future, the teams in charge of developing them need to be more diverse, experts told the audience at the 2018 Everywoman in Technology Forum.
With the day focused on developing the technology of the future, speakers and panel members explained the importance of encouraging more women and minority groups into the technology industry to ensure current stereotypes in the sector are not inherited by the technology it develops.
Yen-Sze Soon, managing director of Accenture, said this is already beginning to happen as AI developed. Assistants have a tendency to be named after women, whereas those developed to assist in factories are given male names.
Artificially intelligent stereotypes
Soon claimed women need to help to develop AI to put themselves “front and centre of tomorrow’s world”, rather than be forced to use technology that does not fit their needs because they were not part of designing or developing it.
“Women need to take part in this change otherwise social stereotypes will continue,” she said. There is a long history of products not being fit for their audience because they were developed and tested by a small subsection of society.
Tabitha Goldstaub, co-founder of AI community CognitionX, named a few examples including women dying in more car accidents in the 1990s because crash test dummies were only built to male specifications, adverts showing men higher paid jobs than women, and Apple’s Health app originally tracking everything but a person’s menstrual cycle.
“This is only going to be exacerbated by AI,” she said, adding that if women are not involved in building these new technologies situations such as this will continue to be “everyday norms”.
The skills for AI
As AI, robotics and automation begin to make some roles unnecessary, it is clear many of the jobs people will have in the future currently do not exist yet.
Maxine Benson, co-founder of Everywoman, claimed the jobs most likely to avoid automation, such as those requiring adaptability, changeability, communication and collaboration, will provide women with an “opportunity” as these are skills women are more prone to having than men.
“These are all skills that women possess in abundance,” she said.
Many firms are increasingly looking to fill roles with candidates that have both technical and soft skills – “skills that machines will not be able to have” according to Suki Fuller, founder of Miribure.
Some are concerned that in a world full of artificial intelligence, children will grow up with a lack of empathy because of how they are encouraged to interact with it.
This makes ensuring people are still learning to empathise with each other in an artificially intelligent world just as important as developing emotionally intelligent teams to programme and develop human-facing AI.
According to Fuller, because men are “usually lacking in that mega-empathic skill”, women are in a good position to enter the AI space.
“The human aspect, the empathy, the tacit, everything that is the human aspect that quite frankly women seem to have more of, that will be your strength,” she said. “These are the key traits that make us rule the world.”
Fuelling a diverse Stem pipeline
But regardless of whether women are better at developing the soft skills that will be important in the future of AI, the industry still struggles to attract females across the pipeline.
“When you start out in your career or in school, girls are deprogrammed by the time they’re about seven that they’re not supposed to do anything technical,” said Fuller.
Many believe targeting people from a younger age could help to eliminate some of the reasons girls avoid the technology industry, including teacher and parent stereotyping.
Adding computing to the UK’s curriculum to teach people concepts such as computation thinking and coding from a young age was meant to help with encouraging young people, and especially girls, into science, technology, engineering and maths (Stem).
Ensuring how boys are taught and brought up to make them more accepting of working with and being in competition with girls is also important, said Fuller.
Despite these efforts, there are still women who begin their education and careers in Stem but end up leaving the industry further down the pipeline.
Read more about women in tech
- Tackling the lack of gender diversity in the tech industry could help close the skills gap, but there is a long way to go to solve either, says Eileen Burbidge.
- The government has committed to sign the diversity initiative Tech Talent Charter in an aim to close the gender diversity gap in the technology industry.
Inma Martinez, venture partner of Deep Science Venture and chief data scientist of Right Brain Future, claimed in organisations the shift needed to attract and retain diverse talent needs to be driven from the top down.
Research has found that having just one woman on a board reduces the likelihood of bankruptcy for a firm by 20%, proving the level of change having diverse representation on a board is capable of.
“Like anything in this world, there are companies that can see the future and move fast, and there are others that are still in 1999,” said Martinez. “It takes one crazy person at the board to say ‘I’m going to do that thing’.”
Many recommend finding a mentor higher up in a firm to gain encouragement and ensure there is a steady pipeline towards these higher-level positions.
“Be awesome and then the mentor wants to mentor you,” said Martinez. “There’s got to be a connection. A mentor normally sees him or herself in you.”
Acting as industry mentors and role models
As well as mentoring, women across the pipeline in the technology industry should make themselves visible to act as role models for those around them. Young women have even claimed they want more encouragement from role models in the technology industry to pursue careers in Stem.
Karen Gill, co-founder of Everywoman, stated being able to see women in senior positions in an organisation is the “single most important change” Everywoman members would like to see in the industry.
“When a woman is able to look up and see what they can be, it can do wonderful things for them,” she said. “Whatever your age or experience, you are a role model for someone ahead of or behind you.”
Many of the experts on the day alluded to the age-old adage that “you can’t be what you can’t see” to demonstrate how important it is to be visible and do your part to encourage change in an industry.
Melissa Di Donato, chief revenue officer of SAP S/4HANA Cloud, claimed she didn’t realise how few and far between women in the industry were until “about six years ago” and that if she had realised sooner she would have done a lot more to try to encourage diversity in the organisations she worked in.
As well as claiming women in the industry have a “duty to each other” to teach, learn and “pay it forward”, she urged women to “take a risk on a young person who has a huge ambition or energy but maybe doesn’t have the tech skills you would traditionally have thought”.
“At every level of your career you are a role model. Even when you don’t realise you’re a role model, you are,” she said. | <urn:uuid:01a94c6d-d233-4f3f-b893-6d881e98dfa6> | CC-MAIN-2022-40 | https://www.computerweekly.com/news/252434878/Everywoman-forum-2018-The-dangers-of-non-diverse-artificial-intelligence | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337524.47/warc/CC-MAIN-20221004184523-20221004214523-00493.warc.gz | en | 0.96697 | 1,590 | 2.796875 | 3 |
Inspire students who never thought they’d enjoy cyber security to find their passion!
Empower all your students to try cyber security without extra effort this British Science Week.
By showing that STEM is an accessible and dynamic career path, every student can envision themselves in a job or field of study that they might never have considered.
“I started from knowing absolutely nothing about cyber security. CyberStart taught me everything from the ground up. Even if you think you don’t like computers, give CyberStart a try. It might be for you!” - Maeve, student
Discover four FREE activities to introduce cyber security to your students with CyberStart - no effort or planning required!
1. Become cyber agents and solve a criminal case
Want a fun and easy activity to introduce your students to cyber security? Try Intern base in CyberStart.
Intern base contains 12 free cyber security puzzles across three levels, with each level getting progressively more advanced.
By playing Intern base, your students can feel the thrill of solving gamified cyber security challenges and the empowerment of quickly grasping new skills. It’s all about giving it a go!
And the best part is, all challenges in Intern base are free and unlocked for your students to try in class right now.
All your students need to do is sign up for their free access to Intern base to get playing.
2. Use Intern base to hold a mini capture the flag competition
Give your students a fun competition to compete or collaborate for points in the CyberStart challenges.
All you need to do is:
- Register for your free CyberStart account and get your students signed up too.
- Create a Group in CyberStart to get your automatically generated Group Access Code.
- Share the Group Access Code with your students, which they can use to join your Group.
- See who can score the most points in a certain amount of time.
- Sit back and watch your students’ excitement as they play. Whoever scores the most points wins!
3. Play ethical hacking challenges
Mention ethical hacking to your students, and they’ll be instantly intrigued.
CyberStart has tonnes of ethical hacking challenges that allow your students to hack into systems and networks in a safe and proactive environment.
“I never considered cyber security until I learned about how fun it was to hack and protect”. - Kaite, student
Students learn the hands-on skills needed in cyber security fields like digital forensics and penetration testing.
Your students can try ethical hacking challenges like L01 C04 - Lazy Locked Login for free in Intern base.
The Lazy Locked Login challenge is your students’ first chance to use developer tools.
This challenge allows your students to try ethical hacking whilst uncovering security flaws, just like a cyber security professional. Watch as they become empowered by their valuable new skills!
4. Try decoding and watch a video tutorial
In levels 2 and 3 of Intern base, your students will start to play more advanced challenges.
In L02 C01 - 610enC0de’d Password, your students will come across a type of encoded message. They’ll need to decode the encoded message to find the password to log into a cyber criminal server.
Don’t worry if your students haven’t encountered encoding before.
CyberStart offers many useful ways to tackle new concepts while strengthening your students’ problem-solving skills.
To solve L02 C01 - 610enC0de’d Password, your students can:
- Watch the challenge video walkthrough in the CyberStart Field Manual
- View the challenge hint to get more information
- Do their own research using a search engine
If you decide to purchase a CyberStart licence upgrade, your students will also gain access to extensive encoding write-ups in the Field Manual.
Find out how to upgrade your students’ CyberStart licences below.
It’s time to get all your students interested in cyber security this British Science Week.
Have a go at these awesome activities with our free Intern base, and then once you know how many students have shown an interest in developing their cyber security skills, reach out to us at email@example.com to discuss our exclusive education pricing packages! | <urn:uuid:9c379e85-69bf-4952-b3c6-e698e6722153> | CC-MAIN-2022-40 | https://cyberstart.com/blog/4-activities-to-do-in-your-computer-science-lab-this-british-science-week/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334579.46/warc/CC-MAIN-20220925132046-20220925162046-00693.warc.gz | en | 0.907611 | 890 | 3.15625 | 3 |
A new analysis suggests that over half of communities in the West lack the capacity to take advantage of infrastructure bill funding. Now what?
Bounded by the Bitterroot Mountains to the west and the Sapphire Mountains to the east, Montana’s Bitterroot Valley is home to renowned fly fishing streams and soaring vistas. Its forests, however, are facing the greatest wildfire risk in the entire state, with towns like Florence, Victor, and Darby all in the nation’s 98th-plus percentile for risk. Yet houses continue to be built at a rapid clip, many of them in hazardous areas.
Theoretically, the Infrastructure Investment and Jobs Act, a $1.2 trillion bill that funds improvements in transportation, water, energy, broadband, and climate resilience projects, should be able to help. The legislation, signed into law by President Joe Biden in November 2021, includes money to make forests more resilient to fire and defend at-risk communities. But according to a recent analysis by the Montana-based research group Headwaters Economics, over half of the communities in the West might not be able to access those funds.
Researchers examined 10 factors that influenced how well-equipped communities were to apply for grant funding, and then used those factors to calculate each county and community’s “rural capacity” score. For instance, Missoula County, Montana, home to the state’s second-largest city and flagship university campus, scored 94 out of 100, while Carter County, Montana, where there is no county head of planning and just 20 percent of adult residents have attended college, scored just 45, the lowest in the state.
Within the West, Montana stands out: More than three-quarters of its communities have index scores below the national median. The state’s low capacity exemplifies the challenges rural communities across the West face, including a reliance on boom-and-bust industries that create financial instability, and a lack of grant writers, land-use planners, and emergency planners that would be helpful in applying for federal funds. “You go to a rural community, and typically the mayor is almost always part-time,” said Don Albrecht, director of the Western Rural Development Center at Utah State University. “They don’t have the resources or the experience or the expertise to even write grants to get the money in the first place.”
More than half the communities in Wyoming, New Mexico, and Idaho rank below the national median on Headwaters’ rural capacity index scoring system. Clark County, Idaho; Esmeralda County, Nevada; and Jackson County, Colorado, like Carter County in Montana, also received capacity scores in the 40s. At the same time, Headwaters also found that many of the rural communities rated as having low capacity also face the highest climate threats.
When overlaid with data about flood and wildfire risk, Headwaters’ analysis reveals areas with stark capacity barriers, often exacerbated by historical injustices, as well as high vulnerability to the impacts of climate change. In Montana and elsewhere, many of these communities are on or near Native American reservations. In the town of Hays on the Fort Belknap Indian Reservation, for example, capacity is among the lowest 5 percent in the country, while both wildfire and flood risk are higher than in 90 percent of the country.
In theory, the $47 billion the infrastructure bill designates for climate resilience can help communities prepare for floods, fires, storms, and droughts. But Headwaters’ analysis suggests that areas with low capacity might not submit requests in the first place. “The point of this was to shed light on major barriers that exist for communities trying to plan and finance climate adaptation projects,” said Patty Hernandez, the executive director of Headwaters Economics. “For our team, it was really striking how widespread the problem is.”
Over the next few weeks, the Biden team is taking a cross-country tour to discuss the legislation ahead of the midterm elections in November, with a particular focus on rural areas. According to Mitch Landrieu, Biden’s infrastructure czar, officials will stop in a handful of Western states, including Colorado, Alaska, Arizona, Washington, and Nevada. Behind the scenes, federal agencies in charge of divvying up infrastructure funding are defining grant guidelines and making spending plans. “These decisions are being made right now that will impact the ability of rural communities to access the dollars that are coming online,” Hernandez said.
Officials say that access for rural communities is their top priority, pointing to their “rural playbook,” which details money set aside for urgent rural issues like broadband internet access and upgrading electricity and wastewater systems. Last Wednesday, the Biden administration launched a pilot program called the Rural Partners Network, designed to address capacity issues by putting staff on the ground in rural communities to “provide local leaders with the expertise to navigate federal programs,” according to a fact sheet from the USDA.
It’s essential for state and federal officials to work directly with communities in order to make sure that the money gets where it’s most needed, Albrecht said. Otherwise, those with more resources will “lap” it all up. According to The Washington Post, municipalities from Florida to California are already hiring lobbyists to influence where infrastructure money goes.
There are also policies the Biden administration could enact to give communities a fair shot at funds, Headwaters’ analysis suggests. The administration could eliminate requirements for communities to match contributions from federal grants, which can be difficult in sparsely populated areas with a limited tax base. Granting agencies could even directly identify places with high need and award them money without requiring applications. Hernandez said that new approaches are necessary to give rural towns across Montana, and the West, a chance. “I can’t imagine a scenario,” she said, “where a one-size-fits-all rubric for scoring proposals is ever going to work out for rural communities.” | <urn:uuid:6c9c87cb-4793-4715-995c-6c4000a5facf> | CC-MAIN-2022-40 | https://gcn.com/state-local/2022/05/why-rural-communities-struggle-bring-much-needed-federal-grants/366557/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334579.46/warc/CC-MAIN-20220925132046-20220925162046-00693.warc.gz | en | 0.949578 | 1,247 | 2.5625 | 3 |
Smart devices could provide users with the ability to see when they have previously been legally blind.
A new development regarding the use of augmented reality glasses is suggesting that these devices might be usable for individuals who have been declared legally blind, in providing them with the ability to see far more clearly.
Though this will not actually provide perfect vision, it could elevate the seeing abilities of some people.
The technology will not work for people who are completely blind. However, for individuals who have some level of vision, smart augmented reality glasses could help to provide them with an enhancement to this sense so that they can see better than they usually would.
This augmented reality based technology was developed by researchers at the University of Oxford.
The Oxford researchers used smart augmented reality glasses that contain an infrared projector and a camera, to be able to display image and gauge distance. This way, when the camera detects certain objects or other people that are in front of the wearer, they can be displayed on transparent OLED lenses in a way that can help to provide the wearer with an idea of where they are.
The augmented reality overlay can be adjusted to be displayed in a color that is most visible to the individual wearer, and its contrast can be adjusted to be much higher to make it easier to see for that person. Using this technology also makes it possible – in theory – for the glasses to be able to detect the difference between a person and an object. That way, a person who is legally blind would be better able to detect when they have things or people within their field of vision.
These augmented reality smart glasses function along with a gyroscope that is installed within them, as well as a GPS system and a compass, to provide a greater amount of data. Though they are far from restoring a full sense of vision, it can provide a system that is like AR to the wearer, considerably improving what can be seen.
At this point, the augmented reality vision devices are far from complete. Additional work is required. However, the researchers are ready to move ahead now that their discovery has won them a £50,000 prize from the Brian Mercer Award that they received from the Royal Society. | <urn:uuid:01f25ab8-9699-48a0-969c-e90eac8d477f> | CC-MAIN-2022-40 | https://www.mobilecommercepress.com/augmented-reality-offer-sight-visually-impaired/859381/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334579.46/warc/CC-MAIN-20220925132046-20220925162046-00693.warc.gz | en | 0.970343 | 440 | 3.265625 | 3 |
The word “malware” is a combination of the words malicious and software. The word malware is used to describe software that is designed expressly for criminal or unethical purposes. These purposes include a range of illicit behavior like stealing information, corrupting systems, locking access to essential applications or devices, and physical hardware destruction.
Since malware comes in so many different forms and is used in many ways, there are popular terms used to describe each variation. Spyware, ransomware, Trojan horses, rootkits, and viruses are some of the most popular malware types. Malware names and descriptions are typically based on how the virus functions or spreads. For example, spyware (as defined below) is a kind of malware that secretly tracks or monitors a victim’s activities.
How is Malware Used?
Malware functions just like legitimate software, only it is designed specifically to do something harmful. Early versions of malware were often created as pranks, but the criminal applications were made clear when pranks turned destructive. Malware programs use a wide variety of techniques to remain undetectable. More devious, well-designed viruses can surreptitiously infect a system and take measures to prevent operating systems or antivirus programs from detecting suspicious activity or files.
All useful malwares will start by infecting a device or system in order to gain access. How an attacker chooses to exploit that access will depend on the type of malware used. Below is a list of primary ways that malware is used, and the moniker associated with each type:
How can it hurt my business?
The damage that malware causes to businesses depends on the type of malware used in an attack. Negative impacts include:
- Proprietary data loss
- Customer data loss
- Stalled or slowed business operations
- Brand damage
- Device destruction
User computers were subjected to at least one type of malware attack
- Browser-based vulnerabilities are the largest contributor to malware attacks.
- Over 15 million new malware variants were observed online in 2017.
- Trojans remain the most commonly used type of malware.
- Windows users are still the most at risk of suffering a malware attack.
- Cyber criminals are increasingly using “fileless” techniques to initially compromise a system instead of relying on malicious .exe files for installation. Use of this strategy will likely increase throughout 2018. | <urn:uuid:77d8d0d4-5802-47fb-8140-e8fb371c66f4> | CC-MAIN-2022-40 | https://www.cyberdot.com/threats/malware/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335124.77/warc/CC-MAIN-20220928051515-20220928081515-00693.warc.gz | en | 0.918208 | 518 | 3.46875 | 3 |
In this blog post
On October 2 each year, India celebrates Gandhi Jayanti
– the birth anniversary of Mahatma Gandhi. And if you are in the same filter bubble as I am, you would have read a news headline saying, “Gandhi Jayanti 2019:
Mahatma Gandhi Brought To Life”. How was that madepossible?
On October 2, 2019, the fourth Ahimsa lecture was organised by UNESCO Mahatma Gandhi Institute of Education for Peace and Sustainable Development (MGIEP) in cooperation with the Permanent Delegation of India at the UNESCO headquarters, Paris. It was this lecture that brought Gandhi to life in the form of a three-dimensional hologram. This life-sized hologram addressed the audience on Gandhian thoughts. Another time holograms were in the news was when the Prime Minister of India, Mr. Narendra Modi, used holograms during his election campaign in 2014. He used 3D holograms to speak live at rallies in dozens of remote towns all over India. It was the first time that hologram technology was used in a general election campaign. Holograms also found their place in a famous German circus, Roncalli, to make it cruelty-free. It replaced real animals with massive, stunning holographic animals that included horses, lions and elephants. They collaborated with Bluebox and used eleven laser projectors to achieve this.
If you think you haven’t seen a hologram in real, then think again. Holograms are used in our daily lives more often than you think. Chances are you are carrying one in your pocket right now. Currency notes such as the Brazilian real, British pound, South Korean won, Japanese yen, Indian rupee and all the currently circulating banknotes of the Canadian dollar, Croatian kuna, Danish krone, and Euro have security holograms or diffractive Optically Variable Devices (OVDs) on it. It can also be found on credit and debit cards, ID cards, books, etc. Security holograms are used for protection against counterfeiting as they are very difficult to forge. This is because they are reproduced from a master hologram that requires technologically advanced equipments and highly specialized work demanding expertise in this field. Nearly 100 countries over the world employ a hologram to protect their banknotes.
What are Holograms? The word holography is taken from the Greek words holos or “whole” and graphē or ‘writing’ or ‘drawing’. Holograms are sort of “photographic ghosts”. It is a physical structure that uses light diffraction to create an image which appears to be three-dimensional showing depth and parallax. If you look at holograms from different angles, you would see objects changing perspectives, just like you would if you were looking at a real object. Some holograms even appear to move as you walk past them. A hologram is almost like a hybrid of viewing a photograph and a real object. One of the interesting properties of a hologram is that even when it is torn in small pieces, the whole image can be seen in each piece! This technology is not new, it emerged many decades ago. It can be traced back to the late 1940s, when the Hungarian-British physicist Dennis Gabor invented electron holography. The development of laser enabled the first practical optical holograms that recorded 3D objects.
To create a hologram, a laser beam is split into two separate halves and both the light waves travel in identical ways. One part of the beam bounces off a mirror, hits the object, and reflects onto the photographic plate inside which the hologram will be created. The other part of the beam bounces off another mirror and hits the same photographic plate. By recombining these beams in the photographic plate, we can identify how the light rays in the first beam have changed as compared to the second beam. This shows how the object changes after light rays fall onto it. This information is engraved permanently into the photographic plate by the laser beams.
In its early days, holography required high-power and expensive lasers, but now, mass-produced low-cost laser diodes can be used and have made holography much more accessible. Most of us have experienced 2D and 3D technologies but the latest addition to the holographic technology is 7D. 7D hologram is a technique of capturing a high-quality hologram using 7 parameters, called dimensions. From each viewpoint in a three-dimensional space, viewing direction is captured in a two-dimensional space and for each viewing direction time and light properties are captured. So, the seven parameters are: 3D position + 2D angle + time + image intensity (light properties). In simpler words, the main difference between a 3D and a 7D Hologram is that in 7D the subject or the whole scenario is captured from a larger number of viewpoints. This technology is still in the experimental stage but we will soon be able to experience it.
Holograms have appeared in a wide range of books, live-action movies, television series, animations etc. There has been unrealistic depiction of holograms in fiction, which resulted in the general public having overly high expectations of the capabilities of holography. Some of the famous movies that featured holograms are Star Wars, Batman, Iron Man, Wall-E and Avatar. Star Trek, Power Rangers, The Simpsons and The Flash are some of the TV series which featured this technology. Holography has gained a lot of popularity in the world of video gaming as well. Recently, Sony Interactive Entertainment was granted a patent for a 3D holographic display screen by USPTO (United States Patent and Trademark Office) which will be compatible with a game console to experience PlayStation 3D games without the 3D glasses. It has detailed eye tracking as well as facial recognition technology included. It can determine the number of people looking at the display. It is equipped with cameras and a light sensor which calculate the distance between the gamer and the screen. What’s more, this screen is even capable of recognizing gestures, including blinking of eyes, winking or nodding the head.
A team of researchers in Japan of Burton Inc., created a holographic projector which enables laser focused aerial display of text and images. It is achieved by focusing 1kHz infrared pulse to a single point in the air and breaking down air molecules to separate positive and negative ions. The team designed this technology to be used for communication assistance during any disaster or emergency situations by producing bright floating holograms. One of the problems with this system is that the intensity of the laser is so high that, if it comes in contact with the skin of the user, it can cause significant burns. So, a new system has been designed allowing users to interact with the mid-air images by using femtosecond lasers which have lesser intensity. Not only are these holograms safe to touch but can also be felt. These tangible aerial holographic images are called Fairy Lights. Another application of the holographic technology is the Euclideon Hologram Table which is the world’s first multi-user hologram table. It can be used to display digital models of cities or buildings which can be zoomed in to single blades of grass. As claimed by the company, users can pick up objects from the hologram and move them around on the table. This table is commercially available for businesses.
Recently, a Japanese hotel ‘Henn-na’ caught the eye of the media for using friendly dinosaurs as their front-desk staff. This hotel, located in Tokyo’s buzzing Ginza district, is the world’s first hologram hotel. While checking in, the visitors are welcomed at the reception by holographic virtual staff like a ninja, or even a dinosaur (who apparently gets excited every time it sees a human). This allows real human staff to do other important work. Each hologram can speak 4 different languages — English, Chinese, Japanese and Korean. There are cameras and sensors placed at the front desk which alerts the holographic receptionists when someone approaches. They can also read the visitors’ emotions and respond accordingly. This hotel is part of Japan’s cutting-edge hotel chain that opened one of the first hotels completely staffed by robots. The company behind Henn-na has revealed plans to open more than 100 properties worldwide over the next five years.
As we move towards a digitally advanced future, the potential of fusing physical atmosphere with the virtual world interaction between the physical and virtual worlds will no longer remain just a figment of our imagination. Holograms have many interesting and effective use cases which can push the limits of user experience. It has a wide scope of transforming businesses in new and unforeseen ways when fully implemented. | <urn:uuid:34297293-2558-4543-98a9-c932bc49da35> | CC-MAIN-2022-40 | https://www.gavstech.com/holograms-a-bridge-between-the-physical-and-virtual-world/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335355.2/warc/CC-MAIN-20220929131813-20220929161813-00693.warc.gz | en | 0.94543 | 1,908 | 2.796875 | 3 |
About Incident Response 🔗︎
The threats to companies are increasing despite growing expertise and investments in IT security. Companies can no longer protect themselves effectively through prevention alone. It shows that most organizations can’t detect initial intrusions (Attivo Survey, 2020) . Nearly two-thirds (64%) of respondents indicated that 100 days of dwell time . They must therefore be in a position to react correctly to events in IT systems. As soon as a company detect violation of their security policy and a raise a security incident , since those events mostly compromise sensitive personal & business data they needs to respond quickly (before major damage occurs). To be able to do this, the methodology of incident handling helps.
While focusing on incident response capabilities, the incident response methodology also underlies processes and procedures that can be applied to any security incident.
As we observe a rise in the number of security incidents, the IR handling processes helps with the use of attack categories to faster identify and scope the problem and apply different response strategies.
For example, the National Institute of Standards and Technology (NIST) categorizes the types of attack incidents as follows:
- External/Removable Media: An attack executed from removable media (e.g. flash drive or CD) or a peripheral device.
- Attrition: An attack that employs brute force methods to compromise, degrade, or destroy systems, networks, or services.
- Improper Usage: Any incident resulting from violation of an organization’s acceptable usage policies by an authorized user, excluding means from the above categories.
- Loss or Theft of Equipment: The loss or theft of a computing device or media used by the organization, such as a laptop or smartphone.
- Other: An attack that does not fit into any of the other categories.
Handling the given incidents is essentially the task of Computer Emergency Response Teams or Computer Security Incident Response Teams, also commonly known as CERT or CSIRT.
After an initial assessment of the situation, it is typically necessary to determine whether there is an imminent danger to life and limb (such as in the case of manufacturing and industrial plants), but also a risk of manipulation, sabotage, or exfiltration of sensitive data. If necessary, such a danger can be contained with immediate measures, whereby the attacker should not be made aware of the existence of the incident response activities if possible. The incident response team then attempts to identify the attacker’s current and past activities to a sufficient extent and observe them over a period of time to gain a picture of their capabilities, procedures and possible motives.
Information gathering is essential to tracing the activities of an incident. It ensures sufficient evidence identification and IOC development that will enable the IR team to assess and define the extent of compromise. In this critical stage of the incident response process, Maltego supports IR teams to gather intelligence from both public and paid data sources.
Typical Incident Response Processes 🔗︎
Incident response is the process designed to manage, contain, and—when possible—reduce the consequences of a cyberattack in a fast-paced and efficient manner. Maltego can help incident response teams carry out rapid analyses of digital artifacts that have triggered such a response protocol, align your operations with the best common practices, and shape your existing playbooks.
NIST Cybersecurity & Incident Response Frameworks 🔗︎
The National Institute of Standards and Technology (NIST) is an agency operated by the United States’ Department of Commerce which provides standards and recommendations for many technology sectors. These standards and recommendations are usually voluntary for industry but mandatory for government agencies.
NIST created a high-level cybersecurity framework based on existing standards, guidelines, and practices for organizations to better manage and reduce cybersecurity risk and communications amongst both internal and external organizational stakeholders. The Framework consists of five key functions that provides a comprehensive view of the lifecycle for cybersecurity management:
The NIST Information Technology Laboratory (ITL) developed one of the most extended models for incident response (IR): The Computer Security Incident Handling Guide (Special Publication 800-61 ). The NIST incident response process is a cyclical activity featuring ongoing learning and advancements to discover how to best protect the organization. It includes the following stage:
The core of everything is the Incident Response Plan (IRP) which is a set of documented procedures detailing the steps that should be taken in each phase of incident response, including roles and responsibilities, communication plans, and standardized response actions.
MITRE ATT&CK & D3FEND Matrixes 🔗︎
During the last years, incident response teams have been overwhelmed as they have not been able to properly manage the growing landscape of threats that were impacting organizations in a persistently and impactful way. In the past, it was common to define some standard operational procedures (SOPs) aligned with the common types of triggers that detection teams were potentially detecting and handling. These observations were handed over to the response teams(IR-Teams). This approach was however not sufficient, so the discipline shifted towards actively using threat intelligence for a better understanding of the threat landscape: Enumerating threat actors, their tactics, techniques and procedures (TTPs), and tracking their ongoing campaigns mapped to specific indicators of compromise (IoCs) used in every single cyberattack.
In order to properly structure all the adversary information, MITRE ATT&CK Framework was born. The MITRE ATT&CK Framework is a globally accessible knowledge base of adversary tactics and techniques based on real-world observations. The knowledge base is used as a foundation for the development of specific threat models and methodologies in the private sector, in government, and in the cybersecurity product and service community.
Most stakeholders in the industry follow this framework in order to align the generation of threat intelligence information (exchanged in well-known TIP platforms such as MISP or OpenIOC), detection signatures using standardized languages such as YARA, response capabilities such as the ones available in Security Orchestration Platforms (SOARs), and investigative OSINT/DFIR tools pivoting in digital online/offline evidence such as Maltego.
During the last year, a new complementary framework known as D3FEND was born to support defenders in order to encode countermeasures in a knowledge graph. It contains types and relations that define both the key concepts in the cybersecurity countermeasure domain and the relations necessary to link those concepts to each other.
Accelerating Your Incident Response Workflows from Hours to Minutes with Maltego 🔗︎
With Maltego, analysts will not need to spend valuable time switching between multiple tools or writing a report detailing their findings for other teams and decision makers to act upon. Instead, they can carry out their analysis with all available data within one interface in Maltego and present their results directly on the graph, which will help them reduce time during the triage and analysis phase.
This handbook is meant to serve as an example of how Maltego could be utilized in standard incident response workflows to streamline investigative efforts. It is not meant to replace any established practices or tools, but to present a solution to challenges some investigators might face. | <urn:uuid:f0dbc647-ef66-483f-babc-9b01b763876a> | CC-MAIN-2022-40 | https://www.maltego.com/blog/maltego-handbook-for-incident-response/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335355.2/warc/CC-MAIN-20220929131813-20220929161813-00693.warc.gz | en | 0.934248 | 1,476 | 2.765625 | 3 |
Every day, our team at Star Rapid receives a large number of 3D CAD files for new ideas and inventions. They arrive in different shapes, sizes and levels of complexity depending on what the customer needs made. But sometimes we’re unable to get to work straightaway for a couple of reasons.
The two most common factors that stop us from getting to work with a project are incorrect file formats and the lack of a watertight model (also referred to as "manifold").
What is a watertight 3D model?
Imagine you’ve created a cylinder and want to fill it up with water. As you start pouring water into the cylinder it starts coming out of the sides. Not having a watertight model follows the same principle – you have leaks.
There are several reasons why this might be the case but the most common one is having two neighboring surfaces that are not stitched together completely.
Why is a watertight 3D model important?
A single, unified 3D model is a lot easier to work with and eliminates a lot of room for error when it comes to creating a CNC machining program or preparing the model for 3D printing. When it comes to 3D printing, a non-watertight model will generate error messages when trying to generate the G-Code.
Plugging the holes, so to speak, can be quite time-consuming and delay the manufacturing process.
What are the preferred file formats for sending CAD files?
The best CAD file format to share is the STEP file. STEP stands for Standard for the Exchange of Product model data. The lack of file interchangeability has been a huge headache in the field of manufacturing and product design for a long time. The STEP-file was introduced to make it more uniform.
Another widely used, vendor-neutral, file format is IGS (also referred to as IGES) which stands for Initial Graphics Exchange Specification. Along with STEP it’s one of the most compatible file formats around and allows for seamless transfer of information between software.
Can I send other file formats?
Although STEP and IGS are the preferred file formats because of their general compatibility with a variety of software we also accept designs sent as .stl.
STL originally stood for stereolithography, but has since received a variety of other names, such as “Standard Triangle Language” and “Standard Tessellation Language”.
The landscape for 3D CAD files is wide and complex. Knowing what file formats are widely used and compatible with a range of software can minimize potential delays when it comes to manufacturing. | <urn:uuid:10fad6c7-9b18-4360-b645-c9a41bea4d5b> | CC-MAIN-2022-40 | https://www.mbtmag.com/home/blog/21101883/why-providing-a-complete-3d-model-matters | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335355.2/warc/CC-MAIN-20220929131813-20220929161813-00693.warc.gz | en | 0.941945 | 535 | 3.03125 | 3 |
How many iterations, what salt and what hash function should I use with PBKDF2?
To answer this, we need to look a little at what password-based key derivation function (PBKDF)2 does, and how it works.
PBKDF2, standardised in RFC 2898 and PKCS#5, is a function for creating a cryptographic key from a password. It is the only such function currently appearing in NIST standards, hence it has seen widespread use.
The aim of the function is to create a key in such a way that dictionary attacks (where the attacker just tries a range of possible passwords) are unfeasible. To do this, PBKDF2 applies a pseudorandom function (PRF) to the password many times. This means that an attacker making a guess at the password will also have to apply the function many times to his guess. This increases the computation time needed to check each guess.
Additionally, the function can be given a "salt" parameter. The idea of this is to make each key derivation operation unique, so that an attacker cannot guess one password and then look for matches against a large number of derived keys. These properties mean PBKDF2 is used not just to produce a key to be used in a cryptographic protocol, but also to store passwords securely (by storing the derived keys).
A developer using PBKDF2 must choose parameter values for the salt, the PRF, and the number of iterations, i.e. the number of times the PRF will be applied to the password when deriving the key.
The specification suggests (in section 4.1) that the salt be (or contain) a 64 bit pseudorandom value. This makes collisions (i.e. occasions that two stored passwords use the same salt) unlikely. By the birthday paradox, we would expect a collision after 2^32 passwords, i.e. a little more than 4 billion.
The PRF mentioned in the specification is SHA-1, and in many libraries this is the only choice. However, using SHA-256 or SHA-512 has the benefit of significantly increasing the memory requirements, which increases the cost for an attacker wishing to attack use hardware-based password crackers based on GPUs or ASICs.
The recommended iteration count in the RFC published in September 2000 was 1000. Computing performance has greatly increased since then. Modern guides such as the OWASP password storage cheat sheet (2015) and the August 2016 NIST guidelines now also recommend a minimum of 10 000 iterations. NIST's detailed guide (Appendix A.2.2) recommends that the iteration count be "as high as can be tolerated while still allowing acceptable server performance".
Real-World Password Cracking
What are the consequences of a low iteration count?
Imagine we are restricted to using SHA-1 as our PRF, as is the case for example in PKCS#11 up to version v2.20. How long would it take a well-resourced attacker (i.e. with access to GPUs) to break an 8-character password?
First we have to estimate how much entropy or "randomness" there is in an 8-character password. An excellent paper by Kelley et al. from IEEE Security and Privacy 2012 found that when users are forced to choose a password following the "Comprehensive8" policy, "Password must have at least 8 characters including an uppercase and lowercase letter, a symbol, and a digit. It may not contain a dictionary word.”, the result is roughly 33 bits of entropy.
If, however, the password is a perfectly random combination of uppercase and lowercase letters, numbers and the 30 symbols on a US keyboard, we would expect 52 bits of entropy. Interestingly, the same result can be obtained by choosing 4 random words from the Diceware list.Second, we need to know how fast GPUs can calculate PBKDF2.
An article from April 2013 reports a rate of 3 million PBKDF2 guesses per second on a typical GPU setup. This includes calculating AES once for each guess (to see if the right key has been derived to decrypt a master key file), and it's now November 2015, so suppose conservatively we can apply Moore's law almost once since then (whether one can apply Moore's "law" to GPUs is doubtful), giving a very rough rule-of-thumb ability of 5 million guesses per second on typical GPU hardware.
The table below shows how long an attacker would take to cover the whole password space of a single salted hashed password.
Password complexityEntropy estimate (bits)1000 iterations10000 iterationsComprehensive8334 hours 46 minutes47 hours8 random lowercase letters3712 hours5 days8 random letters45123 days3 years 5 months8 letters + numbers + punctuation OR 4 random Diceware words52325 years3250 years
If you have to use PBKDF2, you should:
- use a unique 64-bit salt for each password.
- rather than SHA-1, use SHA-512 or if not SHA-256 if you can.
- use an iteration count of at least 10000, more if you can do it "while still allowing acceptable server performance".
On this last point, note that execution speeds of PBKDF2 implementations vary widely. Using a faster implementation will allow you to run more iterations without slowing down your server. If you want to check that your developers are using cryptography securely everywhere in your applications, Cryptosense Analyzer can integrate into your CI/CD process and check for bad parameters, key management errors, randomness issues and other cryptographic mistakes. | <urn:uuid:e64da109-efa4-4581-be33-192ded471eaa> | CC-MAIN-2022-40 | https://cryptosense.com/blog/parameter-choice-for-pbkdf2 | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335504.37/warc/CC-MAIN-20220930212504-20221001002504-00693.warc.gz | en | 0.904228 | 1,175 | 2.578125 | 3 |
This document describes the settings required to be able to operate a Web server and an FTP server behind a LANCOM router.
Most Internet access options from Internet providers include just one IP address that can be reached direct from the Internet. LANCOM devices use a mechanism known as NAT/PAT (network address translation / port address translation) to connect an entire network to the Internet via a single public IP address. In this process, the router translates the private IP addresses of the computers in the LAN to the LANCOM's public IP address. In this case, only the public IP address can be accessed from the Internet, which in this case is directed exclusively to the LANCOM itself. This serves to shield the private network from the Internet.
If selective access is to be allowed to a server in the LAN, the connection to the server's private IP address must be configured in the LANCOMS router's configuration. For this it is important to know the address of the server port which the service is running on; for example, a Web server runs on port 80 by default. You can find a complete overview of registered ports (so-called well-known ports) on the IANA (Internet Assigned Numbers Authority) Web site, which is responsible for allocating protocol and port numbers in the Internet. You can access the complete overview of port numbers under this link:
1. In LANconfig, open the configuration dialog for the LANCOM router and go to the menu Configuration → IP router → Masquerading.
2. Click on the Port forwarding table button.
3. Click on Add to create a new entry for port forwarding.
4. Into the two fields first port and last port, enter port 80.
5. As remote site, select the remote site for the Internet connection. If no remote site is specified, port forwarding will apply globally for all Internet connections.
6. In the Intranet address field enter the local address of the designated web server.
7. Under Protocol you can choose whether forwarding is to apply to TCP or to UDP connections, or both.
- Under Map port you can specify a target port for remapping the port addressed from the Internet. If the port is to remain 80, as in our example, nothing has to be entered here.
- WAN address allows you to specify a WAN address for which the port mapping should apply. This is particularly relevant when more than one public IP address is available on a WAN connection.If you choose not to use this, leave this value set to 0.0.0.0. In our example we have only one public IP address, so the field remains unchanged.
8. Store your entries with OK.
9. Now you set up the port forwarding for the FTP server. Configure this entry as illustrated in the figure below.
- The LANCOM WEBconfig service works like any other Web server on port 80. If you activate a Web server in the LAN via the service table as described in the example, you can still access the LANCOM router's WEBconfig Interface from the Internet using port 8080 or using HTTPS protocol.
- It is also possible to modify the http port in the LANCOM device. | <urn:uuid:4b3c3327-9f70-44dc-b2c2-89b5c73b2319> | CC-MAIN-2022-40 | https://support.lancom-systems.com/knowledge/display/KBEN/Port+forwarding%3A+Setting+up+a+Web+and+FTP+server+behind+the+masked+connection+of+a+LANCOM+router | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335504.37/warc/CC-MAIN-20220930212504-20221001002504-00693.warc.gz | en | 0.867245 | 673 | 3.234375 | 3 |
Prolong laptop battery life with these tricks
Are you interested in knowing how you can prolong laptop battery life? It’s difficult to get any work done with your laptop notifying you that it’s running out of juice. And if you are not close to a power outlet, your laptop will soon be nothing more than a cold slab of metal and plastic. Here are some helpful tips to prolong laptop battery life.
Some truths about your laptop battery
Batteries in many modern devices are lithium-based — either lithium-ion or lithium-polymer — and users must take note of the following guidelines for proper battery maintenance:
- Leaving your battery completely drained will damage it.
- Batteries have limited lifespans. So no matter what you do, yours will age from the very first time you charge it. This is because as time passes, the ions will no longer be able to flow efficiently from the anode to the cathode, thereby reducing the battery’s capacity.
What else can degrade your battery
Besides being naturally prone to deterioration, your battery can degrade due to higher-than-normal voltages, which happens when you keep your battery fully charged at all times. Even though a modern laptop battery cannot be overcharged, doing so will stress and harm your battery.
Both extremely high temperatures (above 70°F) and low temperatures (32–41°F) can also reduce battery capacity and damage its components. The same goes for storing a battery for long periods of time, which can lead to the state of extreme discharge. Another factor is physical damage. Remember that batteries are made up of sensitive materials, and sustaining a shock from a fall or similar can damage them.
How to prolong laptop battery life
Now that you know some facts about your laptop battery, it’s time to learn how to delay its demise:
- Never leave your battery completely drained.
- Don’t expose your battery to extremely high or low temperatures.
- If possible, charge your battery at a lower voltage.
- If you need to use your laptop for a long period of time while plugged into a power source, it’s better to remove the battery. This is because a plugged-in laptop generates more heat that will damage your battery.
- When you need to store your battery for a few weeks, you should recharge your battery to 40% and remove it from your laptop for storage. | <urn:uuid:8dfb8d30-61c8-4968-a3f8-397aff34bc74> | CC-MAIN-2022-40 | https://1800officesolutions.com/prolong-laptop-battery-life-with-these-tricks/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337421.33/warc/CC-MAIN-20221003133425-20221003163425-00693.warc.gz | en | 0.922089 | 501 | 3.03125 | 3 |
WebRTC or Web Real-Time Communications is a free and open-source application that supports real-time communication for web browsers and mobile applications via application programming interfaces (APIs). It is standardized by the World Wide Web Consortium (W3C) and Internet Engineering Task Force (IETF), and is compatible with HTML, TCP/IP, and HTTP protocols.
WebRTC makes audio and video communication possible and allows direct peer-to-peer communication and teleconferencing without the need to install plug-ins or any third-party software. WebRTC consists of various interrelated APIs and protocols, and is supported by Google Chrome, Apple, Microsoft, Mozilla Firefox, and Opera.
The global WebRTC was valued at USD 1,669 million in 2018 and is expected to reach USD 21, 023 million by 2025, growing at a CAGR of 43.6% between 2019 and 2025. The webRTC market is anticipated to witness significant growth in the coming years due to the growing demand for collecting real-time information.
The demand for solutions providing real-time information to retailers and customers has been witnessing an upward trend due to the growing use of online platforms for shopping. WebRTC-powered retail websites enable retailers to help their customers in choosing the right product from any location. The global webRTC market will also grow due to its rising application in end-use sectors like IT, telecom, e-commerce, and others.
Similarly, the increasing adoption of IoT and growing demand for cloud telephony will create more opportunities for webRTC. On the other hand, rising incidences of cyber-attacks can restrain the market growth. Another factor that can inhibit the growth of the global WebRTC market is interoperability challenges.
North America is expected to hold a significant share in the global WebRTC market owing to the rising adoption of WebRTC by the IT and telecom industry of the region, especially of the US. The European market will also play an important role, as the technology is increasingly used in the retail sector. Another region that is projected to grow substantially is the Asia-Pacific region. China and India are going to lead the Asia-Pacific WebRTC market, mainly due to the rapidly growing telemedicine services.
The major players that are operating in the global WebRTC market are, Acano Ltd., Google Inc., Oracle Corporation, Avaya, Inc., Cafex Communications Inc., TokBox, Twilio, Citrix Systems Inc., Frozen Mountain, Genband US LLC, Dialogic Corporation, Quobis Networks, S.L, Sinch AB, and TeleStax, Inc. | <urn:uuid:eb5b377c-14fd-450a-b964-82642edc1b42> | CC-MAIN-2022-40 | https://www.alltheresearch.com/blog/global-webrtc-market | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337731.82/warc/CC-MAIN-20221006061224-20221006091224-00693.warc.gz | en | 0.915359 | 546 | 2.671875 | 3 |
Published On March 18, 2019Blockchain in finance is starting to take hold, with regulations on the horizon. Financial institutions will need secure, streamlined data management
Blockchain in finance is advancing as financial services providers and regulators look into the different ways cryptocurrencies will impact payments, value exchange and other elements of the financial landscape.
Blockchain employs a closed loop tracking system to protect against tampering or modification, using a “block,” which is a sequence of unique letters and numbers protected by public key encryption, with each party receiving a “golden copy” of the document containing the embedded block. New blocks — forming a chain — are added any time the document changes, with all parties receiving updated golden copies. Together, all blocks in a chain contain the complete transaction history. A blockchain ledger tracks all changes, and is distributed to all computers in the chain. All documents are required to have the same blockchain signature, as a protection against fraud.
Blockchain is the underlying technology used by Bitcoin and other cryptocurrencies, which today are being used to provide more seamless transfer of funds between parties in different countries, to exchange funds anonymously and to pay ransom to hackers who encrypt computers and won’t decrypt them until they’re paid. Hackers like cryptocurrencies because they can’t be traced in the same way as other forms of payment, though regulators are seeking methods to change this.
Some financial institutions see value in adding crypotocurrencies to their existing line of products and services. Firms may also see cryptocurrencies as helping financial services firms extend their businesses to the unbanked and underbanked, according to Banking Exchange.
Regulators in the U.S. and overseas are looking to develop oversight rules for blockchain in finance in order to protect consumers from fraud and ensure the stability of the payment mechanism. As they develop rules, regulators are looking at issues such as how blockchain funds should be reported, according to Mobile Payments Today. The reporting and data management of cryptocurrencies is at least as complex as data management for other types of financial transactions, if not more so.
Expect that any newly developed regulations for blockchain in finance to require the kinds of record keeping and data maintenance required for other types of financial transactions. Financial services providers will likely need to be able to provide accurate, comprehensive and secure records of cryptocurrency transactions upon request. That means working with a data management partner who not only has a deep understanding of blockchain and all of its implications, but also has secure and proven data management and record keeping capabilities. | <urn:uuid:86ae199a-22e6-4473-bc04-06eb16a97514> | CC-MAIN-2022-40 | https://www.ironmountain.com/blogs/2019/emergence-of-blockchain-in-finance-requires-secure-streamlined-data-management | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338213.55/warc/CC-MAIN-20221007143842-20221007173842-00693.warc.gz | en | 0.948372 | 510 | 2.515625 | 3 |
Move a zone and its contents, including resource records and options, to a different part of the DNS name space. For example, if you move the example.com zone under the org top-level domain this would create a zone called example.org.
To move a DNS zone:
- From the configuration drop-down menu, select a configuration.
- Under DNS Views, click a DNS view. The Zones tab for the view opens.
- Click the name of a zone. If you wish to move a sub zone, navigate to the zone you want to move by clicking on the sub zone names.
- Click the zone name and select Move.
Under Destination, enter the new location for the zone
in the Address Name field. The destination for the zone
must already exist within the Address Manager configuration.
For example, to move the zone example.com to example.org, type example.org in the Address Name field. The .org zone must already exist in the configuration.
- Click Yes. | <urn:uuid:1ae5ca28-e65c-4fe5-92e2-b20dcfda8d55> | CC-MAIN-2022-40 | https://docs.bluecatnetworks.com/r/Address-Manager-Administration-Guide/Moving-a-DNS-zone/9.2.0 | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335254.72/warc/CC-MAIN-20220928113848-20220928143848-00093.warc.gz | en | 0.77184 | 213 | 2.578125 | 3 |
Application security, sometimes referred to as "AppSec," is a collection of security measures applied at the app level to prevent data or code from being misused, stolen, or harmed. It’s a comprehensive approach used to address security issues during application development, design, and deployment—and to prevent security vulnerabilities that may lead to an attack.
Application security often includes a mix of security software and hardware devices to minimize risks and vulnerabilities. Solutions often include application delivery controllers (ADC), integrated web application firewalls (WAF), encrypted routers, and other application delivery tools.
Explore additional application security topics:
App security is critical because application-layer attacks—specifically SaaS and web app breaches—are the most common type of attack. Cloud-native applications frequently contain sensitive data and are accessed from multiple devices and networks, making comprehensive app security a vital component of cybersecurity strategies.
These days, applications are available from everywhere. They’re accessed by different networks connected to the internet. This wide availability, although very convenient, also increases your attack surface—and makes apps vulnerable to threats and data breaches. It’s not enough to secure the network. For applications to remain secure, protection must extend to the apps themselves.
See how the right application security and delivery solutions can help you continue business as usual—even when the unexpected happens.
Application security measures can be classified according to their environment. The three primary classifications are:
Cloud app security consists of the solutions, processes, and practices used to protect the sharing and exchange of data in collaborative cloud environments. Because cloud environments usually provide shared resources, it’s important to implement the principle of “least privilege.” That means making sure users access only what they’re authorized for and need to complete their tasks.
Common cloud application security processes include security testing and secure web gateways. It also involves securing the architecture. As more enterprises adopt hybrid and multi-cloud strategies, cloud app security needs to adapt to these environments. Cloud security architecture assesses the environment for application gateways, identity verification systems, and enterprise datacenter deployments.
While cloud app security involves securing the environment, web application security involves securing the applications themselves. Web apps are apps or services that users can access via an internet browser. Securing the applications is important for organizations that provide web services or host applications in the cloud because they must protect them from cybercriminal intrusions.
An example of web application security is the web application firewall. This solution acts as a filter, inspecting incoming data packets and blocking suspicious traffic.
Most applications are used on mobile devices. Because mobile devices transmit and receive information over the public internet, they’re vulnerable to attack. Organizations often use virtual private networks, access control, and other security measures to prevent unauthorized access to data. Encryption is another common method employed to provide an extra layer of security for mobile data.
Securing applications and their environments can be a challenge. Fortunately, applying best practices can improve an organization’s application security posture. A good framework to follow includes four steps:
For developers, application security starts by using secure code and secure development processes. Implementing DevSecOps (development, security, and operations) practices involves baking security controls in early and throughout the software development lifecycle (SDLC). Common procedures include automatically carrying out security testing on every piece of code before delivering it into production.
Developers should also be aware of potential threats and vulnerabilities, such as the ones provided by Open Web Application Security Project in the OWASP Top 10—a regularly-updated list of the most critical application security threats.
It is not enough, however, to identify security flaws during application development. DevOps professionals and IT security teams need to protect the entire application development process against common threat methods including phishing, malware, and SQL injection attacks.
At the enterprise level, several application security tools and automation strategies are available to secure applications. For instance, secure application delivery simplifies the process of applying consistent security policies across multi-cloud environments.
Another solution is to implement a web application firewall. This solution filters incoming traffic to applications to detect potential threats and intrusions. Next-generation web application firewalls employ artificial intelligence (AI) and machine learning (ML) capabilities to monitor app behavior and user interactions. These advanced technologies enable organizations to mitigate both known and unknown attacks. They usually provide recommendations for remediation and help complying with regulatory standards.
Securing access to digital workspaces is vital in enterprise environments. Since cloud applications can be accessed from anywhere and from any device, organizations need to ensure access security that doesn’t disrupt the employees’ experience. Implementing access control policies and a zero trust security approach may help achieve security without compromising the ease of use.
Citrix application security solutions provide a holistic approach to managing and maintaining a consistent security posture, in any environment, including cloud and hybrid. | <urn:uuid:0106cc73-02f8-4fd8-8343-2892f004368f> | CC-MAIN-2022-40 | https://www.citrix.com/fi-fi/solutions/app-delivery-and-security/what-is-application-security.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335254.72/warc/CC-MAIN-20220928113848-20220928143848-00093.warc.gz | en | 0.917483 | 1,013 | 3.28125 | 3 |
Simple Storage Service (S3)
Learning Center | Glossary | Simple Storage Service (S3)
What is S3?
S3 is short for Amazon Simple Storage Service or Amazon S3. It is a cloud service provided by AWS for secure, highly-available and redundant data storage. It is used by customers of all sizes and industries for a number of use cases, including:
- Backup and restore
- Disaster recovery
- Internet applications
- Data lakes
- Big data analytics
- Hybrid cloud storage
- Cloud-native application data storage
A web console, S3 Management Console, provides easy-to-use management features for organizing data and configuring finely-tuned access controls. Standardized protocols can also be used to upload and access Amazon S3.
Amazon S3’s storage units are objects that are organized into buckets. Buckets are used to organize files, like a folder. An infinite amount of data can be stored in buckets. There is no limit on the number of objects that can be uploaded and each object can contain up to 5 TB of data.
Buckets can be managed with the S3 Management Console, using the AWS SDK or with the Amazon S3 REST API. The HTTP GET interface and the BitTorrent protocol can be also be used to download objects. Items in a bucket can also be served as a BitTorrent feed to reduce bandwidth costs for downloads.
The location of Amazon S3 buckets is specified using the s3 protocol (s3:// Protocol). It also specifies the prefix to be used for reading or writing files in a bucket.
Permissions, revisions and other settings can be defined on a bucket level. Upload and download permissions can be granted to up to three kinds of users. Authentication protects data from unauthorized access.
When logging is enabled, the logs are stored in buckets and can be used for analyzing information, such as:
- Date and time of access to the requested content
- The protocol used (e.g., HTTP, FTP)
- HTTP status codes
- Turnaround time
- HTTP request message
These logs can be analyzed and managed with third-party tools. | <urn:uuid:511dd97b-ea50-4098-8dee-7ab661f78985> | CC-MAIN-2022-40 | https://aviatrix.com/learn-center/glossary/s3/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335365.63/warc/CC-MAIN-20220929194230-20220929224230-00093.warc.gz | en | 0.870219 | 453 | 3.03125 | 3 |
A Pew Internet survey report has found that older web users have adopted social networking at a faster rate than younger users.
The report is based on the findings of a daily tracking survey on Americans’ use of the Internet. Princeton Survey Research Associates International conducted telephone interviews among a sample of 2,252 adults, age 18 and older and found that between April 2009 and May 2010, social networking use among internet users ages 50-64 grew by 88 per cent, from 25 per cent to 47 per cent.
“Young adults continue to be the heaviest users of social media, but their growth pales in comparison with recent gains made by older users,” explained Mary Madden, Senior Research Specialist and author of the report.
“Email is still the primary way that older users maintain contact with friends, families and colleagues, but many older users now rely on social network platforms to help manage their daily communications.”
By comparison, social networking use among users ages 18-29 grew by just 13 per cent from 76 per cent to 86 per cent. Among adults ages 65 and older, 13 per cent log on to social networking sites on a typical day, compared with just 4 per cent who did so in 2009.
At the same time, the use of status update services like Twitter has also grown—particularly among those ages 50-64 according to the report. One in ten internet users ages 50 and older now say they use Twitter or another service to share updates about themselves or see updates about others. | <urn:uuid:f957a1c9-a14f-41a2-8a1e-cce1d76a34d4> | CC-MAIN-2022-40 | https://www.pcr-online.biz/2010/08/29/older-web-users-flock-to-social-media/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335365.63/warc/CC-MAIN-20220929194230-20220929224230-00093.warc.gz | en | 0.93223 | 305 | 2.546875 | 3 |
Researchers at the University of Illinois Urbana-Champaign (UIUC), Meta, and concrete supplier, Ozinga, have formed a partnership with the aim of discovering better low-carbon concrete formulas using artificial intelligence (AI).
Early-stage results have found that the AI-powered formulas reduce the carbon footprint of the concrete by 40 per cent while maintaining strength and durability. Meta has begun testing the formulas on multiple structures at the company’s DeKalb data centre in Illinois, namely the floor slabs of the guardhouse and the construction management team’s temporary offices.
Concrete is the most popular building material in the world, with between ten and 30 billion tonnes used for construction each year. But the price of that progress is a cost to the environment: Cement, an essential ingredient in concrete, is responsible for eight per cent of global anthropogenic greenhouse gas emissions.
“We designed new formulations that nearly halve the carbon requirements of concrete yet are just as strong or stronger than traditional formulations,” said Lav Varshney, an associate professor of electrical and computer engineering at UIUC. “Given the popularity of concrete, there is a global scale of potential applications.”
With its glue-like properties, cement has historically been combined with other ingredients, such as water, sand, and coarse aggregates, to make concrete. But the manufacture of cement causes enormous amounts of carbon emissions, in part because of the fuels needed to heat some of the ingredients to 1,400 degrees Celsius. In addition, one of the key ingredients is limestone (or calcium carbonate), which releases carbon dioxide during calcination in the manufacturing process.
To replace cement in the concrete mix, researchers had to identify a formula that would be as strong, durable, and workable as the standard one.
Varshney and Nishant Garg, an assistant professor of civil and environmental engineering, trained a model using the Concrete Compressive Strength data set, which is openly available from the UCI Machine Learning Repository. This database has 1,030 concrete formulas along with their validated attributes, including seven-day and 28-day compressive strength data (i.e., how the concrete gained strength seven days and 28 days after pouring). The embodied carbon footprint associated with the concrete formulas was derived using the Cement Sustainability Initiative’s Environmental Product Declaration (EPD) tool.
EPDs are a standardised way of accounting for the environmental impacts of a product or material, including carbon emissions over its life cycle.
Using the input data on concrete formulas along with their corresponding compressive strength and carbon footprint, the Al model was able to generate several promising new concrete mixes that replaced cement with other supplementary materials, such as fly ash and slag. The final recipe was tested and further refined by Ozinga – taking into account several factors including expected cold weather conditions and material availability – before it was poured at the Meta DeKalb data centre.
“A lot of researchers are using AI for predictive purposes, in that you give them certain recipes and they can predict the strength or some other characteristic,” said Garg, who specialises in the chemistry and characterisation of construction materials. “But our approach is unique in that we leverage the best available data and use the model to generate the potential recipes based on our needs. It is tremendously useful.”
The model may also be useful in designing concrete formulas for places where building materials may be less readily available – for example, for constructing cell phone tower foundations in remote rural regions. | <urn:uuid:57a873d2-dd31-454c-a57f-1968e948d439> | CC-MAIN-2022-40 | https://digitalinfranetwork.com/news/ai-developed-low-carbon-concrete-tested-at-new-data-centre-site/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337322.29/warc/CC-MAIN-20221002115028-20221002145028-00093.warc.gz | en | 0.938554 | 728 | 3.21875 | 3 |
A character string.
Returns TRUE if the first character of a string is lowercase.
ISLOWER() returns TRUE if the first character of the Character_String is lowercase; otherwise, it returns .F. (FALSE). This function is normally used to search fields where case-sensitivity has significant meaning. Use this function in conjunction with the SUBSTR()function to test the second and subsequent characters in a field.
? islower("celtics") = .T. (TRUE) ' if FIRSTNAME contains "Shirley" ? islower(FIRSTNAME) = .F. (FALSE) | <urn:uuid:4b1d7259-aa0f-425c-b8cf-1750acfe0ad7> | CC-MAIN-2022-40 | https://documentation.alphasoftware.com/documentation/pages/Ref/Api/Functions/Data%20Type/Type%20Checking%20Functions/ISLOWER%20Function.xml | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337322.29/warc/CC-MAIN-20221002115028-20221002145028-00093.warc.gz | en | 0.66771 | 154 | 3.3125 | 3 |
A pointer variable that contains the result of the operation. Result has the following properties:
- . HasError
.T. = indexes could not be created .F. = indexes were created
- . ErrorText
If .HasError is .T., then contains the error message.
- . Xbasic
Contains the Xbasic code if Show_Xbasic is .T.
The name of the table containing the indexes to be created.
A list of index definitions created by GET_INDEX_DEFINITIONS().
Logical. Optional. Default = .F. .T. = Show Xbasic code generated to create indexes.
Creates indexes for a table based on a CRLF definition string. Definition string is of format: tagname|OrderExpn|FilterExpn|Flags. Tip: The format string can be created using Get_Index_Definitions().
Creates indexes for a table based on a CR-LF delimited list created by GET_INDEX_DEFINITIONS().
The following example retrieves index definitions from a table, then (after they were manually deleted), recreates the indexes.
list = Get_Index_Definitions("index_entries") ? create_indexes("index_entries", list, .t.) = ErrorText = "" HasError = .F. Xbasic = t.index_create_begin("Entry","PADR(LEFT(ENTRY,40),40,\" \") + PADR(LEFT(SUBENTRY,40),40,\" \") + PADR(LEFT(TOPIC,40),40, \" \")","","") t.index_add("Index_Id","INDEX_ID","","U") t.index_create_end() | <urn:uuid:154f28c4-6fc0-4ad2-98a7-940c683da60d> | CC-MAIN-2022-40 | https://documentation.alphasoftware.com/documentation/pages/Ref/Api/Functions/Document%20Types/DBF/Index%20Functions/CREATE_INDEXES%20Function.xml | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337322.29/warc/CC-MAIN-20221002115028-20221002145028-00093.warc.gz | en | 0.672083 | 424 | 2.5625 | 3 |
What is a Rootkit?GRIDINSOFT TEAM
Rootkit, as you can understand from its name, is a program that grants you low-level control of the infected system. And “low-level” means not surface access but possible access from the deepest levels. While most computer viruses are launched just as the applications, some malware types require access on the driver level or even deeper.
It is important to mention how your PC executes the applications you launch. There are four hierarchical levels, called protection rings, that define the rights that programs have when executed by your CPU. Ones that are placed on the higher ring are not able to inflate the apps on lower rings. Ring 0 is given to the operating system's kernel, Ring 1 - to the hardware drivers, and Ring 2 - to the programs with low-level access permissions. Ring 3 is used by most third-party applications since they don’t need any deep permissions. Implementing malware to the Ring 2 means you have control over all user’s applications; on Ring 1 - almost over all things that happen on the computer.
A rootkit is a program or a pack of tools that allow the person who controls it remotely to access the infected system and control it as it wants. It will still be dangerous in solitary but can barely be used to earn money for the victims, as all other viruses do. You can vandalize the infected system, make it malfunction or even not work, but that will not bring you a penny. Until you manage to inject the other, profitable malware.
Rootkit in combination with other viruses
Rootkit malware is extremely useful when you need to give the other viruses the ability to integrate as deep as possible. Such permissions offer cybercriminals access to all disks and even the whole network. Sure, such a rough tool is rarely used for attacks on individuals. Attacking the single users with rootkit+other malware is like hunting rabbits on a tank. However, against corporations or other things that use computer networks and data centers, it is just what is needed.
Attacks on corporations are usually backed by spyware, ransomware, or both simultaneously. Rootkit acts like an aperitive for the main course - encrypted files and stolen data. Both may cost corporations millions of dollars, and in case of a confidential information leak you may also expect reputational losses. And all this spree is provided by one little thing that sits deeply in your system and holds the gates open. But how does rootkit work?
How does that work?
All operating systems among contemporary ones are not 100% invulnerable to malware attacks. Even the latest Windows and macOS have certain security flaws - they’re just not uncovered yet. Those which were reported are usually fixed in the nearest updates. Nonetheless, many companies do not keep an eye on the regular system and software updates. Some ask their employees to update their PCs manually, but they would rather make more memes about Windows updates than update them. Exactly, those breaches are used by rootkits to escalate privileges.
But breaches in operating systems are not so massive and easy to exploit. Most of the vulnerabilities the hackers use are located in third-party applications. MS Azure, Office, and Outlook, almost all Adobe products and apps from different other vendors are full of security flaws that allow the cybercriminals to perform their attacks. Rootkits they use are often created specifically for exploiting the vulnerabilities in certain apps used by the target company. These programs lists, their versions, and all other information that can be useful during the attack are collected during the OSINT operations.
Security breaches are usually the result of a poor or shortsighted software design. These flaws usually allow the user to execute the code with higher privileges or launch certain functions without showing any visible signs. Cyberburglars use such an ability to execute the malicious code with maximum efficiency. When you launch the MS Office macro or open the malicious link in the Adobe document, crooks get this ability and launch the viruses. You can find the list of all detected vulnerabilities on the CVE Mitre website.
Rootkit attack step-by-step
When the rootkit is delivered into a computer in the corporate network, it attempts to use one of the flaws to allow itself the execution on the deepest available level. Such rights then allow crooks to infect all other computers in the network and - more importantly - to brute force the domain controller. Access to the DC means controlling computers in the network and the server. In the companies where the network is not clustered that may mean paralyzing the whole office. For small companies, that usually means days of idle, offices of large companies may stop for weeks.
Cybercriminals do not create any new things. RDP attacks and bait emails are alpha and omega of all modern malware spreading campaigns, but even in further actions there is nothing new. After the successful launch of the rootkit, crooks then start brute-forcing the domain controller and other computers. Simultaneously, additional malware is downloaded and launched with escalated privileges. Computers in the network become infected, and when the DC falls, crooks launch the spyware and start downloading the data from servers.
The stolen data is the source of additional money for crooks. Personal information of the clients, data about the company's financial stats or production plans - all these things are highly-priced on the Darknet. Unknown doers are even more generous in their bids if there is information about the customers of a particular clinic, insurance company, or cybersecurity firm. However, some companies pay the additional ransom the crooks ask to avoid publishing this info. Sometimes, this "second" ransom reaches the sum of the initial one - for file decryption.
How are rootkits distributed?
As it was mentioned, they are spread through the classic ways of initial payload for the cyberattack. RDP technology is great, but was used too rarely to catch and fix all flaws before the pandemic. But even though all attempts of Microsoft to block these breaches, companies do not hurry to update their software. And hackers only say “thank you” for such a gift. Even more thanks they will give you for the employees who open all attachments to the email messages and allow the macros in Office docs.
When it comes to attacking the individuals (i.e., joking or vandalizing), rootkits are distributed as fix tools for a specific problem, system optimizers, or driver updaters, i.e., pseudo-effective software. Sometimes sly users add them to the repacked Windows version and spread on torrent trackers or elsewhere. Then, they are free to do whatever they want to users who install it on their PCs.
How to stop the rootkit?
It is pretty easy to counteract the thing when you know how that acts, and it is not something unstoppable and out-of-ordinary. Rootkits exploit old and well-known things, as it was shown earlier, so it is quite easy to make your system, or even the whole network, invulnerable to malware injection. I will give you multiple pieces of advice, from preventive to defensive.
Do not let it in
Your workers must be aware of email spamming and other forms of baiting/social engineering. Someone may listen to the information but pay zero attention to it and keep doing dangerous things. But the question there is “why is he/she still not fired?”, not “how to make he/she follow the security rules?”. It is the same as smoking in the room with oxygen containers or rolling the mercury balls on the office floor.
Set up a simple user account for employees
Since Windows Vista, this operating system can ask for the administrator’s password to run the program that asks for the admin’s privileges. People keep using admin’s accounts by inertia, but there is no need. Even though rootkit can still find a way to escalate privileges, it will need not just to escalate the rights of the application it exploits but also the rights of the user’s account. More steps → , more time → , more chances to be stopped.
Use anti-malware software
The ideal case for companies is to have a corporate solution for network protection. It will prevent the malware from spreading through the network and stop its attempts to exploit the security breaches. When you cannot afford it or don’t want to have such a massive thing running in the network, you can create proper malware protection using the anti-malware software. GridinSoft Anti-Malware will fit perfectly for that purpose - without any excessive functions and with proactive protection.
Update the software regularly
Rootkits will be useless when the system does not have any of the known vulnerabilities. Sure, there is still a possibility that crooks will use a 0-day exposure. But it is better to have only this risk than to have all flaws available for exploitation + zero-day breaches. Fortunately, nowadays, software vendors offer security patches almost every week. Just keep an eye on it and launch the installation of the update regularly.
Cluster the network
Having your whole network connected to a single Domain Controller is a perfect situation for hackers. Sure, such a network is much easier to administer, but the whole network gets damaged when it comes to cyberattacks. If malware - usually ransomware or spyware - finishes its activity, you cannot use any of the computers in the network. All files are ciphered, credentials are stolen, and ransom notes are everywhere. Your office will be out of the working process, resulting in significant losses. Meanwhile, when you cluster the network, only a part of it gets out of service in case of attack.
Isolate the servers
Sure, it is much more comfortable to manage the servers sitting at your workplace than to have a glorious walk through the server room, from one stand to another. But the latter is much safer in the case of cyberattacks. It is important to keep the balance - some prophylactics and new releases deployment may be done remotely. But most of the time, the server access must be restricted or enabled only after entering the password.
How are rootkits detected and removed?
Detection and removal of the rootkits is a thing that requires the use of anti-malware programs. It is impossible to see any signs of its presence before it is too late, and it has many facilities to make itself more sustainable in the attacked system. Their detection, however, also requires a fast reaction. Such a reaction time can be provided with a proactive protection function, allowing the program to check the running processes in the background. That’s why I will recommend you GridinSoft Anti-Malware once again. It has a function - based on the heuristic engine and neural network. This security tool can effectively counteract the rootkits in the earliest stages of malware infiltration. | <urn:uuid:3a41562e-4e9d-48f9-b668-77c66f9e4f5e> | CC-MAIN-2022-40 | https://gridinsoft.com/rootkit | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337322.29/warc/CC-MAIN-20221002115028-20221002145028-00093.warc.gz | en | 0.936252 | 2,245 | 2.546875 | 3 |
With the U.S. economy has been on a downward trend, it seems that there is no end in sight when it comes to the question “How much money is in the world?” In fact, there seems to be a constant search for the elusive answer to this question. This is because of the fact that so many different things have been happening in the real world and the economy has taken off in such a rapid way. The economy, which is a very important thing in everyone’s life, is now in a state of uncertainty as the U.S. government has been making its plans in order to deal with the current economic situation.
If we were to make a guess about how much money is in the world, then it would almost be impossible to predict accurately. It can depend on several different factors and some of them are not even thought of as possibilities. Many central banks have been able to boost the money supply in the past by slashing interest rates.
Gold is by far the largest contributor to the worldwide currency in terms of dollars. Cryptocurrencies can stop central banks from changing the currency supply significantly. If governments decided to change the amount of money they print and make it less expensive, it is possible that many people will lose money and have a hard time making their payments.
One of the biggest factors is that there have been many current stock market trends that have been in place since the inception of the stock market. The number of companies on the market have been increasing over the years and they seem to be getting bigger all of the time. However, it is impossible to know how much money is in the world based on the current numbers.
Gold has also been an important factor in the money cycle for a very long time. Gold has been used for money for many years and the governments that issued gold coins did so for a reason. Since gold is one of the more valuable assets available, it is possible that the governments that issued these coins could have kept gold reserves for very long periods of time. If they did, it would mean that the amount of money in the world would be relatively small compared to other factors.
Interest rates have been increasing for a long time. Many people have said that if this rate goes up, it will create a large amount of money in the world. Since interest rates are tied to an overall economic system, it will affect the amount of money in the economy. When the rates go down, it will lead to a decrease in the amount of money in the world.
The value of money is a factor that has been used for many years to determine what countries have the best currency in the world. If the currency of a country is higher than the one of another, the country that has the better currency is said to have the best economic system. When the two systems match, then there is a good chance that people will have a more stable economic system and a smaller amount of money in the world.
Money will always be involved with international trade and the process of moving goods from one nation to another. Some nations will always have a stronger currency and others will have weaker ones. When there is an increase in the value of the currency, it will mean that the nation is doing well and that its goods will be more easily traded and sold. It can even mean that it has the ability to buy goods that the other country does not have.
Another way that currency and the economic stability of a country can affect the amount of money that people in that nation have is when the value of the currency is down. Many times the governments will raise the price of a little bit and hope that it will lead to an increase in the value of the money. If the value of the currency is down, then the nation’s government may be suffering.
Money is something that everyone is able to use on a day-to-day basis. Whether it is used in a good or bad way is up to each person and their decisions. There is a possibility that the amount of money that people have in the world could be dependent on the actions of the governments around the world. | <urn:uuid:2b5cbe58-88ab-4419-a6e3-c34d769cf2ee> | CC-MAIN-2022-40 | https://economystandard.com/how-much-money-is-in-the-world/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337432.78/warc/CC-MAIN-20221003200326-20221003230326-00093.warc.gz | en | 0.980389 | 838 | 3.109375 | 3 |
An entity for which identity information is stored and managed by a system.
Jane Smith is an employee of BigCorp.
The representation of a principal in a system.
A user may be represented in a system by storing their attributes in a database such as first name, last name, email address, etc. E.g. BigCorp's database stores Jane Smith's name, email, employee Id, etc.
The process of establishing that identity information associated with a particular entity is correct.
HR has verified the evidence that proves the user in the HR database with the name Jane Smith and personal email address of j***@gm**l.com is a person born on MM/DD/YYYY, with a social security number of XXX-XX-XXXX, having a California Driver's License number: XXXXXX and a physical address of 123 Main St. Somewhere, CA USA.
The act of securely associating a principal's identity to an app, a browser or a device upon request by the principal. The user agents could include APIs that are used by one system to communicate with another.
The user email@example.com is now associated with the web browser session as identified by a cookie.
In order to prevent impersonation of one principal by another, this requires the principal to prove their identity using a combination of mechanisms such as passwords, SMS-based verification, biometric proofs, security keys containing one-time-passwords or cryptographic private keys.
User firstname.lastname@example.org is authenticated using a password and a Yubikey hardware security token.
A central storage of identities in an organization. This is typically organized as a hierarchy, where each identity is uniquely identified by a path from the root of the hierarchy to the specific identity.
Jane Smith's directory entry in BigCorp's “Active Directory” is identified by “CN=JSmith,OU=Support,OU=Users,DC=BigCorp,DC=com”
This would generally mean that Jane Smith has an entry in the directory as a user in the support organization of BigCorp.
A hierarchical unit of directory organization. Principals (e.g. users or computers) belong in exactly one OU, but OUs may be nested within other OUs. An OU may be used as a principal in many situations such as applying policies.
As seen in the previous example, BigCorp's support employees are all stored in the OU named “support”. The “support” OU is stored in the OU named “Users”.
Users in a directory that may or may not belong in the same OU are grouped together in a “free form” group. In the context of authorization, a group is referred to as a role. A group may be used as a principal in many situations such as applying policies.
Jane Smith is a member of the group “Tiger Team”, along with select members of engineering, product management and the leadership team.
The privilege granted to a principal to access specific resources in specific ways.
Jane Smith has authorization to update a discount percentage in an invoice record of the customer SuperCo. In Role-based access control (see below), the authorization could be something like: Tiger team has authorization to update customer billing information.
A mechanism, typically based on open, standards-based protocols, in which an independent provider (the identity provider) can authenticate a user in order for them to be authenticated at another system (the service provider or relying party). Also known as federated authentication
BigCorp uses SAML (popular federated identity protocol) to login employees such as Jane to their 401(k) provider using their employee authentication.
Or Jane uses an expenses app on their mobile phone to authenticate themselves using OpenID Connect (another popular federated identity protocol) to authenticate as a BigCorp employee and enter their expenses.
The authenticating service or software in a federated identity system.
Okta, Microsoft Azure AD, Sign-in with Google.
In the context of federated identity, this is the system that consumes the authentication token from the identity provider so that the principal may be authenticated without separately authenticating them at the service provider. A service provider is also known as a Relying Party.
Cloud SaaS platforms such as Salesforce, ServiceNow, Workday can be configured to rely on identity providers for user authentication.
Security Assertion Markup Language. A federated identity protocol that is popular with enterprise services. It is used mainly for single sign-on between independent web-based services, such as SaaS providers. It also has the capability to provision new accounts at the service provider and to logout the user from multiple websites where they are signed in using SAML. However, the provisioning and single logout features are not commonly used.
SAML Identity providers include Microsoft Azure AD, Okta and Google
SAML service providers include Salesforce, ServiceNow and Workday.
A federated identity protocol that leverages the Open Authorization 2.0 protocol (OAuth 2.0) to implement single sign-on and optionally, authorization. It is more often used where the interaction is client application based rather than web-based, but many websites also use OIDC as a federated identity sign-in mechanism.
OIDC Identity providers include: Microsoft Azure AD, Okta, and Google
OIDC relying parties include: Gmail mobile app, Uber mobile app, Dropbox and Expedia.
The result of a federated authentication protocol exchange, which conveys the identity of a user from an identity provider to a relying party (AKA service provider.)
A SAML token is also known as a SAML Assertion. An OIDC token is also known as an ID token.
An action taken by an authenticated principal to interact with objects in a system. Objects with respect to access could be data, applications, computing resources (virtual machines, storage buckets, etc...) or physical resources, e.g. rooms, buildings, etc...
Interactions are dependent on the type of object. E.g. a principal may create, read, modify or delete data. They may start or stop virtual machines, etc.
A cryptographically un-spoofable data item that represents specific access granted to the possessor of the token.
Sometimes, such a token is just a long, unguessable number. Sometimes it is a digitally signed JSON web token.
An open standard protocol of communication between independent systems that results in an access token being received by the requesting system, which enables them specific access capabilities on behalf of a principal.
The protocol defines scope codes (called just “scopes” in the protocol) understood by the “resource server” (i.e. the system of which the access is requested) that are used limit what the requester has access to
The protocol also defines an optional mechanism for a user principal to provide consent to the specific scopes being requested.
Consumer example: A ride-sharing mobile application uses OAuth to obtain a user's profile picture from Facebook. Facebook requires consent from the user in order to release this information
Enterprise example: An API client from a security company may use OAuth to access an API provided by a smart-devices manufacturer to assign a specific device to a customer user of that company.
The authorization server (AS) in an OAuth protocol issues access tokens after authenticating and authorizing the request.
A service or server that accepts an OAuth token, verifies the scopes and grants access to the possessor of the access token to resources within its service.
Google Drive service
A system to manage authorization that enable an organization to specify which principals have access to which objects.
A software component that makes an authorization decision based on policy
A software component that provides information required by the PDP to make an authorization decision
A software component that enforces an authorization decision made by a PDP
A software component that enables users to define and manages policies
An authorization system wherein access control rules are different for smaller collections of objects rather than broad categories.
An authorization system wherein the principals that are either allowed or denied access to a specific object or collection of objects are specified in a list. If the members of the list are allowed access, then the list is called an “allow list” (formerly known as a “whitelist”). If the members of the list are denied access, then the list is called the “deny list” (formerly known as a “blacklist”). The access may be a specific type of access (e.g. create or delete), or a generic access, which could mean any operation on the target object.
“email@example.com, firstname.lastname@example.org and email@example.com” can update firewall rules.
A system to manage shared credentials to highly sensitive resources. Users are often required to use secure two-factor authentication to login to the PAM system. The PAM system then uses the shared credential to enable the user to access the critical resource, such as a production server.
An authorization system wherein each principal is assigned a role, and access control lists include such roles rather than the principals directly within them.
Managers have access to historical employee performance report files.
An authorization system wherein the decision to grant access to a principal to a specific object is determined at run-time, based on factors that may change more rapidly than the content of an access control list.
Environmental properties that affect a user's access to an organization's resources.
This may include the user's authentication mechanism (e.g. password or two-factor), IP address, time of the day, whether the device they are accessing from is secure or not.
A dynamic authorization system wherein policy rules can be specified regarding access to objects or collections of objects. The rules can include roles, but also more dynamic attributes such as the access posture. In some cases, principals and objects / collections are assigned text attributes, and policies can also include such text attributes in determining whether the user has access to a specific object or not.
Example policy: A user accessing from a private location (as indicated by their IP address) may access files that have the attribute “PII” (personally identifiable information) if the user's role is in the access control list of the file being accessed.
A dynamic authorization system wherein policy rules may include attributes as in ABAC, and specific computations that can determine a business justification for a user to have access to a certain object.
Example policy: A user with the right access posture and role may access a customer's data if they are assigned an open case which includes the customer as an affected entity and the user's jurisdiction is the same as the customer's jurisdiction.
The OpenID standard defines an ID token with specific fields in it.
In the OAuth protocol, an access token may be renewed by presenting the refresh token to the authorization server. | <urn:uuid:c4c74104-0e42-4232-973c-84a16eddb5ef> | CC-MAIN-2022-40 | https://sgnl.ai/glossary/index.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337432.78/warc/CC-MAIN-20221003200326-20221003230326-00093.warc.gz | en | 0.926611 | 2,296 | 2.6875 | 3 |
I want to take a break from my Small and Medium business series to discuss a topic that I think still causes a lot of confusion. Let’s talk about cyber risk.
Cyber risk is a hot topic these days, but, in my opinion, it is often misunderstood or conflated with other cybersecurity and privacy concerns. I’ve had conversations where all involved were using the phrase ‘cyber risk’ differently, and, despite hours of talking, the results were still muddled, and the participants emerged confused.
In this blog, I’d like to offer a common definition of cyber risk and outline a few models of cyber risk that I’ve found handy in my time working in the Cyber Risk Management field.
I’d like to start this post with a simple definition and a simple method for measuring cyber risk and move to more advanced, and hopefully more precise, methods of measuring cyber risk in future posts. I’d also like to talk about what we can do with these measurements of cyber risk once we have found them in order to achieve rudimentary cyber risk management.
Related: Creating A Third-Party Cyber Risk Management Program – Where To Begin
The Basic Definition of Cyber Risk
NIST, ISO, AICPA, and DHS are among the multiple organizations that have offered a definition of cyber risk. While the multiple definitions of cyber risk all differ to a lesser or greater extent, a few key elements remain the same. Let’s examine these constants to get a little better understanding of what the absolute broadest understanding of ‘Cyber Risk’ is. Let’s break it down into three ideas:
- Let’s acknowledge that a bad thing can happen to our cyber assets.
- However, just because some bad things can happen, doesn’t mean that they will happen.
- But, if a bad thing does occur, it will do some amount of damage.
Cyber risk is, therefore, a prediction that is a combination of how frequently we can expect a bad thing to happen, and how bad it can be. Obviously, this is a very simple definition, but I think it is one that is pretty universal.
The concept of Cyber risk can be pretty handy for organizing business responses to the bad things that can happen.
Cyber Risk – Basic Qualitative Measurement Model
Qualitative Cyber Risk Measurement is a way of measuring cyber risk without using numbers. We could use this if it is not very important to be precise, or we don’t have specific numbers about the frequency of an event or the negative impact the event could have.
In our qualitative risk analysis, we will plot the probability that an event occurs and the negative impact of an event along two ordinal axis. Let’s use the ordinal series Low, Moderate, and High to represent the probability and impact of a bad event. Using this chart, we can plot the following events:
- An event with a low likelihood of occurrence and low impact
- An event with a low likelihood of occurrence and high impact
- An event with a high likelihood of occurrence and low impact
- An event with a moderate likelihood of occurrence and moderate impact
- An event with a high likelihood of occurrence and high impact
Qualitative cyber risk measurement is among the easiest methods of working with cyber risk in your organization, but because there are no numbers attached, it may be less meaningful than other methods, and less precise.
As we can see, event 1 (low/low) poses a fairly low risk as expected, and event 5 (high/high) poses a high risk as expected. However, in this qualitative model, the combination of a high/low event or a low/high event equates roughly to the same level of risk as a medium/medium event.
Please, tune in for the next iteration in the Cyber Risk Series where we will discuss a more advanced and precise model of measuring risk, as well as some interesting risk management data points to look for in your organization. | <urn:uuid:b870e0c6-4003-4083-ad58-634d1969b77f> | CC-MAIN-2022-40 | https://www.cybergrx.com/resources/research-and-insights/blog/what-is-cyber-risk-how-can-we-measure-cyber-risk | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337432.78/warc/CC-MAIN-20221003200326-20221003230326-00093.warc.gz | en | 0.951482 | 832 | 2.53125 | 3 |
In this week’s video update, Damson’s Charlotte Moore will be talking to us about the role of Labels in organising data in Drive.
This video will explain what labels are, how to use them, and how they can help your business stay organised and protected against data loss.
So, if you’re interested in learning more about Labels and how they can streamline the way your Google Drive is organised, stay tuned!
What are Labels?
Labels in Google Drive are a form of metadata that you can apply to documents, sheets, and slides. They can be named and given a colour.
How are Labels used in Google Drive?
Google drive label makes finding and organising your files easier. You can add a label for a project, a client, or a level of security. Once you have labeled your file you can filter your documents by label, cutting down on the time taken to find specific documents.
Labels can be added to each file by the end users that are granted permission to edit it.
You can add multiple labels on each file to add layers of detail to a project such as its priority or the action needed on that file. For example, you could use a label to mark documents that need signatures and are due before a certain date.
What are the Different Types of Labels used in Google Drive?
There are two types of labels available currently through Google Drive:, standard and badged. Standard labels work the same as badged labels but are not shown prominently beside the name of the file. Badged labels show as a coloured badge with the name of the label beside the title of the document. These labels are best used for very pertinent information that should be drawn to your attention when working on the file. This is good for sensitivity labels which show how public that information is.
How are Labels Useful to your Organisation?
Labels help your organisation cut down on confusion and time taken to find files and documents. They can add layers of metadata to your work which makes searching for information by project, client, or priority simple. Labels can also help with data loss prevention in your organisation. You can set up a red, amber, green labelling system for how private your documents are. When used in badged labels this level of sensitivity will be featured by the title to raise awareness among everyone in your organisation. This will not prevent the documents, sheets, or slides from being shared but will remind the person working on them of their sensitive nature to reduce the likelihood of data loss.
Please contact firstname.lastname@example.org if you would like Google Workspace training for your organisation or if you have any questions regarding these tools.
As always, we want to hear your thoughts- is Google Drive something you or your organisation find helpful in the day-to-day running of your business?
As a longstanding member of the Google Cloud Partner Program, Damson Cloud specialises in bringing people and ideas together through new ways of working. We champion the very best practices in remote working and change management, helping companies and their teams collaborate productively from anywhere in the world. To find out more about our services, check out our library of tutorial videos or our blog. | <urn:uuid:0709ee72-4f64-4893-bfda-83f350614239> | CC-MAIN-2022-40 | https://damsoncloud.com/how-to-use-labels-in-google-drive/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338280.51/warc/CC-MAIN-20221007210452-20221008000452-00093.warc.gz | en | 0.926608 | 662 | 3.015625 | 3 |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.